Test Report: KVM_Linux_crio 19696

60137f5eb61dd17472aeb1c9d9b63bd7ae7f04e6:2024-09-24:36347

Failed tests (31/316)

Order  Failed test  Duration (s)
33 TestAddons/parallel/Registry 74.56
34 TestAddons/parallel/Ingress 155.39
36 TestAddons/parallel/MetricsServer 349.62
163 TestMultiControlPlane/serial/StopSecondaryNode 141.52
164 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 5.58
165 TestMultiControlPlane/serial/RestartSecondaryNode 6.39
167 TestMultiControlPlane/serial/RestartClusterKeepsNodes 368.76
170 TestMultiControlPlane/serial/StopCluster 141.71
230 TestMultiNode/serial/RestartKeepsNodes 327.76
232 TestMultiNode/serial/StopMultiNode 144.6
239 TestPreload 184.21
247 TestKubernetesUpgrade 392.15
273 TestNoKubernetes/serial/StartNoArgs 29.36
288 TestStartStop/group/old-k8s-version/serial/FirstStart 272.4
296 TestStartStop/group/no-preload/serial/Stop 139.07
301 TestStartStop/group/embed-certs/serial/Stop 139.19
304 TestStartStop/group/default-k8s-diff-port/serial/Stop 138.98
305 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
306 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
308 TestStartStop/group/old-k8s-version/serial/DeployApp 0.48
309 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 88.99
311 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
315 TestStartStop/group/old-k8s-version/serial/SecondStart 726.53
316 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.38
317 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.39
318 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.38
319 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.59
320 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 511.58
321 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 415.8
322 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 302.15
323 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 168.18
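
For local triage, any entry in this table can be re-run on its own with go test's -run filter against the same driver/runtime combination; a minimal sketch follows. The go test flags (-run, -v, -timeout, -args) are standard, but the minikube-specific -minikube-start-args option and the test/integration path are assumptions about this repository's integration harness and should be confirmed against main_test.go before use.

  # Hedged sketch: re-run one failed case from the table above in isolation.
  # -minikube-start-args is assumed to be the harness flag for driver/runtime
  # selection; verify in test/integration before relying on it.
  go test ./test/integration/... -v -timeout 60m \
    -run 'TestAddons/parallel/Registry' \
    -args -minikube-start-args='--driver=kvm2 --container-runtime=crio'
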
TestAddons/parallel/Registry (74.56s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 3.839777ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-h5ntb" [67fc5fdd-03ae-44c9-8e43-0042bd142349] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.017944352s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-dc579" [76bec57d-6868-4098-a291-8c38dda98afc] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003797134s
addons_test.go:338: (dbg) Run:  kubectl --context addons-823099 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-823099 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-823099 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.087382563s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-823099 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run:  out/minikube-linux-amd64 -p addons-823099 ip
2024/09/23 23:49:57 [DEBUG] GET http://192.168.39.29:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 -p addons-823099 addons disable registry --alsologtostderr -v=1
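
The failing step above is the in-cluster wget against the registry Service, which timed out after about a minute; the test then probes the registry directly via the node IP on port 5000 (the [DEBUG] GET line above). A minimal sketch for repeating both checks by hand against this profile follows; the commands mirror the ones in the log, and treating the node-IP request as a control that separates in-cluster Service/DNS routing from the registry itself is an interpretation, not something the test asserts.

  # In-cluster check, exactly as the test runs it (profile addons-823099 is
  # specific to this report; expect "HTTP/1.1 200" headers on success).
  kubectl --context addons-823099 run --rm registry-test --restart=Never \
    --image=gcr.io/k8s-minikube/busybox -it -- \
    sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

  # Host-side check against the node IP that the test queried afterwards.
  curl -i http://192.168.39.29:5000
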
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-823099 -n addons-823099
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-823099 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-823099 logs -n 25: (1.632783888s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-098425 | jenkins | v1.34.0 | 23 Sep 24 23:37 UTC |                     |
	|         | -p download-only-098425                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 23 Sep 24 23:38 UTC | 23 Sep 24 23:38 UTC |
	| delete  | -p download-only-098425                                                                     | download-only-098425 | jenkins | v1.34.0 | 23 Sep 24 23:38 UTC | 23 Sep 24 23:38 UTC |
	| start   | -o=json --download-only                                                                     | download-only-446089 | jenkins | v1.34.0 | 23 Sep 24 23:38 UTC |                     |
	|         | -p download-only-446089                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 23 Sep 24 23:38 UTC | 23 Sep 24 23:38 UTC |
	| delete  | -p download-only-446089                                                                     | download-only-446089 | jenkins | v1.34.0 | 23 Sep 24 23:38 UTC | 23 Sep 24 23:38 UTC |
	| delete  | -p download-only-098425                                                                     | download-only-098425 | jenkins | v1.34.0 | 23 Sep 24 23:38 UTC | 23 Sep 24 23:38 UTC |
	| delete  | -p download-only-446089                                                                     | download-only-446089 | jenkins | v1.34.0 | 23 Sep 24 23:38 UTC | 23 Sep 24 23:38 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-013301 | jenkins | v1.34.0 | 23 Sep 24 23:38 UTC |                     |
	|         | binary-mirror-013301                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:39559                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-013301                                                                     | binary-mirror-013301 | jenkins | v1.34.0 | 23 Sep 24 23:38 UTC | 23 Sep 24 23:38 UTC |
	| addons  | disable dashboard -p                                                                        | addons-823099        | jenkins | v1.34.0 | 23 Sep 24 23:38 UTC |                     |
	|         | addons-823099                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-823099        | jenkins | v1.34.0 | 23 Sep 24 23:38 UTC |                     |
	|         | addons-823099                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-823099 --wait=true                                                                | addons-823099        | jenkins | v1.34.0 | 23 Sep 24 23:38 UTC | 23 Sep 24 23:40 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-823099        | jenkins | v1.34.0 | 23 Sep 24 23:48 UTC | 23 Sep 24 23:48 UTC |
	|         | -p addons-823099                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-823099        | jenkins | v1.34.0 | 23 Sep 24 23:48 UTC | 23 Sep 24 23:48 UTC |
	|         | addons-823099                                                                               |                      |         |         |                     |                     |
	| addons  | addons-823099 addons disable                                                                | addons-823099        | jenkins | v1.34.0 | 23 Sep 24 23:48 UTC | 23 Sep 24 23:49 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-823099 addons disable                                                                | addons-823099        | jenkins | v1.34.0 | 23 Sep 24 23:49 UTC | 23 Sep 24 23:49 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-823099        | jenkins | v1.34.0 | 23 Sep 24 23:49 UTC | 23 Sep 24 23:49 UTC |
	|         | -p addons-823099                                                                            |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-823099        | jenkins | v1.34.0 | 23 Sep 24 23:49 UTC | 23 Sep 24 23:49 UTC |
	|         | addons-823099                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-823099 ssh cat                                                                       | addons-823099        | jenkins | v1.34.0 | 23 Sep 24 23:49 UTC | 23 Sep 24 23:49 UTC |
	|         | /opt/local-path-provisioner/pvc-eab7f679-3b16-4b54-94e5-e626a1dcbb7e_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-823099 addons disable                                                                | addons-823099        | jenkins | v1.34.0 | 23 Sep 24 23:49 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-823099 ip                                                                            | addons-823099        | jenkins | v1.34.0 | 23 Sep 24 23:49 UTC | 23 Sep 24 23:49 UTC |
	| addons  | addons-823099 addons disable                                                                | addons-823099        | jenkins | v1.34.0 | 23 Sep 24 23:49 UTC | 23 Sep 24 23:49 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 23:38:22
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 23:38:22.858727   15521 out.go:345] Setting OutFile to fd 1 ...
	I0923 23:38:22.858952   15521 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 23:38:22.858959   15521 out.go:358] Setting ErrFile to fd 2...
	I0923 23:38:22.858964   15521 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 23:38:22.859165   15521 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7623/.minikube/bin
	I0923 23:38:22.859782   15521 out.go:352] Setting JSON to false
	I0923 23:38:22.860641   15521 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1247,"bootTime":1727133456,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 23:38:22.860727   15521 start.go:139] virtualization: kvm guest
	I0923 23:38:22.862749   15521 out.go:177] * [addons-823099] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 23:38:22.863989   15521 out.go:177]   - MINIKUBE_LOCATION=19696
	I0923 23:38:22.863991   15521 notify.go:220] Checking for updates...
	I0923 23:38:22.865162   15521 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 23:38:22.866358   15521 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0923 23:38:22.867535   15521 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7623/.minikube
	I0923 23:38:22.868620   15521 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 23:38:22.869743   15521 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 23:38:22.870899   15521 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 23:38:22.903588   15521 out.go:177] * Using the kvm2 driver based on user configuration
	I0923 23:38:22.904660   15521 start.go:297] selected driver: kvm2
	I0923 23:38:22.904673   15521 start.go:901] validating driver "kvm2" against <nil>
	I0923 23:38:22.904687   15521 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 23:38:22.905400   15521 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 23:38:22.905500   15521 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19696-7623/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0923 23:38:22.920929   15521 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0923 23:38:22.920979   15521 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 23:38:22.921207   15521 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 23:38:22.921237   15521 cni.go:84] Creating CNI manager for ""
	I0923 23:38:22.921285   15521 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0923 23:38:22.921293   15521 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 23:38:22.921344   15521 start.go:340] cluster config:
	{Name:addons-823099 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-823099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 23:38:22.921436   15521 iso.go:125] acquiring lock: {Name:mk8a983e49e920eac32e0f79c34143c3d478115e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 23:38:22.923320   15521 out.go:177] * Starting "addons-823099" primary control-plane node in "addons-823099" cluster
	I0923 23:38:22.925095   15521 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 23:38:22.925153   15521 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0923 23:38:22.925164   15521 cache.go:56] Caching tarball of preloaded images
	I0923 23:38:22.925267   15521 preload.go:172] Found /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0923 23:38:22.925281   15521 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0923 23:38:22.925621   15521 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/config.json ...
	I0923 23:38:22.925656   15521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/config.json: {Name:mk1d938d4754f5dff88f0edaafe7f2a9698c52bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:22.925841   15521 start.go:360] acquireMachinesLock for addons-823099: {Name:mkdd0eb053efc5a47fd01c8410d7f603dccb8d0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 23:38:22.925907   15521 start.go:364] duration metric: took 50.085µs to acquireMachinesLock for "addons-823099"
	I0923 23:38:22.926043   15521 start.go:93] Provisioning new machine with config: &{Name:addons-823099 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:addons-823099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 23:38:22.926135   15521 start.go:125] createHost starting for "" (driver="kvm2")
	I0923 23:38:22.928519   15521 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0923 23:38:22.928694   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:38:22.928738   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:38:22.943674   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41089
	I0923 23:38:22.944239   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:38:22.944884   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:38:22.944906   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:38:22.945372   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:38:22.945633   15521 main.go:141] libmachine: (addons-823099) Calling .GetMachineName
	I0923 23:38:22.945846   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:38:22.946076   15521 start.go:159] libmachine.API.Create for "addons-823099" (driver="kvm2")
	I0923 23:38:22.946111   15521 client.go:168] LocalClient.Create starting
	I0923 23:38:22.946149   15521 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem
	I0923 23:38:23.071878   15521 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem
	I0923 23:38:23.150247   15521 main.go:141] libmachine: Running pre-create checks...
	I0923 23:38:23.150273   15521 main.go:141] libmachine: (addons-823099) Calling .PreCreateCheck
	I0923 23:38:23.150796   15521 main.go:141] libmachine: (addons-823099) Calling .GetConfigRaw
	I0923 23:38:23.151207   15521 main.go:141] libmachine: Creating machine...
	I0923 23:38:23.151222   15521 main.go:141] libmachine: (addons-823099) Calling .Create
	I0923 23:38:23.151379   15521 main.go:141] libmachine: (addons-823099) Creating KVM machine...
	I0923 23:38:23.152659   15521 main.go:141] libmachine: (addons-823099) DBG | found existing default KVM network
	I0923 23:38:23.153379   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:23.153219   15543 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ba0}
	I0923 23:38:23.153400   15521 main.go:141] libmachine: (addons-823099) DBG | created network xml: 
	I0923 23:38:23.153412   15521 main.go:141] libmachine: (addons-823099) DBG | <network>
	I0923 23:38:23.153420   15521 main.go:141] libmachine: (addons-823099) DBG |   <name>mk-addons-823099</name>
	I0923 23:38:23.153428   15521 main.go:141] libmachine: (addons-823099) DBG |   <dns enable='no'/>
	I0923 23:38:23.153434   15521 main.go:141] libmachine: (addons-823099) DBG |   
	I0923 23:38:23.153445   15521 main.go:141] libmachine: (addons-823099) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0923 23:38:23.153455   15521 main.go:141] libmachine: (addons-823099) DBG |     <dhcp>
	I0923 23:38:23.153464   15521 main.go:141] libmachine: (addons-823099) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0923 23:38:23.153470   15521 main.go:141] libmachine: (addons-823099) DBG |     </dhcp>
	I0923 23:38:23.153485   15521 main.go:141] libmachine: (addons-823099) DBG |   </ip>
	I0923 23:38:23.153497   15521 main.go:141] libmachine: (addons-823099) DBG |   
	I0923 23:38:23.153527   15521 main.go:141] libmachine: (addons-823099) DBG | </network>
	I0923 23:38:23.153541   15521 main.go:141] libmachine: (addons-823099) DBG | 
	I0923 23:38:23.159364   15521 main.go:141] libmachine: (addons-823099) DBG | trying to create private KVM network mk-addons-823099 192.168.39.0/24...
	I0923 23:38:23.227848   15521 main.go:141] libmachine: (addons-823099) Setting up store path in /home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099 ...
	I0923 23:38:23.227898   15521 main.go:141] libmachine: (addons-823099) Building disk image from file:///home/jenkins/minikube-integration/19696-7623/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0923 23:38:23.227909   15521 main.go:141] libmachine: (addons-823099) DBG | private KVM network mk-addons-823099 192.168.39.0/24 created
	I0923 23:38:23.227930   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:23.227792   15543 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19696-7623/.minikube
	I0923 23:38:23.227962   15521 main.go:141] libmachine: (addons-823099) Downloading /home/jenkins/minikube-integration/19696-7623/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19696-7623/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0923 23:38:23.481605   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:23.481476   15543 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa...
	I0923 23:38:23.632238   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:23.632114   15543 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/addons-823099.rawdisk...
	I0923 23:38:23.632260   15521 main.go:141] libmachine: (addons-823099) DBG | Writing magic tar header
	I0923 23:38:23.632269   15521 main.go:141] libmachine: (addons-823099) DBG | Writing SSH key tar header
	I0923 23:38:23.632282   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:23.632226   15543 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099 ...
	I0923 23:38:23.632439   15521 main.go:141] libmachine: (addons-823099) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099
	I0923 23:38:23.632473   15521 main.go:141] libmachine: (addons-823099) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099 (perms=drwx------)
	I0923 23:38:23.632484   15521 main.go:141] libmachine: (addons-823099) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623/.minikube/machines
	I0923 23:38:23.632491   15521 main.go:141] libmachine: (addons-823099) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623/.minikube/machines (perms=drwxr-xr-x)
	I0923 23:38:23.632497   15521 main.go:141] libmachine: (addons-823099) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623/.minikube
	I0923 23:38:23.632507   15521 main.go:141] libmachine: (addons-823099) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623
	I0923 23:38:23.632513   15521 main.go:141] libmachine: (addons-823099) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0923 23:38:23.632518   15521 main.go:141] libmachine: (addons-823099) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623/.minikube (perms=drwxr-xr-x)
	I0923 23:38:23.632528   15521 main.go:141] libmachine: (addons-823099) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623 (perms=drwxrwxr-x)
	I0923 23:38:23.632536   15521 main.go:141] libmachine: (addons-823099) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0923 23:38:23.632546   15521 main.go:141] libmachine: (addons-823099) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0923 23:38:23.632550   15521 main.go:141] libmachine: (addons-823099) Creating domain...
	I0923 23:38:23.632558   15521 main.go:141] libmachine: (addons-823099) DBG | Checking permissions on dir: /home/jenkins
	I0923 23:38:23.632570   15521 main.go:141] libmachine: (addons-823099) DBG | Checking permissions on dir: /home
	I0923 23:38:23.632578   15521 main.go:141] libmachine: (addons-823099) DBG | Skipping /home - not owner
	I0923 23:38:23.633510   15521 main.go:141] libmachine: (addons-823099) define libvirt domain using xml: 
	I0923 23:38:23.633532   15521 main.go:141] libmachine: (addons-823099) <domain type='kvm'>
	I0923 23:38:23.633543   15521 main.go:141] libmachine: (addons-823099)   <name>addons-823099</name>
	I0923 23:38:23.633550   15521 main.go:141] libmachine: (addons-823099)   <memory unit='MiB'>4000</memory>
	I0923 23:38:23.633564   15521 main.go:141] libmachine: (addons-823099)   <vcpu>2</vcpu>
	I0923 23:38:23.633572   15521 main.go:141] libmachine: (addons-823099)   <features>
	I0923 23:38:23.633596   15521 main.go:141] libmachine: (addons-823099)     <acpi/>
	I0923 23:38:23.633612   15521 main.go:141] libmachine: (addons-823099)     <apic/>
	I0923 23:38:23.633621   15521 main.go:141] libmachine: (addons-823099)     <pae/>
	I0923 23:38:23.633628   15521 main.go:141] libmachine: (addons-823099)     
	I0923 23:38:23.633638   15521 main.go:141] libmachine: (addons-823099)   </features>
	I0923 23:38:23.633646   15521 main.go:141] libmachine: (addons-823099)   <cpu mode='host-passthrough'>
	I0923 23:38:23.633653   15521 main.go:141] libmachine: (addons-823099)   
	I0923 23:38:23.633673   15521 main.go:141] libmachine: (addons-823099)   </cpu>
	I0923 23:38:23.633707   15521 main.go:141] libmachine: (addons-823099)   <os>
	I0923 23:38:23.633725   15521 main.go:141] libmachine: (addons-823099)     <type>hvm</type>
	I0923 23:38:23.633734   15521 main.go:141] libmachine: (addons-823099)     <boot dev='cdrom'/>
	I0923 23:38:23.633739   15521 main.go:141] libmachine: (addons-823099)     <boot dev='hd'/>
	I0923 23:38:23.633745   15521 main.go:141] libmachine: (addons-823099)     <bootmenu enable='no'/>
	I0923 23:38:23.633750   15521 main.go:141] libmachine: (addons-823099)   </os>
	I0923 23:38:23.633764   15521 main.go:141] libmachine: (addons-823099)   <devices>
	I0923 23:38:23.633771   15521 main.go:141] libmachine: (addons-823099)     <disk type='file' device='cdrom'>
	I0923 23:38:23.633779   15521 main.go:141] libmachine: (addons-823099)       <source file='/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/boot2docker.iso'/>
	I0923 23:38:23.633784   15521 main.go:141] libmachine: (addons-823099)       <target dev='hdc' bus='scsi'/>
	I0923 23:38:23.633791   15521 main.go:141] libmachine: (addons-823099)       <readonly/>
	I0923 23:38:23.633799   15521 main.go:141] libmachine: (addons-823099)     </disk>
	I0923 23:38:23.633811   15521 main.go:141] libmachine: (addons-823099)     <disk type='file' device='disk'>
	I0923 23:38:23.633821   15521 main.go:141] libmachine: (addons-823099)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0923 23:38:23.633829   15521 main.go:141] libmachine: (addons-823099)       <source file='/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/addons-823099.rawdisk'/>
	I0923 23:38:23.633836   15521 main.go:141] libmachine: (addons-823099)       <target dev='hda' bus='virtio'/>
	I0923 23:38:23.633841   15521 main.go:141] libmachine: (addons-823099)     </disk>
	I0923 23:38:23.633848   15521 main.go:141] libmachine: (addons-823099)     <interface type='network'>
	I0923 23:38:23.633854   15521 main.go:141] libmachine: (addons-823099)       <source network='mk-addons-823099'/>
	I0923 23:38:23.633860   15521 main.go:141] libmachine: (addons-823099)       <model type='virtio'/>
	I0923 23:38:23.633865   15521 main.go:141] libmachine: (addons-823099)     </interface>
	I0923 23:38:23.633870   15521 main.go:141] libmachine: (addons-823099)     <interface type='network'>
	I0923 23:38:23.633885   15521 main.go:141] libmachine: (addons-823099)       <source network='default'/>
	I0923 23:38:23.633892   15521 main.go:141] libmachine: (addons-823099)       <model type='virtio'/>
	I0923 23:38:23.633904   15521 main.go:141] libmachine: (addons-823099)     </interface>
	I0923 23:38:23.633919   15521 main.go:141] libmachine: (addons-823099)     <serial type='pty'>
	I0923 23:38:23.633928   15521 main.go:141] libmachine: (addons-823099)       <target port='0'/>
	I0923 23:38:23.633938   15521 main.go:141] libmachine: (addons-823099)     </serial>
	I0923 23:38:23.633945   15521 main.go:141] libmachine: (addons-823099)     <console type='pty'>
	I0923 23:38:23.633957   15521 main.go:141] libmachine: (addons-823099)       <target type='serial' port='0'/>
	I0923 23:38:23.633964   15521 main.go:141] libmachine: (addons-823099)     </console>
	I0923 23:38:23.633975   15521 main.go:141] libmachine: (addons-823099)     <rng model='virtio'>
	I0923 23:38:23.633986   15521 main.go:141] libmachine: (addons-823099)       <backend model='random'>/dev/random</backend>
	I0923 23:38:23.633996   15521 main.go:141] libmachine: (addons-823099)     </rng>
	I0923 23:38:23.634010   15521 main.go:141] libmachine: (addons-823099)     
	I0923 23:38:23.634040   15521 main.go:141] libmachine: (addons-823099)     
	I0923 23:38:23.634058   15521 main.go:141] libmachine: (addons-823099)   </devices>
	I0923 23:38:23.634064   15521 main.go:141] libmachine: (addons-823099) </domain>
	I0923 23:38:23.634068   15521 main.go:141] libmachine: (addons-823099) 
	I0923 23:38:23.640809   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:76:74:e7 in network default
	I0923 23:38:23.641513   15521 main.go:141] libmachine: (addons-823099) Ensuring networks are active...
	I0923 23:38:23.641533   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:23.642154   15521 main.go:141] libmachine: (addons-823099) Ensuring network default is active
	I0923 23:38:23.642583   15521 main.go:141] libmachine: (addons-823099) Ensuring network mk-addons-823099 is active
	I0923 23:38:23.643027   15521 main.go:141] libmachine: (addons-823099) Getting domain xml...
	I0923 23:38:23.643677   15521 main.go:141] libmachine: (addons-823099) Creating domain...
	I0923 23:38:25.091232   15521 main.go:141] libmachine: (addons-823099) Waiting to get IP...
	I0923 23:38:25.092030   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:25.092547   15521 main.go:141] libmachine: (addons-823099) DBG | unable to find current IP address of domain addons-823099 in network mk-addons-823099
	I0923 23:38:25.092567   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:25.092528   15543 retry.go:31] will retry after 241.454266ms: waiting for machine to come up
	I0923 23:38:25.337249   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:25.337719   15521 main.go:141] libmachine: (addons-823099) DBG | unable to find current IP address of domain addons-823099 in network mk-addons-823099
	I0923 23:38:25.337739   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:25.337668   15543 retry.go:31] will retry after 317.338732ms: waiting for machine to come up
	I0923 23:38:25.656076   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:25.656565   15521 main.go:141] libmachine: (addons-823099) DBG | unable to find current IP address of domain addons-823099 in network mk-addons-823099
	I0923 23:38:25.656591   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:25.656511   15543 retry.go:31] will retry after 326.274636ms: waiting for machine to come up
	I0923 23:38:25.984000   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:25.984436   15521 main.go:141] libmachine: (addons-823099) DBG | unable to find current IP address of domain addons-823099 in network mk-addons-823099
	I0923 23:38:25.984458   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:25.984397   15543 retry.go:31] will retry after 437.832088ms: waiting for machine to come up
	I0923 23:38:26.424106   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:26.424634   15521 main.go:141] libmachine: (addons-823099) DBG | unable to find current IP address of domain addons-823099 in network mk-addons-823099
	I0923 23:38:26.424656   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:26.424551   15543 retry.go:31] will retry after 668.976748ms: waiting for machine to come up
	I0923 23:38:27.095408   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:27.095943   15521 main.go:141] libmachine: (addons-823099) DBG | unable to find current IP address of domain addons-823099 in network mk-addons-823099
	I0923 23:38:27.095968   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:27.095910   15543 retry.go:31] will retry after 748.393255ms: waiting for machine to come up
	I0923 23:38:27.845915   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:27.846277   15521 main.go:141] libmachine: (addons-823099) DBG | unable to find current IP address of domain addons-823099 in network mk-addons-823099
	I0923 23:38:27.846348   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:27.846252   15543 retry.go:31] will retry after 761.156246ms: waiting for machine to come up
	I0923 23:38:28.608811   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:28.609268   15521 main.go:141] libmachine: (addons-823099) DBG | unable to find current IP address of domain addons-823099 in network mk-addons-823099
	I0923 23:38:28.609298   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:28.609221   15543 retry.go:31] will retry after 1.011775453s: waiting for machine to come up
	I0923 23:38:29.622384   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:29.622840   15521 main.go:141] libmachine: (addons-823099) DBG | unable to find current IP address of domain addons-823099 in network mk-addons-823099
	I0923 23:38:29.622873   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:29.622758   15543 retry.go:31] will retry after 1.842457552s: waiting for machine to come up
	I0923 23:38:31.467098   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:31.467569   15521 main.go:141] libmachine: (addons-823099) DBG | unable to find current IP address of domain addons-823099 in network mk-addons-823099
	I0923 23:38:31.467589   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:31.467500   15543 retry.go:31] will retry after 1.843110258s: waiting for machine to come up
	I0923 23:38:33.312780   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:33.313247   15521 main.go:141] libmachine: (addons-823099) DBG | unable to find current IP address of domain addons-823099 in network mk-addons-823099
	I0923 23:38:33.313274   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:33.313210   15543 retry.go:31] will retry after 1.888655031s: waiting for machine to come up
	I0923 23:38:35.204154   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:35.204555   15521 main.go:141] libmachine: (addons-823099) DBG | unable to find current IP address of domain addons-823099 in network mk-addons-823099
	I0923 23:38:35.204580   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:35.204514   15543 retry.go:31] will retry after 2.870740222s: waiting for machine to come up
	I0923 23:38:38.077027   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:38.077558   15521 main.go:141] libmachine: (addons-823099) DBG | unable to find current IP address of domain addons-823099 in network mk-addons-823099
	I0923 23:38:38.077587   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:38.077506   15543 retry.go:31] will retry after 3.119042526s: waiting for machine to come up
	I0923 23:38:41.200776   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:41.201175   15521 main.go:141] libmachine: (addons-823099) DBG | unable to find current IP address of domain addons-823099 in network mk-addons-823099
	I0923 23:38:41.201216   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:41.201127   15543 retry.go:31] will retry after 3.936049816s: waiting for machine to come up
	I0923 23:38:45.138385   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:45.138867   15521 main.go:141] libmachine: (addons-823099) Found IP for machine: 192.168.39.29
	I0923 23:38:45.138888   15521 main.go:141] libmachine: (addons-823099) Reserving static IP address...
	I0923 23:38:45.138902   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has current primary IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:45.139282   15521 main.go:141] libmachine: (addons-823099) DBG | unable to find host DHCP lease matching {name: "addons-823099", mac: "52:54:00:15:a7:77", ip: "192.168.39.29"} in network mk-addons-823099
	I0923 23:38:45.213621   15521 main.go:141] libmachine: (addons-823099) Reserved static IP address: 192.168.39.29
	I0923 23:38:45.213668   15521 main.go:141] libmachine: (addons-823099) DBG | Getting to WaitForSSH function...
	I0923 23:38:45.213678   15521 main.go:141] libmachine: (addons-823099) Waiting for SSH to be available...
	I0923 23:38:45.215779   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:45.216179   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:minikube Clientid:01:52:54:00:15:a7:77}
	I0923 23:38:45.216202   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:45.216401   15521 main.go:141] libmachine: (addons-823099) DBG | Using SSH client type: external
	I0923 23:38:45.216423   15521 main.go:141] libmachine: (addons-823099) DBG | Using SSH private key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa (-rw-------)
	I0923 23:38:45.216459   15521 main.go:141] libmachine: (addons-823099) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.29 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0923 23:38:45.216477   15521 main.go:141] libmachine: (addons-823099) DBG | About to run SSH command:
	I0923 23:38:45.216493   15521 main.go:141] libmachine: (addons-823099) DBG | exit 0
	I0923 23:38:45.348718   15521 main.go:141] libmachine: (addons-823099) DBG | SSH cmd err, output: <nil>: 
	I0923 23:38:45.349048   15521 main.go:141] libmachine: (addons-823099) KVM machine creation complete!
	I0923 23:38:45.349355   15521 main.go:141] libmachine: (addons-823099) Calling .GetConfigRaw
	I0923 23:38:45.350006   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:38:45.350193   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:38:45.350362   15521 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0923 23:38:45.350380   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:38:45.351912   15521 main.go:141] libmachine: Detecting operating system of created instance...
	I0923 23:38:45.351931   15521 main.go:141] libmachine: Waiting for SSH to be available...
	I0923 23:38:45.351940   15521 main.go:141] libmachine: Getting to WaitForSSH function...
	I0923 23:38:45.351949   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:38:45.354650   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:45.355037   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:38:45.355057   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:45.355224   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:38:45.355434   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:38:45.355578   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:38:45.355729   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:38:45.355866   15521 main.go:141] libmachine: Using SSH client type: native
	I0923 23:38:45.356038   15521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0923 23:38:45.356049   15521 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0923 23:38:45.463579   15521 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 23:38:45.463613   15521 main.go:141] libmachine: Detecting the provisioner...
	I0923 23:38:45.463626   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:38:45.466205   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:45.466613   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:38:45.466660   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:45.466829   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:38:45.466991   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:38:45.467178   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:38:45.467465   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:38:45.467645   15521 main.go:141] libmachine: Using SSH client type: native
	I0923 23:38:45.467822   15521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0923 23:38:45.467833   15521 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0923 23:38:45.576852   15521 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0923 23:38:45.576941   15521 main.go:141] libmachine: found compatible host: buildroot
	I0923 23:38:45.576956   15521 main.go:141] libmachine: Provisioning with buildroot...
	I0923 23:38:45.576964   15521 main.go:141] libmachine: (addons-823099) Calling .GetMachineName
	I0923 23:38:45.577226   15521 buildroot.go:166] provisioning hostname "addons-823099"
	I0923 23:38:45.577248   15521 main.go:141] libmachine: (addons-823099) Calling .GetMachineName
	I0923 23:38:45.577399   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:38:45.579859   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:45.580371   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:38:45.580404   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:45.580552   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:38:45.580721   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:38:45.580878   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:38:45.581030   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:38:45.581194   15521 main.go:141] libmachine: Using SSH client type: native
	I0923 23:38:45.581377   15521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0923 23:38:45.581388   15521 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-823099 && echo "addons-823099" | sudo tee /etc/hostname
	I0923 23:38:45.702788   15521 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-823099
	
	I0923 23:38:45.702814   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:38:45.706046   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:45.706466   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:38:45.706498   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:45.706674   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:38:45.706841   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:38:45.706992   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:38:45.707098   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:38:45.707259   15521 main.go:141] libmachine: Using SSH client type: native
	I0923 23:38:45.707426   15521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0923 23:38:45.707442   15521 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-823099' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-823099/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-823099' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 23:38:45.824404   15521 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 23:38:45.824467   15521 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19696-7623/.minikube CaCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19696-7623/.minikube}
	I0923 23:38:45.824483   15521 buildroot.go:174] setting up certificates
	I0923 23:38:45.824492   15521 provision.go:84] configureAuth start
	I0923 23:38:45.824500   15521 main.go:141] libmachine: (addons-823099) Calling .GetMachineName
	I0923 23:38:45.824784   15521 main.go:141] libmachine: (addons-823099) Calling .GetIP
	I0923 23:38:45.827604   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:45.827981   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:38:45.828003   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:45.828166   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:38:45.830661   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:45.831054   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:38:45.831074   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:45.831227   15521 provision.go:143] copyHostCerts
	I0923 23:38:45.831320   15521 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem (1082 bytes)
	I0923 23:38:45.831457   15521 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem (1123 bytes)
	I0923 23:38:45.831538   15521 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem (1679 bytes)
	I0923 23:38:45.831629   15521 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem org=jenkins.addons-823099 san=[127.0.0.1 192.168.39.29 addons-823099 localhost minikube]
	I0923 23:38:45.920692   15521 provision.go:177] copyRemoteCerts
	I0923 23:38:45.920769   15521 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 23:38:45.920791   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:38:45.923583   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:45.923986   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:38:45.924002   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:45.924356   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:38:45.924566   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:38:45.924832   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:38:45.924985   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	I0923 23:38:46.010588   15521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0923 23:38:46.034096   15521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 23:38:46.056758   15521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0923 23:38:46.081040   15521 provision.go:87] duration metric: took 256.535012ms to configureAuth
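For reference, the server certificate staged at /etc/docker/server.pem above is expected to carry the SANs listed in the generating step (127.0.0.1, 192.168.39.29, addons-823099, localhost, minikube); a minimal spot-check, assuming shell access to the guest:

	sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
	# the DNS:/IP Address: entries should match the san=[...] list logged at 23:38:45.831629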
	I0923 23:38:46.081074   15521 buildroot.go:189] setting minikube options for container-runtime
	I0923 23:38:46.081315   15521 config.go:182] Loaded profile config "addons-823099": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 23:38:46.081416   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:38:46.084885   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:46.085669   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:38:46.085696   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:46.086110   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:38:46.086464   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:38:46.086680   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:38:46.086852   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:38:46.087064   15521 main.go:141] libmachine: Using SSH client type: native
	I0923 23:38:46.087258   15521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0923 23:38:46.087278   15521 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0923 23:38:46.317743   15521 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
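The command above drops a one-line environment file for CRI-O and restarts the service; the echoed output confirms what was written. A quick way to re-verify on the guest (a sketch):

	cat /etc/sysconfig/crio.minikube
	# expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	sudo systemctl is-active crio   # should print "active" after the restart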
	
	I0923 23:38:46.317769   15521 main.go:141] libmachine: Checking connection to Docker...
	I0923 23:38:46.317777   15521 main.go:141] libmachine: (addons-823099) Calling .GetURL
	I0923 23:38:46.319030   15521 main.go:141] libmachine: (addons-823099) DBG | Using libvirt version 6000000
	I0923 23:38:46.321409   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:46.321779   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:38:46.321804   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:46.321996   15521 main.go:141] libmachine: Docker is up and running!
	I0923 23:38:46.322104   15521 main.go:141] libmachine: Reticulating splines...
	I0923 23:38:46.322116   15521 client.go:171] duration metric: took 23.37599828s to LocalClient.Create
	I0923 23:38:46.322150   15521 start.go:167] duration metric: took 23.376076398s to libmachine.API.Create "addons-823099"
	I0923 23:38:46.322166   15521 start.go:293] postStartSetup for "addons-823099" (driver="kvm2")
	I0923 23:38:46.322180   15521 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 23:38:46.322208   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:38:46.322508   15521 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 23:38:46.322578   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:38:46.324896   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:46.325318   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:38:46.325337   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:46.325528   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:38:46.325723   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:38:46.325872   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:38:46.326059   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	I0923 23:38:46.410536   15521 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 23:38:46.414783   15521 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 23:38:46.414821   15521 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/addons for local assets ...
	I0923 23:38:46.414912   15521 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/files for local assets ...
	I0923 23:38:46.414938   15521 start.go:296] duration metric: took 92.765547ms for postStartSetup
	I0923 23:38:46.414968   15521 main.go:141] libmachine: (addons-823099) Calling .GetConfigRaw
	I0923 23:38:46.415530   15521 main.go:141] libmachine: (addons-823099) Calling .GetIP
	I0923 23:38:46.418325   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:46.418685   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:38:46.418723   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:46.418908   15521 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/config.json ...
	I0923 23:38:46.419089   15521 start.go:128] duration metric: took 23.492942575s to createHost
	I0923 23:38:46.419111   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:38:46.421225   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:46.421516   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:38:46.421547   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:46.421645   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:38:46.421824   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:38:46.421967   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:38:46.422177   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:38:46.422321   15521 main.go:141] libmachine: Using SSH client type: native
	I0923 23:38:46.422531   15521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0923 23:38:46.422544   15521 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 23:38:46.533050   15521 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727134726.509696447
	
	I0923 23:38:46.533076   15521 fix.go:216] guest clock: 1727134726.509696447
	I0923 23:38:46.533086   15521 fix.go:229] Guest: 2024-09-23 23:38:46.509696447 +0000 UTC Remote: 2024-09-23 23:38:46.419100225 +0000 UTC m=+23.595027380 (delta=90.596222ms)
	I0923 23:38:46.533110   15521 fix.go:200] guest clock delta is within tolerance: 90.596222ms
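The delta reported above is simply the guest `date +%s.%N` reading minus the host-side Remote timestamp; written out:

	# 1727134726.509696447 (guest) - 1727134726.419100225 (host) = 0.090596222 s = 90.596222 ms
	# which is why fix.go reports the guest clock as within tolerance.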
	I0923 23:38:46.533117   15521 start.go:83] releasing machines lock for "addons-823099", held for 23.607112252s
	I0923 23:38:46.533143   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:38:46.533469   15521 main.go:141] libmachine: (addons-823099) Calling .GetIP
	I0923 23:38:46.535967   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:46.536214   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:38:46.536242   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:46.536438   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:38:46.536933   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:38:46.537122   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:38:46.537236   15521 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 23:38:46.537290   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:38:46.537326   15521 ssh_runner.go:195] Run: cat /version.json
	I0923 23:38:46.537344   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:38:46.540050   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:46.540313   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:46.540468   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:38:46.540495   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:46.540659   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:38:46.540748   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:38:46.540775   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:46.540846   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:38:46.540921   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:38:46.540970   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:38:46.541076   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:38:46.541111   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	I0923 23:38:46.541201   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:38:46.541342   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	I0923 23:38:46.662512   15521 ssh_runner.go:195] Run: systemctl --version
	I0923 23:38:46.668932   15521 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0923 23:38:46.827889   15521 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 23:38:46.833604   15521 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 23:38:46.833746   15521 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 23:38:46.850062   15521 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
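The find/mv above renames any bridge or podman CNI config out of the way rather than deleting it; the effect can be seen directly on the guest (a sketch):

	ls /etc/cni/net.d
	# 87-podman-bridge.conflist.mk_disabled   (renamed, so it is no longer picked up as a CNI config)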
	I0923 23:38:46.850089   15521 start.go:495] detecting cgroup driver to use...
	I0923 23:38:46.850148   15521 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 23:38:46.867425   15521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 23:38:46.882361   15521 docker.go:217] disabling cri-docker service (if available) ...
	I0923 23:38:46.882419   15521 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 23:38:46.897323   15521 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 23:38:46.911805   15521 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 23:38:47.036999   15521 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 23:38:47.203688   15521 docker.go:233] disabling docker service ...
	I0923 23:38:47.203767   15521 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 23:38:47.219064   15521 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 23:38:47.231715   15521 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 23:38:47.365365   15521 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 23:38:47.495284   15521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 23:38:47.508723   15521 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 23:38:47.526801   15521 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0923 23:38:47.526867   15521 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 23:38:47.536943   15521 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0923 23:38:47.537001   15521 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 23:38:47.547198   15521 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 23:38:47.557182   15521 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 23:38:47.567529   15521 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 23:38:47.578959   15521 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 23:38:47.589877   15521 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 23:38:47.608254   15521 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
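Taken together, the sed edits above should leave these keys in /etc/crio/crio.conf.d/02-crio.conf; a spot-check reconstructed from the commands themselves (not a dump of the actual file):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",   (inside default_sysctls)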
	I0923 23:38:47.618495   15521 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 23:38:47.627787   15521 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 23:38:47.627862   15521 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 23:38:47.640795   15521 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 23:38:47.650160   15521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 23:38:47.773450   15521 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0923 23:38:47.870212   15521 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0923 23:38:47.870328   15521 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0923 23:38:47.875329   15521 start.go:563] Will wait 60s for crictl version
	I0923 23:38:47.875422   15521 ssh_runner.go:195] Run: which crictl
	I0923 23:38:47.879286   15521 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 23:38:47.916386   15521 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0923 23:38:47.916536   15521 ssh_runner.go:195] Run: crio --version
	I0923 23:38:47.943232   15521 ssh_runner.go:195] Run: crio --version
	I0923 23:38:47.973111   15521 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0923 23:38:47.974418   15521 main.go:141] libmachine: (addons-823099) Calling .GetIP
	I0923 23:38:47.977389   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:47.977726   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:38:47.977771   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:47.977950   15521 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0923 23:38:47.982681   15521 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 23:38:47.995735   15521 kubeadm.go:883] updating cluster {Name:addons-823099 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-823099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.29 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 23:38:47.995872   15521 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 23:38:47.995937   15521 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 23:38:48.026187   15521 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0923 23:38:48.026255   15521 ssh_runner.go:195] Run: which lz4
	I0923 23:38:48.029934   15521 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0923 23:38:48.033681   15521 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0923 23:38:48.033709   15521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0923 23:38:49.244831   15521 crio.go:462] duration metric: took 1.21491674s to copy over tarball
	I0923 23:38:49.244910   15521 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0923 23:38:51.408420   15521 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.163482226s)
	I0923 23:38:51.408450   15521 crio.go:469] duration metric: took 2.163580195s to extract the tarball
	I0923 23:38:51.408457   15521 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0923 23:38:51.445104   15521 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 23:38:51.484376   15521 crio.go:514] all images are preloaded for cri-o runtime.
	I0923 23:38:51.484401   15521 cache_images.go:84] Images are preloaded, skipping loading
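The second `crictl images` pass now finds everything the first pass (23:38:48.026187) was missing; a spot-check for the marker image it keyed on (a sketch, run on the guest):

	sudo crictl images | grep kube-apiserver
	# should list registry.k8s.io/kube-apiserver with tag v1.31.1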
	I0923 23:38:51.484409   15521 kubeadm.go:934] updating node { 192.168.39.29 8443 v1.31.1 crio true true} ...
	I0923 23:38:51.484499   15521 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-823099 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.29
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-823099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 23:38:51.484557   15521 ssh_runner.go:195] Run: crio config
	I0923 23:38:51.538806   15521 cni.go:84] Creating CNI manager for ""
	I0923 23:38:51.538828   15521 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0923 23:38:51.538838   15521 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 23:38:51.538859   15521 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.29 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-823099 NodeName:addons-823099 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.29"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.29 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 23:38:51.538985   15521 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.29
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-823099"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.29
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.29"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0923 23:38:51.539038   15521 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 23:38:51.548496   15521 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 23:38:51.548563   15521 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 23:38:51.557551   15521 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0923 23:38:51.574810   15521 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 23:38:51.590461   15521 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0923 23:38:51.605904   15521 ssh_runner.go:195] Run: grep 192.168.39.29	control-plane.minikube.internal$ /etc/hosts
	I0923 23:38:51.609379   15521 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.29	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
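Between this edit and the earlier host.minikube.internal one (23:38:47.982681), /etc/hosts on the guest should now resolve both minikube-internal names; a quick check (sketch):

	grep minikube.internal /etc/hosts
	# 192.168.39.1	host.minikube.internal
	# 192.168.39.29	control-plane.minikube.internal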
	I0923 23:38:51.620067   15521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 23:38:51.746991   15521 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 23:38:51.764430   15521 certs.go:68] Setting up /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099 for IP: 192.168.39.29
	I0923 23:38:51.764452   15521 certs.go:194] generating shared ca certs ...
	I0923 23:38:51.764479   15521 certs.go:226] acquiring lock for ca certs: {Name:mk91de42c259e96c17f08dd58c70c00821bd0735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:51.764627   15521 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key
	I0923 23:38:51.827925   15521 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt ...
	I0923 23:38:51.827961   15521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt: {Name:mk7bce46408bad28fa4c4ad82afe9d6bd10e26b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:51.828169   15521 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key ...
	I0923 23:38:51.828185   15521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key: {Name:mkfd724d8b1e5c4e28f581332eb148d4cdbcd3bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:51.828303   15521 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key
	I0923 23:38:51.937978   15521 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt ...
	I0923 23:38:51.938011   15521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt: {Name:mka59daefa132c631d082c68c6d4bee6c31dbed0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:51.938201   15521 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key ...
	I0923 23:38:51.938214   15521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key: {Name:mk74fd28ca9ebe05bacfd634b928864a1a7ce292 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:51.938314   15521 certs.go:256] generating profile certs ...
	I0923 23:38:51.938367   15521 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.key
	I0923 23:38:51.938381   15521 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.crt with IP's: []
	I0923 23:38:52.195361   15521 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.crt ...
	I0923 23:38:52.195393   15521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.crt: {Name:mkf53b392cc89a16e12244564032d9b45154080d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:52.195578   15521 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.key ...
	I0923 23:38:52.195591   15521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.key: {Name:mk9b41db6a73a405e689e669580e343c2766a447 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:52.195711   15521 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/apiserver.key.7600cdb9
	I0923 23:38:52.195731   15521 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/apiserver.crt.7600cdb9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.29]
	I0923 23:38:52.295200   15521 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/apiserver.crt.7600cdb9 ...
	I0923 23:38:52.295231   15521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/apiserver.crt.7600cdb9: {Name:mkae17567f7ac3bcae8f339aebdd9969213784de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:52.295413   15521 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/apiserver.key.7600cdb9 ...
	I0923 23:38:52.295433   15521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/apiserver.key.7600cdb9: {Name:mk496cd6f593f9c72852d6a78b567d84d704b066 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:52.295528   15521 certs.go:381] copying /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/apiserver.crt.7600cdb9 -> /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/apiserver.crt
	I0923 23:38:52.295617   15521 certs.go:385] copying /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/apiserver.key.7600cdb9 -> /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/apiserver.key
	I0923 23:38:52.295677   15521 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/proxy-client.key
	I0923 23:38:52.295695   15521 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/proxy-client.crt with IP's: []
	I0923 23:38:52.353357   15521 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/proxy-client.crt ...
	I0923 23:38:52.353388   15521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/proxy-client.crt: {Name:mke38bbbfeef7cd2c66dad6779df3ba32d8b0e4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:52.353569   15521 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/proxy-client.key ...
	I0923 23:38:52.353582   15521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/proxy-client.key: {Name:mka62603d541b89ee9d7c4fc26d23c4522e47be4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:52.353765   15521 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem (1675 bytes)
	I0923 23:38:52.353806   15521 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem (1082 bytes)
	I0923 23:38:52.353833   15521 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem (1123 bytes)
	I0923 23:38:52.353855   15521 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem (1679 bytes)
	I0923 23:38:52.354427   15521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 23:38:52.379337   15521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 23:38:52.400882   15521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 23:38:52.424525   15521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0923 23:38:52.450323   15521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0923 23:38:52.477687   15521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 23:38:52.499751   15521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 23:38:52.521727   15521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 23:38:52.543557   15521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 23:38:52.565278   15521 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 23:38:52.581109   15521 ssh_runner.go:195] Run: openssl version
	I0923 23:38:52.586569   15521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 23:38:52.596572   15521 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 23:38:52.600599   15521 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0923 23:38:52.600654   15521 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 23:38:52.606001   15521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
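The b5213941.0 name in the symlink above is not arbitrary: it is the subject hash printed by the `openssl x509 -hash` call just before it for minikubeCA.pem, plus the `.0` suffix OpenSSL uses for the first certificate with a given hash; reconstructed as a sketch:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints the hash used below, e.g. b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0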
	I0923 23:38:52.615760   15521 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 23:38:52.619451   15521 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 23:38:52.619508   15521 kubeadm.go:392] StartCluster: {Name:addons-823099 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-823099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.29 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 23:38:52.619583   15521 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0923 23:38:52.620006   15521 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0923 23:38:52.654320   15521 cri.go:89] found id: ""
	I0923 23:38:52.654386   15521 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 23:38:52.663817   15521 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 23:38:52.673074   15521 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 23:38:52.681948   15521 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 23:38:52.681974   15521 kubeadm.go:157] found existing configuration files:
	
	I0923 23:38:52.682026   15521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 23:38:52.690360   15521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 23:38:52.690418   15521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 23:38:52.698969   15521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 23:38:52.707269   15521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 23:38:52.707357   15521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 23:38:52.716380   15521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 23:38:52.725235   15521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 23:38:52.725319   15521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 23:38:52.734575   15521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 23:38:52.743504   15521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 23:38:52.743572   15521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
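For reference, the stale-kubeconfig cleanup the log walks through above reduces to the following pattern (a minimal shell sketch; the file list and the control-plane endpoint are taken from the log lines above, while the loop itself is illustrative rather than minikube's actual implementation):

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # keep the file only if it already points at the expected control-plane endpoint
	  if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
	    sudo rm -f "/etc/kubernetes/$f"
	  fi
	done

In this run each grep exits with status 2 because the files do not exist yet (see the "No such file or directory" stderr above), so all four paths are harmlessly removed before kubeadm init regenerates them.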
	I0923 23:38:52.752994   15521 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0923 23:38:52.803786   15521 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0923 23:38:52.803907   15521 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 23:38:52.902853   15521 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 23:38:52.903001   15521 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 23:38:52.903126   15521 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 23:38:52.909824   15521 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 23:38:52.911676   15521 out.go:235]   - Generating certificates and keys ...
	I0923 23:38:52.912753   15521 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 23:38:52.912873   15521 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 23:38:53.248886   15521 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0923 23:38:53.341826   15521 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0923 23:38:53.485454   15521 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0923 23:38:53.623967   15521 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0923 23:38:53.679532   15521 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0923 23:38:53.679721   15521 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-823099 localhost] and IPs [192.168.39.29 127.0.0.1 ::1]
	I0923 23:38:53.905840   15521 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0923 23:38:53.906024   15521 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-823099 localhost] and IPs [192.168.39.29 127.0.0.1 ::1]
	I0923 23:38:54.051813   15521 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0923 23:38:54.395310   15521 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0923 23:38:54.735052   15521 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0923 23:38:54.735299   15521 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 23:38:54.847419   15521 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 23:38:54.936586   15521 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0923 23:38:55.060632   15521 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 23:38:55.214060   15521 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 23:38:55.303678   15521 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 23:38:55.304286   15521 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 23:38:55.306790   15521 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 23:38:55.308801   15521 out.go:235]   - Booting up control plane ...
	I0923 23:38:55.308940   15521 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 23:38:55.309057   15521 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 23:38:55.309138   15521 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 23:38:55.324842   15521 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 23:38:55.330701   15521 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 23:38:55.330768   15521 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 23:38:55.470043   15521 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0923 23:38:55.470158   15521 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0923 23:38:56.470778   15521 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001582152s
	I0923 23:38:56.470872   15521 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0923 23:39:01.969265   15521 kubeadm.go:310] [api-check] The API server is healthy after 5.501475075s
	I0923 23:39:01.981867   15521 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 23:39:02.004452   15521 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 23:39:02.039983   15521 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 23:39:02.040235   15521 kubeadm.go:310] [mark-control-plane] Marking the node addons-823099 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 23:39:02.057479   15521 kubeadm.go:310] [bootstrap-token] Using token: fyz7kl.eyjwn42xmcr354pj
	I0923 23:39:02.059006   15521 out.go:235]   - Configuring RBAC rules ...
	I0923 23:39:02.059157   15521 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 23:39:02.076960   15521 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 23:39:02.086257   15521 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 23:39:02.092000   15521 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 23:39:02.096548   15521 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 23:39:02.102638   15521 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 23:39:02.377281   15521 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 23:39:02.807346   15521 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 23:39:03.376529   15521 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 23:39:03.377848   15521 kubeadm.go:310] 
	I0923 23:39:03.377926   15521 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 23:39:03.377937   15521 kubeadm.go:310] 
	I0923 23:39:03.378021   15521 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 23:39:03.378030   15521 kubeadm.go:310] 
	I0923 23:39:03.378058   15521 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 23:39:03.378126   15521 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 23:39:03.378208   15521 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 23:39:03.378228   15521 kubeadm.go:310] 
	I0923 23:39:03.378321   15521 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 23:39:03.378330   15521 kubeadm.go:310] 
	I0923 23:39:03.378390   15521 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 23:39:03.378400   15521 kubeadm.go:310] 
	I0923 23:39:03.378499   15521 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 23:39:03.378600   15521 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 23:39:03.378669   15521 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 23:39:03.378680   15521 kubeadm.go:310] 
	I0923 23:39:03.378788   15521 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 23:39:03.378897   15521 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 23:39:03.378907   15521 kubeadm.go:310] 
	I0923 23:39:03.378995   15521 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fyz7kl.eyjwn42xmcr354pj \
	I0923 23:39:03.379107   15521 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2d4dd9f37d73db6513005d0e16c5a35989d3b102eed8cac0bf67b360d3396aa8 \
	I0923 23:39:03.379129   15521 kubeadm.go:310] 	--control-plane 
	I0923 23:39:03.379133   15521 kubeadm.go:310] 
	I0923 23:39:03.379245   15521 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 23:39:03.379266   15521 kubeadm.go:310] 
	I0923 23:39:03.379389   15521 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fyz7kl.eyjwn42xmcr354pj \
	I0923 23:39:03.379523   15521 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2d4dd9f37d73db6513005d0e16c5a35989d3b102eed8cac0bf67b360d3396aa8 
	I0923 23:39:03.380043   15521 kubeadm.go:310] W0923 23:38:52.785015     814 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 23:39:03.380394   15521 kubeadm.go:310] W0923 23:38:52.785716     814 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 23:39:03.380489   15521 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0923 23:39:03.380508   15521 cni.go:84] Creating CNI manager for ""
	I0923 23:39:03.380560   15521 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0923 23:39:03.383452   15521 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0923 23:39:03.384682   15521 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0923 23:39:03.397094   15521 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
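The 496-byte /etc/cni/net.d/1-k8s.conflist copied above is the bridge CNI configuration announced in the previous step. Its exact contents are not shown in the log; the sketch below is only a typical bridge + host-local conflist of the same shape, with illustrative values, not the file minikube actually writes:

	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.4.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF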
	I0923 23:39:03.417722   15521 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 23:39:03.417811   15521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 23:39:03.417847   15521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-823099 minikube.k8s.io/updated_at=2024_09_23T23_39_03_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c minikube.k8s.io/name=addons-823099 minikube.k8s.io/primary=true
	I0923 23:39:03.459069   15521 ops.go:34] apiserver oom_adj: -16
	I0923 23:39:03.574741   15521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 23:39:04.075852   15521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 23:39:04.575549   15521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 23:39:05.075536   15521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 23:39:05.574791   15521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 23:39:06.075455   15521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 23:39:06.575226   15521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 23:39:07.075498   15521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 23:39:07.575490   15521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 23:39:07.682573   15521 kubeadm.go:1113] duration metric: took 4.264822927s to wait for elevateKubeSystemPrivileges
	I0923 23:39:07.682604   15521 kubeadm.go:394] duration metric: took 15.063102314s to StartCluster
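The repeated "kubectl get sa default" calls above are a simple readiness poll: judging by the timestamps, minikube retries roughly every 500ms until the default service account exists, and that wait is what the 4.26s elevateKubeSystemPrivileges metric measures. A minimal sketch of the same wait (binary path, kubeconfig, and interval taken from the log; the loop itself is illustrative):

	until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done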
	I0923 23:39:07.682621   15521 settings.go:142] acquiring lock: {Name:mk2498060bfcc1414033a486e76a223de88ed325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:39:07.682743   15521 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0923 23:39:07.683441   15521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/kubeconfig: {Name:mk3b29105ccc2e600682c419fcfd45ce5d22b5fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:39:07.683700   15521 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0923 23:39:07.683729   15521 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.29 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 23:39:07.683777   15521 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
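The toEnable map above lists which addons this profile will turn on (registry, ingress, metrics-server, csi-hostpath-driver, and so on). In this run they are enabled programmatically during start, but the same toggles can be flipped per profile from the minikube CLI; for example:

	# illustrative equivalents of the toEnable map above, using the addons-823099 profile
	minikube -p addons-823099 addons list
	minikube -p addons-823099 addons enable registry
	minikube -p addons-823099 addons enable metrics-server
	minikube -p addons-823099 addons disable volcano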
	I0923 23:39:07.683896   15521 addons.go:69] Setting yakd=true in profile "addons-823099"
	I0923 23:39:07.683906   15521 addons.go:69] Setting default-storageclass=true in profile "addons-823099"
	I0923 23:39:07.683910   15521 addons.go:69] Setting cloud-spanner=true in profile "addons-823099"
	I0923 23:39:07.683926   15521 addons.go:69] Setting registry=true in profile "addons-823099"
	I0923 23:39:07.683932   15521 addons.go:234] Setting addon cloud-spanner=true in "addons-823099"
	I0923 23:39:07.683939   15521 addons.go:234] Setting addon registry=true in "addons-823099"
	I0923 23:39:07.683937   15521 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-823099"
	I0923 23:39:07.683936   15521 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-823099"
	I0923 23:39:07.683953   15521 config.go:182] Loaded profile config "addons-823099": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 23:39:07.683968   15521 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-823099"
	I0923 23:39:07.683981   15521 addons.go:69] Setting storage-provisioner=true in profile "addons-823099"
	I0923 23:39:07.683982   15521 addons.go:69] Setting ingress=true in profile "addons-823099"
	I0923 23:39:07.683983   15521 addons.go:69] Setting gcp-auth=true in profile "addons-823099"
	I0923 23:39:07.683992   15521 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-823099"
	I0923 23:39:07.684000   15521 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-823099"
	I0923 23:39:07.684003   15521 addons.go:69] Setting inspektor-gadget=true in profile "addons-823099"
	I0923 23:39:07.684006   15521 addons.go:69] Setting volcano=true in profile "addons-823099"
	I0923 23:39:07.684009   15521 mustload.go:65] Loading cluster: addons-823099
	I0923 23:39:07.684014   15521 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-823099"
	I0923 23:39:07.683928   15521 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-823099"
	I0923 23:39:07.683992   15521 addons.go:234] Setting addon storage-provisioner=true in "addons-823099"
	I0923 23:39:07.684124   15521 host.go:66] Checking if "addons-823099" exists ...
	I0923 23:39:07.684015   15521 addons.go:234] Setting addon inspektor-gadget=true in "addons-823099"
	I0923 23:39:07.684199   15521 host.go:66] Checking if "addons-823099" exists ...
	I0923 23:39:07.684214   15521 config.go:182] Loaded profile config "addons-823099": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 23:39:07.683970   15521 host.go:66] Checking if "addons-823099" exists ...
	I0923 23:39:07.684535   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.684572   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.684595   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.684622   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.684005   15521 host.go:66] Checking if "addons-823099" exists ...
	I0923 23:39:07.683959   15521 addons.go:69] Setting ingress-dns=true in profile "addons-823099"
	I0923 23:39:07.684654   15521 addons.go:234] Setting addon ingress-dns=true in "addons-823099"
	I0923 23:39:07.684657   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.684690   15521 host.go:66] Checking if "addons-823099" exists ...
	I0923 23:39:07.684716   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.684747   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.683970   15521 host.go:66] Checking if "addons-823099" exists ...
	I0923 23:39:07.683918   15521 addons.go:234] Setting addon yakd=true in "addons-823099"
	I0923 23:39:07.684807   15521 host.go:66] Checking if "addons-823099" exists ...
	I0923 23:39:07.685044   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.685062   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.685073   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.685093   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.685134   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.683995   15521 addons.go:234] Setting addon ingress=true in "addons-823099"
	I0923 23:39:07.685161   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.685181   15521 host.go:66] Checking if "addons-823099" exists ...
	I0923 23:39:07.684602   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.684017   15521 addons.go:69] Setting metrics-server=true in profile "addons-823099"
	I0923 23:39:07.685232   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.685242   15521 addons.go:234] Setting addon metrics-server=true in "addons-823099"
	I0923 23:39:07.685264   15521 host.go:66] Checking if "addons-823099" exists ...
	I0923 23:39:07.684024   15521 host.go:66] Checking if "addons-823099" exists ...
	I0923 23:39:07.685615   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.685642   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.685796   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.685854   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.685977   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.686016   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.684027   15521 addons.go:69] Setting volumesnapshots=true in profile "addons-823099"
	I0923 23:39:07.686371   15521 addons.go:234] Setting addon volumesnapshots=true in "addons-823099"
	I0923 23:39:07.686398   15521 host.go:66] Checking if "addons-823099" exists ...
	I0923 23:39:07.684018   15521 addons.go:234] Setting addon volcano=true in "addons-823099"
	I0923 23:39:07.686673   15521 host.go:66] Checking if "addons-823099" exists ...
	I0923 23:39:07.686498   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.686778   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.684634   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.687024   15521 out.go:177] * Verifying Kubernetes components...
	I0923 23:39:07.688506   15521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 23:39:07.703440   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37923
	I0923 23:39:07.705810   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35535
	I0923 23:39:07.708733   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.708779   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.709090   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41741
	I0923 23:39:07.709229   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.709266   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.709595   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.709629   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.713224   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.713355   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.713390   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.713862   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.713881   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.714302   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.714377   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.714392   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.714451   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.714464   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.715015   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.715037   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.715432   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.715475   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.715787   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:39:07.716507   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:39:07.719153   15521 host.go:66] Checking if "addons-823099" exists ...
	I0923 23:39:07.719542   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.719578   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.720949   15521 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-823099"
	I0923 23:39:07.720998   15521 host.go:66] Checking if "addons-823099" exists ...
	I0923 23:39:07.721386   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.721432   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.735627   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38553
	I0923 23:39:07.736277   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.736638   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46739
	I0923 23:39:07.737105   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.737122   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.737510   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.738081   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.738098   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.738156   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44185
	I0923 23:39:07.739268   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.739318   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.739918   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.739959   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.740211   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.740321   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45651
	I0923 23:39:07.740861   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.740881   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.740953   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.740993   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.741352   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.741901   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.741947   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.742154   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.742613   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.742628   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.743023   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.743085   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37759
	I0923 23:39:07.743569   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.743610   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.746643   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.747874   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.747903   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.748324   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.748466   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33761
	I0923 23:39:07.748965   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.749004   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.749096   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.749726   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.749746   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.750196   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.750719   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.750754   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.758701   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46265
	I0923 23:39:07.759243   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.759784   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.759805   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.760206   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.760261   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38907
	I0923 23:39:07.761129   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.761175   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.761441   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33217
	I0923 23:39:07.761985   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.762828   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.762847   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.763324   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.763665   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:39:07.765500   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46691
	I0923 23:39:07.765573   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.766125   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.766145   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.766801   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.766864   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44061
	I0923 23:39:07.767084   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.767500   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.768285   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.768301   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.768446   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:39:07.768843   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.768866   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.768932   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.769275   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.769821   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.769867   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.771258   15521 addons.go:234] Setting addon default-storageclass=true in "addons-823099"
	I0923 23:39:07.771300   15521 host.go:66] Checking if "addons-823099" exists ...
	I0923 23:39:07.771655   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.771687   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.771922   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43353
	I0923 23:39:07.772228   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.772255   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.774448   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46391
	I0923 23:39:07.780977   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.781565   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.781590   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.781920   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.782058   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:39:07.783913   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:39:07.785056   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34087
	I0923 23:39:07.785575   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41059
	I0923 23:39:07.785629   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.786110   15521 out.go:177]   - Using image docker.io/registry:2.8.3
	I0923 23:39:07.786146   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.786320   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.786334   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.786772   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.787011   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:39:07.788584   15521 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0923 23:39:07.789007   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:39:07.789550   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.789568   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.789756   15521 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0923 23:39:07.789773   15521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0923 23:39:07.789788   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:39:07.790146   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.790662   15521 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 23:39:07.791941   15521 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 23:39:07.793680   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.793727   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.793998   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39727
	I0923 23:39:07.794008   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.794031   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33695
	I0923 23:39:07.794471   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:39:07.794493   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.794701   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:39:07.794875   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:39:07.794877   15521 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0923 23:39:07.794982   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:39:07.795069   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	I0923 23:39:07.796623   15521 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 23:39:07.796643   15521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0923 23:39:07.796662   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:39:07.798956   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38317
	I0923 23:39:07.799731   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.800110   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:39:07.800142   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.800477   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:39:07.800553   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36347
	I0923 23:39:07.801546   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.801641   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.801654   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.801712   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:39:07.801839   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:39:07.801899   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.802076   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.802095   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.802220   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.802235   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.802360   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.802376   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.802425   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.802551   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:39:07.802641   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38501
	I0923 23:39:07.802785   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.802803   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	I0923 23:39:07.802788   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.802965   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:39:07.803026   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:39:07.803767   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.803787   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.803854   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.804090   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.804743   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.804784   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.805129   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.805147   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.805173   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:39:07.805248   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.805498   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:39:07.805769   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.806098   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.806118   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.806136   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:39:07.806514   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.807108   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.806545   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.806630   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45039
	I0923 23:39:07.807369   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:39:07.807510   15521 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0923 23:39:07.808434   15521 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0923 23:39:07.808504   15521 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 23:39:07.809038   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:39:07.809300   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:07.809332   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:07.809348   15521 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 23:39:07.809359   15521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0923 23:39:07.809376   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:39:07.809986   15521 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 23:39:07.810003   15521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 23:39:07.810016   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:39:07.810062   15521 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0923 23:39:07.810069   15521 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0923 23:39:07.810078   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:39:07.811006   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:07.811042   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:07.811050   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:07.811145   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:07.811156   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:07.811952   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.812979   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.812997   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.813331   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:07.813347   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	W0923 23:39:07.813447   15521 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0923 23:39:07.813946   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.814117   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35041
	I0923 23:39:07.814661   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.814885   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.815227   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.815248   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.815430   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:39:07.815545   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.815727   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:39:07.816076   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:39:07.816315   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:39:07.816316   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:39:07.817096   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	I0923 23:39:07.817135   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.817285   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.817306   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:39:07.817432   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:39:07.817458   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.817467   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:39:07.817640   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:39:07.817797   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	I0923 23:39:07.818443   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.818475   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.818854   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:39:07.818916   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.818935   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:39:07.819103   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:39:07.819327   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:39:07.819449   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.819556   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:39:07.819704   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	I0923 23:39:07.820144   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44505
	I0923 23:39:07.821232   15521 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0923 23:39:07.822387   15521 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0923 23:39:07.822407   15521 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0923 23:39:07.822426   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:39:07.823519   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.824593   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.824617   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.825182   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.825425   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:39:07.826173   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.826824   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:39:07.826852   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.827033   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:39:07.827202   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:39:07.827342   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:39:07.827473   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	I0923 23:39:07.833317   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:39:07.834549   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40345
	I0923 23:39:07.834702   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46107
	I0923 23:39:07.835036   15521 out.go:177]   - Using image docker.io/busybox:stable
	I0923 23:39:07.835302   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.835304   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.835362   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43451
	I0923 23:39:07.835997   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.836020   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.836421   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.836527   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.836860   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:39:07.837187   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.837204   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.837294   15521 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0923 23:39:07.837615   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46247
	I0923 23:39:07.837726   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.838168   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.838186   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.838238   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:39:07.838278   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.838430   15521 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 23:39:07.838454   15521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0923 23:39:07.838486   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:39:07.838837   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:39:07.838942   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.838956   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.839318   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.839611   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:39:07.840065   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.840126   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:39:07.840224   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:39:07.840573   15521 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0923 23:39:07.841432   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:39:07.841867   15521 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0923 23:39:07.841976   15521 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0923 23:39:07.841989   15521 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0923 23:39:07.842007   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:39:07.843249   15521 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0923 23:39:07.843258   15521 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0923 23:39:07.843274   15521 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0923 23:39:07.843293   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:39:07.843539   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.844019   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:39:07.844044   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.844276   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:39:07.844626   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:39:07.844835   15521 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 23:39:07.844851   15521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0923 23:39:07.844867   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:39:07.844970   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:39:07.845115   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	I0923 23:39:07.845546   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:39:07.847193   15521 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0923 23:39:07.847689   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.848226   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.848362   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:39:07.848385   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.848554   15521 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0923 23:39:07.848568   15521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0923 23:39:07.848584   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:39:07.849188   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:39:07.849248   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.849270   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:39:07.849286   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.849314   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:39:07.849370   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:39:07.849384   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.849407   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:39:07.849452   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:39:07.849490   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:39:07.849597   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:39:07.849640   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:39:07.849646   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	I0923 23:39:07.849718   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:39:07.849850   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	I0923 23:39:07.850150   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:39:07.850314   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	I0923 23:39:07.851886   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.852209   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:39:07.852227   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.852511   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:39:07.852685   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:39:07.852836   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:39:07.852856   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45881
	I0923 23:39:07.853005   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	I0923 23:39:07.853307   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.853831   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.853845   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.854157   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.854337   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:39:07.854969   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45961
	I0923 23:39:07.855341   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.855829   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.855846   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.855913   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:39:07.856203   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.856410   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:39:07.857704   15521 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0923 23:39:07.857995   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:39:07.858210   15521 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 23:39:07.858230   15521 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 23:39:07.858247   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:39:07.859971   15521 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0923 23:39:07.860879   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.861260   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:39:07.861284   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.861453   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:39:07.861596   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:39:07.861697   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:39:07.861858   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	I0923 23:39:07.862305   15521 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0923 23:39:07.863581   15521 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0923 23:39:07.864972   15521 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0923 23:39:07.866055   15521 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0923 23:39:07.867358   15521 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0923 23:39:07.868993   15521 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0923 23:39:07.870321   15521 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0923 23:39:07.870349   15521 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0923 23:39:07.870377   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:39:07.873724   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.874117   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:39:07.874148   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.874293   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:39:07.874468   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:39:07.874636   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:39:07.874743   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	W0923 23:39:07.876787   15521 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:52054->192.168.39.29:22: read: connection reset by peer
	I0923 23:39:07.876819   15521 retry.go:31] will retry after 325.765673ms: ssh: handshake failed: read tcp 192.168.39.1:52054->192.168.39.29:22: read: connection reset by peer
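The handshake failure above is not fatal: the dial is simply retried after a short, jittered delay ("will retry after 325.765673ms"). As a rough illustration of that pattern only (this is not minikube's actual retry.go; the function name, attempt count, and base delay are invented for the sketch), a minimal backoff-and-retry helper in Go could look like this:

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryTransient retries fn up to attempts times, sleeping a growing,
    // jittered delay between tries -- the "will retry after <duration>"
    // behaviour seen in the log above.
    func retryTransient(attempts int, base time.Duration, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		delay := base * (1 << i)                          // grow the wait each attempt
    		delay += time.Duration(rand.Int63n(int64(delay))) // add jitter (~325ms in the log)
    		fmt.Printf("will retry after %s: %v\n", delay, err)
    		time.Sleep(delay)
    	}
    	return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
    }

    func main() {
    	calls := 0
    	err := retryTransient(4, 100*time.Millisecond, func() error {
    		calls++
    		if calls < 3 {
    			return fmt.Errorf("ssh: handshake failed: connection reset by peer (simulated)")
    		}
    		return nil
    	})
    	fmt.Println("final result:", err)
    }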
	I0923 23:39:08.112607   15521 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0923 23:39:08.112629   15521 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0923 23:39:08.174341   15521 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 23:39:08.174422   15521 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
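The long sed pipeline above edits the CoreDNS ConfigMap on the node so that host.minikube.internal resolves to the host gateway (192.168.39.1): it inserts a `hosts { ... fallthrough }` stanza before the `forward . /etc/resolv.conf` line (and a `log` directive before `errors`, omitted below for brevity). A minimal Go sketch of that text edit, purely to show what the sed does (injectHostRecord and the sample Corefile are invented for the illustration):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // injectHostRecord inserts a `hosts` stanza immediately before the
    // `forward . /etc/resolv.conf` line of a CoreDNS Corefile.
    func injectHostRecord(corefile, hostIP string) string {
    	var out []string
    	for _, line := range strings.Split(corefile, "\n") {
    		if strings.Contains(line, "forward . /etc/resolv.conf") {
    			out = append(out,
    				"        hosts {",
    				"           "+hostIP+" host.minikube.internal",
    				"           fallthrough",
    				"        }")
    		}
    		out = append(out, line)
    	}
    	return strings.Join(out, "\n")
    }

    func main() {
    	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}"
    	fmt.Println(injectHostRecord(corefile, "192.168.39.1"))
    }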
	I0923 23:39:08.189231   15521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 23:39:08.223406   15521 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0923 23:39:08.223436   15521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0923 23:39:08.238226   15521 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0923 23:39:08.238253   15521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0923 23:39:08.286222   15521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 23:39:08.286427   15521 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0923 23:39:08.286456   15521 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0923 23:39:08.293938   15521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 23:39:08.304026   15521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 23:39:08.304633   15521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 23:39:08.357785   15521 node_ready.go:35] waiting up to 6m0s for node "addons-823099" to be "Ready" ...
	I0923 23:39:08.361610   15521 node_ready.go:49] node "addons-823099" has status "Ready":"True"
	I0923 23:39:08.361634   15521 node_ready.go:38] duration metric: took 3.816238ms for node "addons-823099" to be "Ready" ...
	I0923 23:39:08.361643   15521 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 23:39:08.370384   15521 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fmtkt" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:08.389666   15521 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0923 23:39:08.389694   15521 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0923 23:39:08.393171   15521 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0923 23:39:08.393188   15521 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0923 23:39:08.414092   15521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0923 23:39:08.415846   15521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 23:39:08.424751   15521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0923 23:39:08.462715   15521 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0923 23:39:08.462737   15521 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0923 23:39:08.507754   15521 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0923 23:39:08.507783   15521 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0923 23:39:08.593622   15521 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 23:39:08.593654   15521 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0923 23:39:08.629405   15521 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0923 23:39:08.629437   15521 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0923 23:39:08.632087   15521 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0923 23:39:08.632113   15521 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0923 23:39:08.661201   15521 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0923 23:39:08.661224   15521 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0923 23:39:08.691801   15521 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0923 23:39:08.691827   15521 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0923 23:39:08.714253   15521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 23:39:08.819060   15521 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0923 23:39:08.819096   15521 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0923 23:39:08.831081   15521 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0923 23:39:08.831110   15521 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0923 23:39:08.886522   15521 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0923 23:39:08.886559   15521 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0923 23:39:09.009250   15521 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0923 23:39:09.009293   15521 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0923 23:39:09.046881   15521 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0923 23:39:09.046906   15521 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0923 23:39:09.157084   15521 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0923 23:39:09.157109   15521 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0923 23:39:09.166062   15521 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0923 23:39:09.166097   15521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0923 23:39:09.267085   15521 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 23:39:09.267116   15521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0923 23:39:09.292567   15521 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0923 23:39:09.292607   15521 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0923 23:39:09.429637   15521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0923 23:39:09.445286   15521 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0923 23:39:09.445326   15521 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0923 23:39:09.492474   15521 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0923 23:39:09.492516   15521 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0923 23:39:09.565613   15521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 23:39:09.721455   15521 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 23:39:09.721493   15521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0923 23:39:09.840988   15521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 23:39:09.948899   15521 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0923 23:39:09.948926   15521 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0923 23:39:10.140834   15521 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.966375459s)
	I0923 23:39:10.140875   15521 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0923 23:39:10.141396   15521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.952129655s)
	I0923 23:39:10.141443   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:10.142827   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:10.143945   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:10.143972   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:10.143992   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:10.144008   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:10.144020   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:10.144388   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:10.144424   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:10.144431   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:10.281273   15521 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0923 23:39:10.281305   15521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0923 23:39:10.378453   15521 pod_ready.go:103] pod "coredns-7c65d6cfc9-fmtkt" in "kube-system" namespace has status "Ready":"False"
	I0923 23:39:10.646247   15521 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-823099" context rescaled to 1 replicas
	I0923 23:39:10.659756   15521 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0923 23:39:10.659783   15521 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0923 23:39:10.917202   15521 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0923 23:39:10.917226   15521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0923 23:39:11.091159   15521 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0923 23:39:11.091181   15521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0923 23:39:11.170283   15521 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 23:39:11.170310   15521 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0923 23:39:11.230097   15521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
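Each of these addon steps boils down to copying manifests onto the node and then running a single `kubectl apply -f a.yaml -f b.yaml ...` with KUBECONFIG pointed at the cluster, as the Run lines show. A minimal sketch of that shape (not minikube's addons code; applyManifests and the example paths in main are illustrative only, copied from the log for flavour):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // applyManifests runs one `kubectl apply` over all manifest paths with
    // KUBECONFIG set, mirroring the invocations in the log above.
    func applyManifests(kubectl, kubeconfig string, manifests []string) error {
    	args := []string{"apply"}
    	for _, m := range manifests {
    		args = append(args, "-f", m)
    	}
    	cmd := exec.Command(kubectl, args...)
    	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
    	if out, err := cmd.CombinedOutput(); err != nil {
    		return fmt.Errorf("kubectl apply: %v\n%s", err, out)
    	}
    	return nil
    }

    func main() {
    	err := applyManifests(
    		"/var/lib/minikube/binaries/v1.31.1/kubectl",
    		"/var/lib/minikube/kubeconfig",
    		[]string{
    			"/etc/kubernetes/addons/csi-hostpath-attacher.yaml",
    			"/etc/kubernetes/addons/csi-hostpath-storageclass.yaml",
    		},
    	)
    	fmt.Println(err)
    }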
	I0923 23:39:12.257220   15521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.970955837s)
	I0923 23:39:12.257279   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:12.257296   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:12.257605   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:12.257667   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:12.257688   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:12.257702   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:12.257712   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:12.257950   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:12.257978   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:12.257992   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:12.474315   15521 pod_ready.go:103] pod "coredns-7c65d6cfc9-fmtkt" in "kube-system" namespace has status "Ready":"False"
	I0923 23:39:12.579345   15521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.28537252s)
	I0923 23:39:12.579401   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:12.579415   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:12.579416   15521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.275358792s)
	I0923 23:39:12.579452   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:12.579468   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:12.579812   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:12.579813   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:12.579872   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:12.579881   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:12.579827   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:12.579909   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:12.579927   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:12.579941   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:12.579841   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:12.579889   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:12.580178   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:12.580190   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:12.580247   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:12.580256   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:12.580271   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:12.695053   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:12.695077   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:12.695384   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:12.695434   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:12.695455   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:14.801414   15521 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0923 23:39:14.801451   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:39:14.804720   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:14.805099   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:39:14.805139   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:14.805316   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:39:14.805553   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:39:14.805707   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:39:14.805897   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	I0923 23:39:14.982300   15521 pod_ready.go:103] pod "coredns-7c65d6cfc9-fmtkt" in "kube-system" namespace has status "Ready":"False"
	I0923 23:39:15.080173   15521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.775510965s)
	I0923 23:39:15.080238   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:15.080251   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:15.080246   15521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.666114058s)
	I0923 23:39:15.080267   15521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.664395261s)
	I0923 23:39:15.080284   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:15.080302   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:15.080304   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:15.080351   15521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.655576289s)
	I0923 23:39:15.080364   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:15.080367   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:15.080450   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:15.080463   15521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.366181047s)
	I0923 23:39:15.080486   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:15.080496   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:15.080565   15521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.650891016s)
	I0923 23:39:15.080647   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:15.080661   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:15.082553   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:15.082564   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:15.082580   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:15.082585   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:15.082594   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:15.082604   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:15.082584   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:15.082626   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:15.082636   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:15.082647   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:15.082655   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:15.082668   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:15.082674   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:15.082611   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:15.082691   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:15.082687   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:15.082680   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:15.082659   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:15.082718   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:15.082722   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:15.082726   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:15.082731   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:15.082708   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:15.082741   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:15.082749   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:15.082757   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:15.082764   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:15.082766   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:15.082735   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:15.082783   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:15.083277   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:15.083295   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:15.083312   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:15.083337   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:15.083354   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:15.083406   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:15.083372   15521 addons.go:475] Verifying addon ingress=true in "addons-823099"
	I0923 23:39:15.083518   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:15.083528   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:15.083746   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:15.083771   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:15.083777   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:15.084317   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:15.084354   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:15.084376   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:15.084382   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:15.084390   15521 addons.go:475] Verifying addon metrics-server=true in "addons-823099"
	I0923 23:39:15.084467   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:15.084473   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:15.084479   15521 addons.go:475] Verifying addon registry=true in "addons-823099"
	I0923 23:39:15.085995   15521 out.go:177] * Verifying registry addon...
	I0923 23:39:15.086007   15521 out.go:177] * Verifying ingress addon...
	I0923 23:39:15.085999   15521 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-823099 service yakd-dashboard -n yakd-dashboard
	
	I0923 23:39:15.088530   15521 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0923 23:39:15.088530   15521 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0923 23:39:15.123892   15521 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0923 23:39:15.148951   15521 addons.go:234] Setting addon gcp-auth=true in "addons-823099"
	I0923 23:39:15.149022   15521 host.go:66] Checking if "addons-823099" exists ...
	I0923 23:39:15.149444   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:15.149498   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:15.156748   15521 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0923 23:39:15.156776   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:15.156871   15521 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0923 23:39:15.156894   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:15.165454   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38557
	I0923 23:39:15.166065   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:15.166623   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:15.166651   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:15.167013   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:15.167737   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:15.167785   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:15.183598   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41479
	I0923 23:39:15.184008   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:15.184531   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:15.184550   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:15.184913   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:15.185133   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:39:15.186845   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:39:15.187076   15521 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0923 23:39:15.187097   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:39:15.190490   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:15.190909   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:39:15.190948   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:15.191144   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:39:15.191345   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:39:15.191625   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:39:15.191841   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	I0923 23:39:15.290771   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:15.290792   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:15.291156   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:15.291204   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:15.291213   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:15.608008   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:15.608181   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:15.663866   15521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.098204567s)
	W0923 23:39:15.663915   15521 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 23:39:15.663942   15521 retry.go:31] will retry after 155.263016ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
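The failure above is an ordering race: the VolumeSnapshotClass object is applied in the same batch as the CRD that defines its kind, and the API server has not yet registered the new kind ("no matches for kind ... ensure CRDs are installed first"). minikube's answer, as the log shows, is simply to retry the whole apply after a short delay (and later with --force). An alternative way to avoid the race is to apply the CRDs first and block on their Established condition with `kubectl wait` before applying dependent objects. A minimal sketch under that assumption (the run helper and the hard-coded paths are illustrative, not minikube's method):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // run executes kubectl with KUBECONFIG set and surfaces its output on error.
    func run(kubeconfig string, args ...string) error {
    	cmd := exec.Command("kubectl", args...)
    	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
    	if out, err := cmd.CombinedOutput(); err != nil {
    		return fmt.Errorf("kubectl %v: %v\n%s", args, err, out)
    	}
    	return nil
    }

    func main() {
    	kubeconfig := "/var/lib/minikube/kubeconfig"

    	// 1. Apply the CRD on its own first.
    	if err := run(kubeconfig, "apply", "-f",
    		"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml"); err != nil {
    		panic(err)
    	}
    	// 2. Block until the CRD is Established, i.e. its kinds are servable.
    	if err := run(kubeconfig, "wait", "--for=condition=established", "--timeout=60s",
    		"crd/volumesnapshotclasses.snapshot.storage.k8s.io"); err != nil {
    		panic(err)
    	}
    	// 3. Only then apply objects of that kind (the VolumeSnapshotClass).
    	if err := run(kubeconfig, "apply", "-f",
    		"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"); err != nil {
    		panic(err)
    	}
    	fmt.Println("snapshot manifests applied in dependency order")
    }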
	I0923 23:39:15.663943   15521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.822915237s)
	I0923 23:39:15.663986   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:15.663996   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:15.664271   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:15.664295   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:15.664306   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:15.664280   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:15.664315   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:15.664608   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:15.664630   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:15.820233   15521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 23:39:16.092842   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:16.094282   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:16.598768   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:16.599105   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:17.384250   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:17.386825   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:17.406922   15521 pod_ready.go:103] pod "coredns-7c65d6cfc9-fmtkt" in "kube-system" namespace has status "Ready":"False"
	I0923 23:39:17.409629   15521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.179468555s)
	I0923 23:39:17.409649   15521 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.222551947s)
	I0923 23:39:17.409675   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:17.409696   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:17.410005   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:17.410058   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:17.410074   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:17.410089   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:17.410101   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:17.410329   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:17.410346   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:17.410355   15521 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-823099"
	I0923 23:39:17.410358   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:17.411136   15521 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 23:39:17.412024   15521 out.go:177] * Verifying csi-hostpath-driver addon...
	I0923 23:39:17.413560   15521 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0923 23:39:17.414261   15521 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0923 23:39:17.414746   15521 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0923 23:39:17.414766   15521 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0923 23:39:17.482533   15521 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0923 23:39:17.482556   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
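The repeated "waiting for pod ... current state: Pending" lines are a label-selector poll: list the pods for the addon's selector and keep waiting until every one reports Running. As a rough sketch of that loop only (not minikube's kapi package; waitPodsRunning, the 3-second poll interval, and the paths in main are assumptions for the illustration):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // waitPodsRunning polls `kubectl get pods -l <selector>` until every
    // matching pod reports phase Running or the deadline passes.
    func waitPodsRunning(kubeconfig, ns, selector string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("kubectl", "--kubeconfig", kubeconfig,
    			"-n", ns, "get", "pods", "-l", selector,
    			"-o", "jsonpath={.items[*].status.phase}").Output()
    		if err == nil {
    			phases := strings.Fields(string(out))
    			allRunning := len(phases) > 0
    			for _, p := range phases {
    				if p != "Running" {
    					allRunning = false
    				}
    			}
    			if allRunning {
    				return nil
    			}
    			fmt.Printf("waiting for %q, current phases: %v\n", selector, phases)
    		}
    		time.Sleep(3 * time.Second)
    	}
    	return fmt.Errorf("pods with selector %q not Running within %s", selector, timeout)
    }

    func main() {
    	err := waitPodsRunning("/var/lib/minikube/kubeconfig", "kube-system",
    		"kubernetes.io/minikube-addons=csi-hostpath-driver", 6*time.Minute)
    	fmt.Println(err)
    }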
	I0923 23:39:17.512131   15521 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0923 23:39:17.512159   15521 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0923 23:39:17.604150   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:17.604278   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:17.608747   15521 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 23:39:17.608767   15521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0923 23:39:17.684509   15521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 23:39:17.918552   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:18.093404   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:18.096529   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:18.238589   15521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.418299415s)
	I0923 23:39:18.238642   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:18.238659   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:18.238975   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:18.238997   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:18.239004   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:18.239015   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:18.239024   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:18.239271   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:18.239324   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:18.239340   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:18.418978   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:18.601947   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:18.602098   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:18.821107   15521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.136556988s)
	I0923 23:39:18.821156   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:18.821172   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:18.821448   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:18.821469   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:18.821483   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:18.821490   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:18.821766   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:18.821781   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:18.821801   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:18.823766   15521 addons.go:475] Verifying addon gcp-auth=true in "addons-823099"
	I0923 23:39:18.825653   15521 out.go:177] * Verifying gcp-auth addon...
	I0923 23:39:18.828295   15521 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0923 23:39:18.850143   15521 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0923 23:39:18.850163   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:18.920541   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:19.100926   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:19.107040   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:19.336759   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:19.421467   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:19.593866   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:19.594253   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:19.832242   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:19.878336   15521 pod_ready.go:98] pod "coredns-7c65d6cfc9-fmtkt" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 23:39:19 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 23:39:08 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 23:39:08 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 23:39:08 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 23:39:07 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.29 HostIPs:[{IP:192.168.39.29}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-23 23:39:08 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-23 23:39:12 +0000 UTC,FinishedAt:2024-09-23 23:39:18 +0000 UTC,ContainerID:cri-o://45a5b46a879fb0262594f44df0a2aaaf67ad594be72dad54881d4d2452524327,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://45a5b46a879fb0262594f44df0a2aaaf67ad594be72dad54881d4d2452524327 Started:0xc00232d080 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001f318d0} {Name:kube-api-access-ph5fc MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001f318e0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0923 23:39:19.878379   15521 pod_ready.go:82] duration metric: took 11.507967304s for pod "coredns-7c65d6cfc9-fmtkt" in "kube-system" namespace to be "Ready" ...
	E0923 23:39:19.878394   15521 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-fmtkt" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 23:39:19 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 23:39:08 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 23:39:08 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 23:39:08 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 23:39:07 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.29 HostIPs:[{IP:192.168.39.29}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-23 23:39:08 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-23 23:39:12 +0000 UTC,FinishedAt:2024-09-23 23:39:18 +0000 UTC,ContainerID:cri-o://45a5b46a879fb0262594f44df0a2aaaf67ad594be72dad54881d4d2452524327,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://45a5b46a879fb0262594f44df0a2aaaf67ad594be72dad54881d4d2452524327 Started:0xc00232d080 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001f318d0} {Name:kube-api-access-ph5fc MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001f318e0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0923 23:39:19.878408   15521 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-h4m6q" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:19.884151   15521 pod_ready.go:93] pod "coredns-7c65d6cfc9-h4m6q" in "kube-system" namespace has status "Ready":"True"
	I0923 23:39:19.884174   15521 pod_ready.go:82] duration metric: took 5.758861ms for pod "coredns-7c65d6cfc9-h4m6q" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:19.884183   15521 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-823099" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:19.891508   15521 pod_ready.go:93] pod "etcd-addons-823099" in "kube-system" namespace has status "Ready":"True"
	I0923 23:39:19.891551   15521 pod_ready.go:82] duration metric: took 7.346453ms for pod "etcd-addons-823099" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:19.891564   15521 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-823099" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:19.896566   15521 pod_ready.go:93] pod "kube-apiserver-addons-823099" in "kube-system" namespace has status "Ready":"True"
	I0923 23:39:19.896593   15521 pod_ready.go:82] duration metric: took 5.020816ms for pod "kube-apiserver-addons-823099" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:19.896609   15521 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-823099" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:19.912376   15521 pod_ready.go:93] pod "kube-controller-manager-addons-823099" in "kube-system" namespace has status "Ready":"True"
	I0923 23:39:19.912404   15521 pod_ready.go:82] duration metric: took 15.786797ms for pod "kube-controller-manager-addons-823099" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:19.912416   15521 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pgclm" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:19.923485   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:20.095418   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:20.098684   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:20.275218   15521 pod_ready.go:93] pod "kube-proxy-pgclm" in "kube-system" namespace has status "Ready":"True"
	I0923 23:39:20.275250   15521 pod_ready.go:82] duration metric: took 362.825273ms for pod "kube-proxy-pgclm" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:20.275263   15521 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-823099" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:20.332146   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:20.419880   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:20.593710   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:20.593992   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:20.675652   15521 pod_ready.go:93] pod "kube-scheduler-addons-823099" in "kube-system" namespace has status "Ready":"True"
	I0923 23:39:20.675690   15521 pod_ready.go:82] duration metric: took 400.417501ms for pod "kube-scheduler-addons-823099" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:20.675704   15521 pod_ready.go:39] duration metric: took 12.314050106s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 23:39:20.675723   15521 api_server.go:52] waiting for apiserver process to appear ...
	I0923 23:39:20.675791   15521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 23:39:20.719710   15521 api_server.go:72] duration metric: took 13.035944288s to wait for apiserver process to appear ...
	I0923 23:39:20.719738   15521 api_server.go:88] waiting for apiserver healthz status ...
	I0923 23:39:20.719761   15521 api_server.go:253] Checking apiserver healthz at https://192.168.39.29:8443/healthz ...
	I0923 23:39:20.724996   15521 api_server.go:279] https://192.168.39.29:8443/healthz returned 200:
	ok
	I0923 23:39:20.726609   15521 api_server.go:141] control plane version: v1.31.1
	I0923 23:39:20.726632   15521 api_server.go:131] duration metric: took 6.887893ms to wait for apiserver health ...
	I0923 23:39:20.726640   15521 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 23:39:20.832687   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:20.879847   15521 system_pods.go:59] 17 kube-system pods found
	I0923 23:39:20.879881   15521 system_pods.go:61] "coredns-7c65d6cfc9-h4m6q" [e5a66fda-ace2-434e-82fb-3d9d66fac49f] Running
	I0923 23:39:20.879892   15521 system_pods.go:61] "csi-hostpath-attacher-0" [ad0efe3a-8c72-46db-9ed8-35a46fba41f1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 23:39:20.879897   15521 system_pods.go:61] "csi-hostpath-resizer-0" [e357dfe7-127b-4f18-90e3-beb7846c05cd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0923 23:39:20.879906   15521 system_pods.go:61] "csi-hostpathplugin-l4gsf" [de45bd42-06e1-4387-ba3f-4d6a477b4823] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 23:39:20.879911   15521 system_pods.go:61] "etcd-addons-823099" [c9add526-f518-4303-b016-3f95bd8c222a] Running
	I0923 23:39:20.879914   15521 system_pods.go:61] "kube-apiserver-addons-823099" [8788c6f4-114f-4c6c-928b-8ca58300c969] Running
	I0923 23:39:20.879918   15521 system_pods.go:61] "kube-controller-manager-addons-823099" [726e0154-67e9-4c92-9bac-b577104b0d12] Running
	I0923 23:39:20.879923   15521 system_pods.go:61] "kube-ingress-dns-minikube" [1194cadb-80b1-4fad-b99a-0afbc0be0b40] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0923 23:39:20.879926   15521 system_pods.go:61] "kube-proxy-pgclm" [3d47a25a-ab05-4197-975a-88bb7e1f9834] Running
	I0923 23:39:20.879929   15521 system_pods.go:61] "kube-scheduler-addons-823099" [193d28ff-87b2-4578-903c-e74dcea5c006] Running
	I0923 23:39:20.879939   15521 system_pods.go:61] "metrics-server-84c5f94fbc-gpzsm" [d5937c63-7f30-477a-a36e-e7e6cb8c64e5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 23:39:20.879951   15521 system_pods.go:61] "nvidia-device-plugin-daemonset-2dqft" [c5e363a8-697b-4396-acf2-c41232b01445] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0923 23:39:20.879957   15521 system_pods.go:61] "registry-66c9cd494c-h5ntb" [67fc5fdd-03ae-44c9-8e43-0042bd142349] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0923 23:39:20.879964   15521 system_pods.go:61] "registry-proxy-dc579" [76bec57d-6868-4098-a291-8c38dda98afc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 23:39:20.879969   15521 system_pods.go:61] "snapshot-controller-56fcc65765-2lpn2" [6ea26c65-7a9a-4d74-af4b-8f23ecc36bab] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 23:39:20.879974   15521 system_pods.go:61] "snapshot-controller-56fcc65765-9mcdf" [bc592ae3-b020-465c-b0e9-c739e2321360] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 23:39:20.879980   15521 system_pods.go:61] "storage-provisioner" [25d0944a-e6b3-429b-bb81-22672fb100bd] Running
	I0923 23:39:20.879986   15521 system_pods.go:74] duration metric: took 153.340922ms to wait for pod list to return data ...
	I0923 23:39:20.879996   15521 default_sa.go:34] waiting for default service account to be created ...
	I0923 23:39:20.918654   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:21.075277   15521 default_sa.go:45] found service account: "default"
	I0923 23:39:21.075308   15521 default_sa.go:55] duration metric: took 195.307316ms for default service account to be created ...
	I0923 23:39:21.075318   15521 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 23:39:21.093994   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:21.094405   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:21.281184   15521 system_pods.go:86] 17 kube-system pods found
	I0923 23:39:21.281221   15521 system_pods.go:89] "coredns-7c65d6cfc9-h4m6q" [e5a66fda-ace2-434e-82fb-3d9d66fac49f] Running
	I0923 23:39:21.281233   15521 system_pods.go:89] "csi-hostpath-attacher-0" [ad0efe3a-8c72-46db-9ed8-35a46fba41f1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 23:39:21.281242   15521 system_pods.go:89] "csi-hostpath-resizer-0" [e357dfe7-127b-4f18-90e3-beb7846c05cd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0923 23:39:21.281258   15521 system_pods.go:89] "csi-hostpathplugin-l4gsf" [de45bd42-06e1-4387-ba3f-4d6a477b4823] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 23:39:21.281268   15521 system_pods.go:89] "etcd-addons-823099" [c9add526-f518-4303-b016-3f95bd8c222a] Running
	I0923 23:39:21.281274   15521 system_pods.go:89] "kube-apiserver-addons-823099" [8788c6f4-114f-4c6c-928b-8ca58300c969] Running
	I0923 23:39:21.281279   15521 system_pods.go:89] "kube-controller-manager-addons-823099" [726e0154-67e9-4c92-9bac-b577104b0d12] Running
	I0923 23:39:21.281288   15521 system_pods.go:89] "kube-ingress-dns-minikube" [1194cadb-80b1-4fad-b99a-0afbc0be0b40] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0923 23:39:21.281293   15521 system_pods.go:89] "kube-proxy-pgclm" [3d47a25a-ab05-4197-975a-88bb7e1f9834] Running
	I0923 23:39:21.281299   15521 system_pods.go:89] "kube-scheduler-addons-823099" [193d28ff-87b2-4578-903c-e74dcea5c006] Running
	I0923 23:39:21.281306   15521 system_pods.go:89] "metrics-server-84c5f94fbc-gpzsm" [d5937c63-7f30-477a-a36e-e7e6cb8c64e5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 23:39:21.281316   15521 system_pods.go:89] "nvidia-device-plugin-daemonset-2dqft" [c5e363a8-697b-4396-acf2-c41232b01445] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0923 23:39:21.281333   15521 system_pods.go:89] "registry-66c9cd494c-h5ntb" [67fc5fdd-03ae-44c9-8e43-0042bd142349] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0923 23:39:21.281341   15521 system_pods.go:89] "registry-proxy-dc579" [76bec57d-6868-4098-a291-8c38dda98afc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 23:39:21.281349   15521 system_pods.go:89] "snapshot-controller-56fcc65765-2lpn2" [6ea26c65-7a9a-4d74-af4b-8f23ecc36bab] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 23:39:21.281358   15521 system_pods.go:89] "snapshot-controller-56fcc65765-9mcdf" [bc592ae3-b020-465c-b0e9-c739e2321360] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 23:39:21.281363   15521 system_pods.go:89] "storage-provisioner" [25d0944a-e6b3-429b-bb81-22672fb100bd] Running
	I0923 23:39:21.281373   15521 system_pods.go:126] duration metric: took 206.049564ms to wait for k8s-apps to be running ...
	I0923 23:39:21.281382   15521 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 23:39:21.281439   15521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 23:39:21.331801   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:21.336577   15521 system_svc.go:56] duration metric: took 55.186723ms WaitForService to wait for kubelet
	I0923 23:39:21.336605   15521 kubeadm.go:582] duration metric: took 13.652846646s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 23:39:21.336621   15521 node_conditions.go:102] verifying NodePressure condition ...
	I0923 23:39:21.419377   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:21.475488   15521 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 23:39:21.475526   15521 node_conditions.go:123] node cpu capacity is 2
	I0923 23:39:21.475539   15521 node_conditions.go:105] duration metric: took 138.911431ms to run NodePressure ...
	I0923 23:39:21.475552   15521 start.go:241] waiting for startup goroutines ...
	I0923 23:39:21.596433   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:21.596900   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:21.832085   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:21.919995   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:22.094469   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:22.094632   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:22.332058   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:22.418713   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:22.593037   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:22.593680   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:22.906061   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:23.007978   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:23.094529   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:23.097114   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:23.332565   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:23.419583   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:23.593672   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:23.593683   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:23.838655   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:23.940369   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:24.094234   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:24.094445   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:24.332440   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:24.419984   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:24.594437   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:24.594618   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:24.832486   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:24.919747   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:25.093182   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:25.093674   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:25.333709   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:25.418934   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:25.593328   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:25.593509   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:25.833795   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:25.919508   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:26.095779   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:26.096176   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:26.332478   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:26.420244   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:26.592803   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:26.592852   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:26.832139   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:26.919522   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:27.093698   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:27.094342   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:27.332730   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:27.419502   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:27.593345   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:27.593632   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:27.831834   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:27.921584   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:28.096645   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:28.097094   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:28.332417   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:28.420270   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:28.593381   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:28.594222   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:28.832460   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:28.920981   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:29.094116   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:29.095338   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:29.332575   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:29.418135   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:29.592957   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:29.593378   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:29.832141   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:29.919193   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:30.094376   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:30.094610   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:30.331854   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:30.418982   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:30.631569   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:30.632124   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:30.831219   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:30.920259   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:31.093449   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:31.093941   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:31.331877   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:31.420541   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:31.593048   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:31.593342   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:31.832378   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:31.920762   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:32.098506   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:32.099810   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:32.332194   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:32.420510   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:32.593182   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:32.594918   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:32.832529   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:32.918771   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:33.093326   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:33.094439   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:33.333534   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:33.419199   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:33.592859   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:33.593822   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:33.832270   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:33.919972   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:34.093090   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:34.093582   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:34.332317   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:34.419955   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:34.593634   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:34.593974   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:34.831974   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:34.919981   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:35.095441   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:35.095574   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:35.332597   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:35.419105   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:35.597103   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:35.598610   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:35.832611   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:35.918515   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:36.096274   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:36.096962   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:36.332610   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:36.418275   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:36.593642   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:36.593746   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:36.831957   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:36.918919   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:37.092996   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:37.094759   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:37.332016   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:37.419671   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:37.593331   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:37.595578   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:37.834102   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:37.920878   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:38.094370   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:38.095095   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:38.331397   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:38.419908   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:38.593717   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:38.594107   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:38.832074   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:38.919327   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:39.100170   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:39.105269   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:39.332638   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:39.420123   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:39.593249   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:39.593947   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:39.832313   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:39.934720   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:40.101376   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:40.101425   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:40.333365   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:40.420009   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:40.594942   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:40.595025   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:40.833104   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:40.934806   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:41.096251   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:41.096260   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:41.332277   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:41.419410   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:41.592946   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:41.593974   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:41.832170   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:41.919227   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:42.097743   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:42.098213   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:42.332232   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:42.419177   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:42.593758   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:42.593875   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:42.832085   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:42.919621   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:43.094464   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:43.095025   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:43.333021   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:43.419417   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:43.593281   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:43.594091   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:43.833444   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:43.920229   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:44.094691   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:44.096056   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:44.333071   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:44.418650   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:44.593421   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:44.594195   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:44.831531   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:44.920239   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:45.093437   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:45.095439   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:45.332168   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:45.419471   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:45.593901   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:45.594317   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:45.831984   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:45.919515   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:46.094625   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:46.094773   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:46.331386   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:46.419464   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:46.592656   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:46.592778   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:47.151142   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:47.153387   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:47.154491   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:47.154846   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:47.332656   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:47.418895   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:47.592742   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:47.593598   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:47.832577   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:47.918632   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:48.094668   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:48.094918   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:48.332151   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:48.419591   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:48.592271   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:48.593354   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:48.832266   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:48.918810   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:49.094750   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:49.094891   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:49.331944   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:49.419208   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:49.592843   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:49.593229   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:49.832432   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:49.920038   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:50.102686   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:50.104285   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:50.332178   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:50.420344   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:50.593984   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:50.594056   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:50.831923   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:50.918641   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:51.095025   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:51.096939   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:51.332546   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:51.419516   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:51.592980   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:51.594380   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:51.832001   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:51.921419   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:52.101749   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:52.102309   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:52.332228   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:52.419595   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:52.593016   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:52.593128   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:52.832003   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:52.919630   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:53.094969   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:53.095135   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:53.331766   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:53.418814   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:53.593958   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:53.594088   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:53.832408   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:53.919175   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:54.098190   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:54.098600   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:54.332298   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:54.420609   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:54.592767   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:54.593349   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:54.832382   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:54.920230   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:55.094591   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:55.094839   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:55.332431   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:55.433787   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:55.593168   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:55.593371   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:55.832283   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:55.919461   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:56.093372   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:56.093870   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:56.331722   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:56.418785   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:56.594030   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:56.594601   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:56.833680   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:56.918880   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:57.096144   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:57.096359   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:57.332149   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:57.418862   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:57.593466   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:57.593899   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:57.832901   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:57.919069   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:58.097832   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:58.098492   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:58.331809   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:58.419172   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:58.594374   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:58.594557   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:58.832190   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:58.919483   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:59.095468   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:59.095749   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:59.332135   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:59.419091   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:59.593927   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:59.594515   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:59.831815   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:59.919106   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:00.512087   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:40:00.512527   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:00.512554   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:00.513598   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:00.593901   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:00.595207   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:40:00.834143   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:00.941222   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:01.095958   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:40:01.097955   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:01.332030   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:01.420181   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:01.593185   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:40:01.593891   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:01.832201   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:01.919404   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:02.094442   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:40:02.094695   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:02.332203   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:02.419407   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:02.592715   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:40:02.592806   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:02.831864   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:02.919302   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:03.093356   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:40:03.095261   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:03.331951   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:03.419462   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:03.593257   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:40:03.594217   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:04.004211   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:04.007581   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:04.094485   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:04.096445   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:40:04.332624   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:04.418492   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:04.601985   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:04.615874   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:40:04.833660   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:04.918788   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:05.092856   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:40:05.092889   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:05.331911   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:05.419042   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:05.592983   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:40:05.593592   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:05.832164   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:05.930850   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:06.095313   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:06.095850   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:40:06.332770   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:06.419623   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:06.595241   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:40:06.598108   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:06.831586   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:06.923862   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:07.094981   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:40:07.095013   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:07.332001   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:07.419422   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:07.592356   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:40:07.592854   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:07.832579   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:07.921160   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:08.093155   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:40:08.093461   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:08.332206   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:08.420123   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:08.594084   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:40:08.594501   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:08.832833   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:08.918969   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:09.095290   15521 kapi.go:107] duration metric: took 54.006756194s to wait for kubernetes.io/minikube-addons=registry ...
	I0923 23:40:09.096731   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:09.331593   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:09.419268   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:09.593290   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:09.832184   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:09.919379   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:10.206829   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:10.332592   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:10.418826   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:10.597305   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:10.833495   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:10.936556   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:11.093468   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:11.331762   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:11.419043   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:11.593818   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:11.831965   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:11.919356   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:12.095949   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:12.332439   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:12.419717   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:12.593847   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:12.833772   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:12.936727   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:13.095359   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:13.332979   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:13.434589   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:13.593982   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:13.833463   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:13.921413   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:14.107863   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:14.331881   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:14.418472   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:14.592625   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:14.832074   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:14.919102   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:15.151319   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:15.331731   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:15.418730   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:15.592769   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:15.832559   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:15.919783   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:16.094071   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:16.332982   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:16.420635   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:16.596117   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:16.832581   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:16.918622   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:17.094831   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:17.331470   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:17.419656   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:17.594098   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:17.832476   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:17.918799   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:18.289234   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:18.332999   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:18.419337   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:18.593958   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:18.831972   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:18.918707   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:19.093792   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:19.332292   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:19.420611   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:19.593588   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:19.831910   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:19.918861   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:20.093950   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:20.332717   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:20.436822   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:20.595463   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:20.832311   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:20.935013   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:21.096203   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:21.331541   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:21.422657   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:21.598324   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:21.831455   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:21.919629   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:22.096231   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:22.331596   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:22.418599   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:22.609832   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:22.833773   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:22.935924   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:23.096601   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:23.340106   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:23.427732   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:23.594048   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:23.832622   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:23.919229   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:24.093122   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:24.331790   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:24.418786   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:24.593043   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:24.833183   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:24.918861   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:25.094139   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:25.334542   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:25.576086   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:25.593252   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:25.832880   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:25.918530   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:26.092931   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:26.332596   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:26.419989   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:26.594948   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:26.932785   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:26.935292   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:27.093377   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:27.332423   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:27.421072   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:27.593187   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:27.832254   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:27.919838   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:28.093230   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:28.392143   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:28.687547   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:28.689317   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:28.832925   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:28.918921   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:29.100236   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:29.332915   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:29.420261   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:29.600887   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:29.833156   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:29.920177   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:30.093272   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:30.331488   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:30.418456   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:30.592224   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:30.832145   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:30.943704   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:31.134913   15521 kapi.go:107] duration metric: took 1m16.046381203s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0923 23:40:31.332777   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:31.418878   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:31.831745   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:31.933578   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:32.332878   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:32.418865   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:32.831981   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:32.919636   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:33.331958   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:33.433535   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:33.834818   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:34.031559   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:34.332506   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:34.419243   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:34.832458   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:34.919551   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:35.332538   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:35.419333   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:35.831854   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:35.919140   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:36.332139   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:36.419385   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:36.831428   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:36.933407   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:37.332127   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:37.419248   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:37.834890   15521 kapi.go:107] duration metric: took 1m19.006594431s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0923 23:40:37.837227   15521 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-823099 cluster.
	I0923 23:40:37.838804   15521 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0923 23:40:37.840390   15521 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
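(Editor's note) The three gcp-auth messages above describe how to opt a pod out of the credential mount: give it a label whose key is gcp-auth-skip-secret. A minimal sketch of that, assuming client-go; the function name, pod name, image, and label value are illustrative (the message only names the key), and this is not minikube or gcp-auth source code:

	// Assumed imports:
	//   "context"
	//   corev1 "k8s.io/api/core/v1"
	//   metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	//   "k8s.io/client-go/kubernetes"
	func createPodWithoutGCPCreds(ctx context.Context, cs *kubernetes.Clientset) error {
		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name: "no-gcp-creds", // hypothetical name
				// Presence of this label key is what signals the gcp-auth webhook to
				// skip the credential mount; the value used here is arbitrary.
				Labels: map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{
					Name:    "app",
					Image:   "gcr.io/k8s-minikube/busybox", // image already used elsewhere in this report
					Command: []string{"sleep", "3600"},
				}},
			},
		}
		_, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{})
		return err
	}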
	I0923 23:40:37.936294   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:38.419888   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:38.918688   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:39.419929   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:39.918705   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:40.419944   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:40.919268   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:41.418798   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:41.920203   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:42.418923   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:42.920850   15521 kapi.go:107] duration metric: took 1m25.506584753s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0923 23:40:42.922731   15521 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner, ingress-dns, storage-provisioner-rancher, cloud-spanner, metrics-server, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0923 23:40:42.924695   15521 addons.go:510] duration metric: took 1m35.240916092s for enable addons: enabled=[nvidia-device-plugin storage-provisioner ingress-dns storage-provisioner-rancher cloud-spanner metrics-server yakd default-storageclass inspektor-gadget volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0923 23:40:42.924745   15521 start.go:246] waiting for cluster config update ...
	I0923 23:40:42.924763   15521 start.go:255] writing updated cluster config ...
	I0923 23:40:42.925016   15521 ssh_runner.go:195] Run: rm -f paused
	I0923 23:40:42.977325   15521 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0923 23:40:42.979331   15521 out.go:177] * Done! kubectl is now configured to use "addons-823099" cluster and "default" namespace by default
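(Editor's note) The long run of kapi.go:96 "waiting for pod" lines above is minikube polling each addon's pods by label selector until they leave Pending, after which the kapi.go:107 duration metric is logged. A minimal sketch of that polling pattern, assuming client-go and a kubeconfig at the default location; the namespace, selector, timeout, and poll interval are illustrative, and this is not minikube's actual kapi.go implementation:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForLabel polls pods matching selector in ns until at least one exists
	// and none are still Pending, or until ctx expires.
	func waitForLabel(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
		for {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			pending := 0
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodPending {
					pending++
				}
			}
			if len(pods.Items) > 0 && pending == 0 {
				return nil
			}
			fmt.Printf("waiting for pod %q, pending: %d\n", selector, pending)
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(500 * time.Millisecond):
			}
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		if err := waitForLabel(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=registry"); err != nil {
			panic(err)
		}
	}

Under these assumptions the loop prints one line per poll, mirroring the cadence of the log above, and returns once every matching pod has progressed past Pending.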
	
	
	==> CRI-O <==
	Sep 23 23:49:58 addons-823099 crio[662]: time="2024-09-23 23:49:58.806070738Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727135398806032657,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519755,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=23af343f-318e-4ff3-8fe8-16e5e4d6c93e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 23:49:58 addons-823099 crio[662]: time="2024-09-23 23:49:58.806695987Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e14e2ea8-f2fe-450e-93e6-18c610667c04 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 23:49:58 addons-823099 crio[662]: time="2024-09-23 23:49:58.806814188Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e14e2ea8-f2fe-450e-93e6-18c610667c04 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 23:49:58 addons-823099 crio[662]: time="2024-09-23 23:49:58.807662284Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0aefb55ebf76d535cf063c470a33e1e165bea82506b8c5bcb512b10d908b6bfe,PodSandboxId:131db680f67296cb5271af15e1ba511e41c31e676930568393fa6e6881eef502,Metadata:&ContainerMetadata{Name:task-pv-container,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3,State:CONTAINER_EXITED,CreatedAt:1727135390827270962,Labels:map[string]string{io.kubernetes.container.name: task-pv-container,io.kubernetes.pod.name: task-pv-pod-restore,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e22fe967-2034-4b85-9850-7c0a8c941990,},Annotations:map[string]string{io.kubernetes.container.hash: 44be65c1,io.kubernetes.container.ports: [{\"name\
":\"http-server\",\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:443dcffcbe7ea21891de78e0ea8f835d6c6a0f5377e019d2400fe1e2703d698f,PodSandboxId:12f8c57366fad50e7522c7cab6ec51d901b7a2d135e0737347a8f13766dc5600,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1727135360479153245,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-eab7f679-3b16-4b54-94e5-e626a1dcbb7e,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 3a8d1832-b4c9-4b68-8294-f41f233
e92f8,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ada8a1f0ac99d39592e0cd6a63f64a373b1d7c0843a44045088ee28df66a987,PodSandboxId:20f1d79ffafdbbc032cccc82c590aeeaed718eb5270212b2506e6d6ad5143602,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:9186e638ccc30c5d1a2efd5a2cd632f49bb5013f164f6f85c48ed6fce90fe38f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6fd955f66c231c1a946653170d096a28ac2b2052a02080c0b84ec082a07f7d12,State:CONTAINER_EXITED,CreatedAt:1727135357219410682,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a9b9645-25c2-4e5f-a219-e0b27f57ae41,},Anno
tations:map[string]string{io.kubernetes.container.hash: dd3595ac,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c9e6a222355478d417dc4174264567a5044224d0d4da5c5a92404d84f223ead,PodSandboxId:ca9c2b815acf36efe5f846a62df18958f21f3f0017981c8acb3959aa24f9de02,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1727135351322035615,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-create-pvc-eab7f679-3b16-4b54-94e5-e626a1dcbb7e,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.
uid: 2cde65e7-b4a5-4e27-93ae-648a05fc7524,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca851386b89217de87dc1d2bcf0ed6ab4ebfd76c25e38a2bafbf570369df6ae9,PodSandboxId:d09d8eceb3bd2639e7415f16d245be632ae86ee3f3066a0f6d4977def5ecd505,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1727134842488676479,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-l4gsf,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: de45bd42-06e1-4387-ba3f-4d6a477b4823,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e4092638bd31f1b212f4a6f84fd14814fd80e24ec53040c8ea635bfe7624c16,PodSandboxId:d09d8eceb3bd2639e7415f16d245be632ae86ee3f3066a0f6d4977def5ecd505,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1727134840196042664,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpath
plugin-l4gsf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de45bd42-06e1-4387-ba3f-4d6a477b4823,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c517cecb3f4cc54ea08b095b21f620ec1eb9f5f631e3633050e6fbd44f7e7a95,PodSandboxId:d09d8eceb3bd2639e7415f16d245be632ae86ee3f3066a0f6d4977def5ecd505,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1727134837758673960,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.ku
bernetes.pod.name: csi-hostpathplugin-l4gsf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de45bd42-06e1-4387-ba3f-4d6a477b4823,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74b1f1c0ea595ad9a254db104eeae56801bee662d3a36f586d4eadc290bd61ab,PodSandboxId:ef699a0a58d26bbb175080a9d5d1552d3ca4ad0ef72d3b7f2f3f042548a8de86,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727134836524114147,Labels:map[string]string{io.kubernetes.container.name:
gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-5p9gw,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: a4541728-f355-433e-92a7-e435eb2600c2,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c74e314d14a8baefda0257b71d7da0a377f50722e95533c0ff84ebf524bd162f,PodSandboxId:d09d8eceb3bd2639e7415f16d245be632ae86ee3f3066a0f6d4977def5ecd505,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTA
INER_RUNNING,CreatedAt:1727134831094246210,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-l4gsf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de45bd42-06e1-4387-ba3f-4d6a477b4823,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9c92c116a3db62103d76eeee96f945e98b377332915558963435b3c40a4249a,PodSandboxId:0e2e1c3d9e9ff08c3082c50dc78ca95afbea95c76a780c02e2257fb3b9df5c28,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1727134829435623480,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-kl24t,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 28f50294-29c5-4d74-8c7e-4b7b748d87b1,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},}
,&Container{Id:aa6854d34ee70018ee48f7483bf00a1cd755f52bb87542322748d86b430e6fbf,PodSandboxId:d09d8eceb3bd2639e7415f16d245be632ae86ee3f3066a0f6d4977def5ecd505,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1727134822514619114,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-l4gsf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de45bd42-06e1-4387-ba3f-4d6a477b4823,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePol
icy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e55047782c727ef80dce7d78f52d28752f377ef970c4027e3ba87678961abb85,PodSandboxId:2fc2cf9266e18a9762efbdbde96764c3249b621a0f7eed707828fe96274059c0,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1727134820156510333,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e357dfe7-127b-4f18-90e3-beb7846c05cd,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36371c618297cb54f17070019ed5ece0b2b387fd0630a45958afebe5798ed517,PodSandboxId:8177cf10ec86908267c9d650a38795f87e5b2b8b37d3724134afae9d20b5789a,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1727134818399550749,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad0efe3a-8c72-46db-9ed8-35a46fba41f1,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb66f982718621a9821182b6ce439422220635b1cbb9a385483f77c022ccc46d,PodSandboxId:d09d8eceb3bd2639e7415f16d245be632ae86ee3f3066a0f6d4977def5ecd505,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1727134816170110045,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-l4gsf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de45bd42-06e1-4387-ba3f-4d6a477b4823,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,i
o.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3180bce30b58fa484e704e1271b3982fcb82d37a396af06006ef8ddc23309798,PodSandboxId:232a94536483fe56f0604193e6d4655365d8fbd720631b8f52ccf52febbd7429,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1727134812908451437,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-2lpn2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea26c65-7a9a-4d74-af4b-8f23ecc36ba
b,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed612d99fc10b0bc92a190b95d5453e1d9799dc7e80c841e54b6e22053265cc6,PodSandboxId:a76adf57736ddf1af8513e5227a19242e1483af07c4523a435e6aadda7e8fc89,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1727134812778875423,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-9mcdf,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: bc592ae3-b020-465c-b0e9-c739e2321360,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78216b74033c5ca95888e4a1fbb6bd5b02dd521f016c148a02ec8c90a1893cad,PodSandboxId:1d959a02614bbf4f31848f8b11efc2ce39b4661ef1b1324246f1861a3c26b880,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727134810798062438,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patc
h-2wnc4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 64ca1527-9535-42ac-98cf-e6f4a1e27173,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dd0ccb8e4dd85d55c486f11d6fe984cf5dc4d2303c695bc5b05525770f500b2,PodSandboxId:4b432a29341336cef179e9c1e5957e66882b4dded3c4ddc68ba3121a12d6b86c,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727134810639287949,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.p
od.name: ingress-nginx-admission-create-t6hw4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 45b6e4cc-f5cc-4955-9bdb-d0275d9f6354,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85441620b827e8c2287b9c2c68b8fbbb34c6e6a9a00e58eb7ccc24ed4da035d9,PodSandboxId:26c1ce392df493c7ffc0167abb55e5d2faca8864e5238dcd7ab4c1cd7821d3c9,Metadata:&ContainerMetadata{Name:registry,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/registry@sha256:5e8c7f954d64eb89a98a3f84b6dd1e1f4a9cf3d25e41575dd0a96d3e3363cba7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:75ef5b734af47dc41ff2fb442f287ee08c7da31dddb3759616a8f693f0f346a0,State:CONTAINER_EXITED,CreatedAt:1727134808667964433,Labels:map[string]string{io.kubernetes.container.name: regist
ry,io.kubernetes.pod.name: registry-66c9cd494c-h5ntb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67fc5fdd-03ae-44c9-8e43-0042bd142349,},Annotations:map[string]string{io.kubernetes.container.hash: 49fa49ac,io.kubernetes.container.ports: [{\"containerPort\":5000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31dc2c56dfc83a09a99a235659cde3008c7bd9245e286509b7ce4dbb2c714bdb,PodSandboxId:d087e6dc5feafdf527d3a58b991f89ba8a8665fc43e8886c6bead6f66b10d29c,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:be105fc4b12849783aa20d987a35b86ed5296669595f8a7b2d79ad0cd8e193bf,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2ebbaeeba1bd01a80097b8a834ff2a86498d89f3ea11470c0f0ba298931b7cb,Stat
e:CONTAINER_EXITED,CreatedAt:1727134795262431176,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-5b584cc74-gtr2z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 77f56e80-4bd0-46bd-a36c-663eccd9d000,},Annotations:map[string]string{io.kubernetes.container.hash: fda6bb5,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6c45d33f3679488d83d327a8f47c1bfb699c4b85d227cedef6b502629f4c13,PodSandboxId:c24a2665a62ab69af77896e4f6cdfa80944931f16aa279c745fac778bf371209,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:
78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727134779803969605,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-gpzsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5937c63-7f30-477a-a36e-e7e6cb8c64e5,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bcdf5e1463fbbd6c72365fb6f4f0b12b8d0aa1cb56e559f9e8d68252442f6a0,PodSandboxId:eecd94ece05b36ebac3c92b4c96c6dead0f0d1424ecd1b684942627ef8e9
b520,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1727134763756022740,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1194cadb-80b1-4fad-b99a-0afbc0be0b40,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9
490eb926210d595e48349ae8ba44feb029a56e6c83d0e8f8cfad8e8c1d9196b,PodSandboxId:8d3fbd5782869ef1bd266d8984a9cbedcd8fab60b6229f2ab72750e7e22e081e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727134755012677707,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25d0944a-e6b3-429b-bb81-22672fb100bd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fec583ae4c3f
ad78cd32df65311f48a1cd55dcc8d1d6b99f649cd4ca93893de,PodSandboxId:743b6ef05346bc2b74363f050e3be9e406acedab4e81d88a5b62118373703ea4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727134751054708895,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-h4m6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5a66fda-ace2-434e-82fb-3d9d66fac49f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a92c92c6afdd44516af6d2f0c2ba0c60c100397592f176560d683b0e5c58bbd,PodSandboxId:e2cf37b2ed9608a016a28531c7475e72b8a57c4abd9862b68e3c5c2777ad76ed,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727134749291914697,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pgclm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d47a25a-ab05-4197-975a-88bb7e1f9834,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:474072cb31ae52ea361c41a97e7a53faf47c3b8ab138749903f3d96750c6fbe2,PodSandboxId:4c45c732428c2d481624384e0b5a0d5cc14eeb3539e67aa0282e15d808a2d141,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727134736753032764,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-823099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0522b4889e5d09bd02bded87708cffa,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f68819f7bf59d41865dee2cade7e270c9133c2249756217428544bee43d41ba6,PodSandboxId:9992b2a049a9e5db7c453409b74739e9d45cb2ddc1916561d617bc92ca4abc8d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727134736756693447,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-823099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd9c00eb951fdfb5b859f5c493b5daeb,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f9a68d35a007d5e1022596b9270e5f5f9735806aa3bfb8c01b9c7eca1ee01d7,PodSandboxId:6600118fb556ee2332595b87d6131714a2992ff33108a3d8ff1ede5fa6031a1a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727134736749151930,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-823099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ed7b82912b8c176021821ce705d70e9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61a194a33123eba4aa22b6f557d4ea66df750535623ed92cd3efa6db3df98960,PodSandboxId:858af16c1b9748a0a50df5d32921302b8034b3b19aa9b08a44e91402f5f24332,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727134736741313468,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-823099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 448360a30c028a8b320f55cec49cc907,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e14e2ea8-f2fe-450e-93e6-18c610667c04 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 23:49:58 addons-823099 crio[662]: time="2024-09-23 23:49:58.934946832Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=acb5f1b2-4c94-4890-8496-e8e3008af069 name=/runtime.v1.RuntimeService/Version
	Sep 23 23:49:58 addons-823099 crio[662]: time="2024-09-23 23:49:58.935032062Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=acb5f1b2-4c94-4890-8496-e8e3008af069 name=/runtime.v1.RuntimeService/Version
	Sep 23 23:49:58 addons-823099 crio[662]: time="2024-09-23 23:49:58.936254083Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=55531bb7-dbc5-4c43-8d1f-914724c5f5f5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 23:49:58 addons-823099 crio[662]: time="2024-09-23 23:49:58.937297789Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727135398937270823,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519755,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=55531bb7-dbc5-4c43-8d1f-914724c5f5f5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 23:49:58 addons-823099 crio[662]: time="2024-09-23 23:49:58.937967491Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=433e6d84-2759-41cd-9641-ff703195d4aa name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 23:49:58 addons-823099 crio[662]: time="2024-09-23 23:49:58.938040669Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=433e6d84-2759-41cd-9641-ff703195d4aa name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 23:49:58 addons-823099 crio[662]: time="2024-09-23 23:49:58.938587108Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0aefb55ebf76d535cf063c470a33e1e165bea82506b8c5bcb512b10d908b6bfe,PodSandboxId:131db680f67296cb5271af15e1ba511e41c31e676930568393fa6e6881eef502,Metadata:&ContainerMetadata{Name:task-pv-container,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3,State:CONTAINER_EXITED,CreatedAt:1727135390827270962,Labels:map[string]string{io.kubernetes.container.name: task-pv-container,io.kubernetes.pod.name: task-pv-pod-restore,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e22fe967-2034-4b85-9850-7c0a8c941990,},Annotations:map[string]string{io.kubernetes.container.hash: 44be65c1,io.kubernetes.container.ports: [{\"name\
":\"http-server\",\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:443dcffcbe7ea21891de78e0ea8f835d6c6a0f5377e019d2400fe1e2703d698f,PodSandboxId:12f8c57366fad50e7522c7cab6ec51d901b7a2d135e0737347a8f13766dc5600,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1727135360479153245,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-eab7f679-3b16-4b54-94e5-e626a1dcbb7e,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 3a8d1832-b4c9-4b68-8294-f41f233
e92f8,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ada8a1f0ac99d39592e0cd6a63f64a373b1d7c0843a44045088ee28df66a987,PodSandboxId:20f1d79ffafdbbc032cccc82c590aeeaed718eb5270212b2506e6d6ad5143602,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:9186e638ccc30c5d1a2efd5a2cd632f49bb5013f164f6f85c48ed6fce90fe38f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6fd955f66c231c1a946653170d096a28ac2b2052a02080c0b84ec082a07f7d12,State:CONTAINER_EXITED,CreatedAt:1727135357219410682,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a9b9645-25c2-4e5f-a219-e0b27f57ae41,},Anno
tations:map[string]string{io.kubernetes.container.hash: dd3595ac,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c9e6a222355478d417dc4174264567a5044224d0d4da5c5a92404d84f223ead,PodSandboxId:ca9c2b815acf36efe5f846a62df18958f21f3f0017981c8acb3959aa24f9de02,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1727135351322035615,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-create-pvc-eab7f679-3b16-4b54-94e5-e626a1dcbb7e,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.
uid: 2cde65e7-b4a5-4e27-93ae-648a05fc7524,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca851386b89217de87dc1d2bcf0ed6ab4ebfd76c25e38a2bafbf570369df6ae9,PodSandboxId:d09d8eceb3bd2639e7415f16d245be632ae86ee3f3066a0f6d4977def5ecd505,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1727134842488676479,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-l4gsf,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: de45bd42-06e1-4387-ba3f-4d6a477b4823,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e4092638bd31f1b212f4a6f84fd14814fd80e24ec53040c8ea635bfe7624c16,PodSandboxId:d09d8eceb3bd2639e7415f16d245be632ae86ee3f3066a0f6d4977def5ecd505,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1727134840196042664,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpath
plugin-l4gsf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de45bd42-06e1-4387-ba3f-4d6a477b4823,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c517cecb3f4cc54ea08b095b21f620ec1eb9f5f631e3633050e6fbd44f7e7a95,PodSandboxId:d09d8eceb3bd2639e7415f16d245be632ae86ee3f3066a0f6d4977def5ecd505,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1727134837758673960,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.ku
bernetes.pod.name: csi-hostpathplugin-l4gsf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de45bd42-06e1-4387-ba3f-4d6a477b4823,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74b1f1c0ea595ad9a254db104eeae56801bee662d3a36f586d4eadc290bd61ab,PodSandboxId:ef699a0a58d26bbb175080a9d5d1552d3ca4ad0ef72d3b7f2f3f042548a8de86,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727134836524114147,Labels:map[string]string{io.kubernetes.container.name:
gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-5p9gw,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: a4541728-f355-433e-92a7-e435eb2600c2,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c74e314d14a8baefda0257b71d7da0a377f50722e95533c0ff84ebf524bd162f,PodSandboxId:d09d8eceb3bd2639e7415f16d245be632ae86ee3f3066a0f6d4977def5ecd505,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTA
INER_RUNNING,CreatedAt:1727134831094246210,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-l4gsf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de45bd42-06e1-4387-ba3f-4d6a477b4823,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9c92c116a3db62103d76eeee96f945e98b377332915558963435b3c40a4249a,PodSandboxId:0e2e1c3d9e9ff08c3082c50dc78ca95afbea95c76a780c02e2257fb3b9df5c28,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1727134829435623480,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-kl24t,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 28f50294-29c5-4d74-8c7e-4b7b748d87b1,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},}
,&Container{Id:aa6854d34ee70018ee48f7483bf00a1cd755f52bb87542322748d86b430e6fbf,PodSandboxId:d09d8eceb3bd2639e7415f16d245be632ae86ee3f3066a0f6d4977def5ecd505,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1727134822514619114,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-l4gsf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de45bd42-06e1-4387-ba3f-4d6a477b4823,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePol
icy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e55047782c727ef80dce7d78f52d28752f377ef970c4027e3ba87678961abb85,PodSandboxId:2fc2cf9266e18a9762efbdbde96764c3249b621a0f7eed707828fe96274059c0,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1727134820156510333,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e357dfe7-127b-4f18-90e3-beb7846c05cd,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36371c618297cb54f17070019ed5ece0b2b387fd0630a45958afebe5798ed517,PodSandboxId:8177cf10ec86908267c9d650a38795f87e5b2b8b37d3724134afae9d20b5789a,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1727134818399550749,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad0efe3a-8c72-46db-9ed8-35a46fba41f1,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb66f982718621a9821182b6ce439422220635b1cbb9a385483f77c022ccc46d,PodSandboxId:d09d8eceb3bd2639e7415f16d245be632ae86ee3f3066a0f6d4977def5ecd505,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1727134816170110045,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-l4gsf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de45bd42-06e1-4387-ba3f-4d6a477b4823,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,i
o.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3180bce30b58fa484e704e1271b3982fcb82d37a396af06006ef8ddc23309798,PodSandboxId:232a94536483fe56f0604193e6d4655365d8fbd720631b8f52ccf52febbd7429,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1727134812908451437,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-2lpn2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea26c65-7a9a-4d74-af4b-8f23ecc36ba
b,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed612d99fc10b0bc92a190b95d5453e1d9799dc7e80c841e54b6e22053265cc6,PodSandboxId:a76adf57736ddf1af8513e5227a19242e1483af07c4523a435e6aadda7e8fc89,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1727134812778875423,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-9mcdf,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: bc592ae3-b020-465c-b0e9-c739e2321360,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78216b74033c5ca95888e4a1fbb6bd5b02dd521f016c148a02ec8c90a1893cad,PodSandboxId:1d959a02614bbf4f31848f8b11efc2ce39b4661ef1b1324246f1861a3c26b880,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727134810798062438,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patc
h-2wnc4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 64ca1527-9535-42ac-98cf-e6f4a1e27173,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dd0ccb8e4dd85d55c486f11d6fe984cf5dc4d2303c695bc5b05525770f500b2,PodSandboxId:4b432a29341336cef179e9c1e5957e66882b4dded3c4ddc68ba3121a12d6b86c,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727134810639287949,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.p
od.name: ingress-nginx-admission-create-t6hw4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 45b6e4cc-f5cc-4955-9bdb-d0275d9f6354,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85441620b827e8c2287b9c2c68b8fbbb34c6e6a9a00e58eb7ccc24ed4da035d9,PodSandboxId:26c1ce392df493c7ffc0167abb55e5d2faca8864e5238dcd7ab4c1cd7821d3c9,Metadata:&ContainerMetadata{Name:registry,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/registry@sha256:5e8c7f954d64eb89a98a3f84b6dd1e1f4a9cf3d25e41575dd0a96d3e3363cba7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:75ef5b734af47dc41ff2fb442f287ee08c7da31dddb3759616a8f693f0f346a0,State:CONTAINER_EXITED,CreatedAt:1727134808667964433,Labels:map[string]string{io.kubernetes.container.name: regist
ry,io.kubernetes.pod.name: registry-66c9cd494c-h5ntb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67fc5fdd-03ae-44c9-8e43-0042bd142349,},Annotations:map[string]string{io.kubernetes.container.hash: 49fa49ac,io.kubernetes.container.ports: [{\"containerPort\":5000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31dc2c56dfc83a09a99a235659cde3008c7bd9245e286509b7ce4dbb2c714bdb,PodSandboxId:d087e6dc5feafdf527d3a58b991f89ba8a8665fc43e8886c6bead6f66b10d29c,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:be105fc4b12849783aa20d987a35b86ed5296669595f8a7b2d79ad0cd8e193bf,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2ebbaeeba1bd01a80097b8a834ff2a86498d89f3ea11470c0f0ba298931b7cb,Stat
e:CONTAINER_EXITED,CreatedAt:1727134795262431176,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-5b584cc74-gtr2z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 77f56e80-4bd0-46bd-a36c-663eccd9d000,},Annotations:map[string]string{io.kubernetes.container.hash: fda6bb5,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6c45d33f3679488d83d327a8f47c1bfb699c4b85d227cedef6b502629f4c13,PodSandboxId:c24a2665a62ab69af77896e4f6cdfa80944931f16aa279c745fac778bf371209,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:
78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727134779803969605,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-gpzsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5937c63-7f30-477a-a36e-e7e6cb8c64e5,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bcdf5e1463fbbd6c72365fb6f4f0b12b8d0aa1cb56e559f9e8d68252442f6a0,PodSandboxId:eecd94ece05b36ebac3c92b4c96c6dead0f0d1424ecd1b684942627ef8e9
b520,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1727134763756022740,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1194cadb-80b1-4fad-b99a-0afbc0be0b40,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9
490eb926210d595e48349ae8ba44feb029a56e6c83d0e8f8cfad8e8c1d9196b,PodSandboxId:8d3fbd5782869ef1bd266d8984a9cbedcd8fab60b6229f2ab72750e7e22e081e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727134755012677707,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25d0944a-e6b3-429b-bb81-22672fb100bd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fec583ae4c3f
ad78cd32df65311f48a1cd55dcc8d1d6b99f649cd4ca93893de,PodSandboxId:743b6ef05346bc2b74363f050e3be9e406acedab4e81d88a5b62118373703ea4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727134751054708895,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-h4m6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5a66fda-ace2-434e-82fb-3d9d66fac49f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a92c92c6afdd44516af6d2f0c2ba0c60c100397592f176560d683b0e5c58bbd,PodSandboxId:e2cf37b2ed9608a016a28531c7475e72b8a57c4abd9862b68e3c5c2777ad76ed,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727134749291914697,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pgclm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d47a25a-ab05-4197-975a-88bb7e1f9834,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:474072cb31ae52ea361c41a97e7a53faf47c3b8ab138749903f3d96750c6fbe2,PodSandboxId:4c45c732428c2d481624384e0b5a0d5cc14eeb3539e67aa0282e15d808a2d141,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727134736753032764,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-823099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0522b4889e5d09bd02bded87708cffa,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f68819f7bf59d41865dee2cade7e270c9133c2249756217428544bee43d41ba6,PodSandboxId:9992b2a049a9e5db7c453409b74739e9d45cb2ddc1916561d617bc92ca4abc8d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727134736756693447,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-823099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd9c00eb951fdfb5b859f5c493b5daeb,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f9a68d35a007d5e1022596b9270e5f5f9735806aa3bfb8c01b9c7eca1ee01d7,PodSandboxId:6600118fb556ee2332595b87d6131714a2992ff33108a3d8ff1ede5fa6031a1a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727134736749151930,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-823099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ed7b82912b8c176021821ce705d70e9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61a194a33123eba4aa22b6f557d4ea66df750535623ed92cd3efa6db3df98960,PodSandboxId:858af16c1b9748a0a50df5d32921302b8034b3b19aa9b08a44e91402f5f24332,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727134736741313468,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-823099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 448360a30c028a8b320f55cec49cc907,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=433e6d84-2759-41cd-9641-ff703195d4aa name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	0aefb55ebf76d       docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3                                              8 seconds ago       Exited              task-pv-container                        0                   131db680f6729       task-pv-pod-restore
	443dcffcbe7ea       a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824                                                                             38 seconds ago      Exited              helper-pod                               0                   12f8c57366fad       helper-pod-delete-pvc-eab7f679-3b16-4b54-94e5-e626a1dcbb7e
	0ada8a1f0ac99       docker.io/library/busybox@sha256:9186e638ccc30c5d1a2efd5a2cd632f49bb5013f164f6f85c48ed6fce90fe38f                                            41 seconds ago      Exited              busybox                                  0                   20f1d79ffafdb       test-local-path
	9c9e6a2223554       docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee                                            47 seconds ago      Exited              helper-pod                               0                   ca9c2b815acf3       helper-pod-create-pvc-eab7f679-3b16-4b54-94e5-e626a1dcbb7e
	ca851386b8921       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          9 minutes ago       Running             csi-snapshotter                          0                   d09d8eceb3bd2       csi-hostpathplugin-l4gsf
	7e4092638bd31       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          9 minutes ago       Running             csi-provisioner                          0                   d09d8eceb3bd2       csi-hostpathplugin-l4gsf
	c517cecb3f4cc       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            9 minutes ago       Running             liveness-probe                           0                   d09d8eceb3bd2       csi-hostpathplugin-l4gsf
	74b1f1c0ea595       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                                 9 minutes ago       Running             gcp-auth                                 0                   ef699a0a58d26       gcp-auth-89d5ffd79-5p9gw
	c74e314d14a8b       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           9 minutes ago       Running             hostpath                                 0                   d09d8eceb3bd2       csi-hostpathplugin-l4gsf
	f9c92c116a3db       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6                             9 minutes ago       Running             controller                               0                   0e2e1c3d9e9ff       ingress-nginx-controller-bc57996ff-kl24t
	aa6854d34ee70       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                9 minutes ago       Running             node-driver-registrar                    0                   d09d8eceb3bd2       csi-hostpathplugin-l4gsf
	e55047782c727       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              9 minutes ago       Running             csi-resizer                              0                   2fc2cf9266e18       csi-hostpath-resizer-0
	36371c618297c       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             9 minutes ago       Running             csi-attacher                             0                   8177cf10ec869       csi-hostpath-attacher-0
	cb66f98271862       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   9 minutes ago       Running             csi-external-health-monitor-controller   0                   d09d8eceb3bd2       csi-hostpathplugin-l4gsf
	3180bce30b58f       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      9 minutes ago       Running             volume-snapshot-controller               0                   232a94536483f       snapshot-controller-56fcc65765-2lpn2
	ed612d99fc10b       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      9 minutes ago       Running             volume-snapshot-controller               0                   a76adf57736dd       snapshot-controller-56fcc65765-9mcdf
	78216b74033c5       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012                   9 minutes ago       Exited              patch                                    0                   1d959a02614bb       ingress-nginx-admission-patch-2wnc4
	2dd0ccb8e4dd8       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012                   9 minutes ago       Exited              create                                   0                   4b432a2934133       ingress-nginx-admission-create-t6hw4
	85441620b827e       docker.io/library/registry@sha256:5e8c7f954d64eb89a98a3f84b6dd1e1f4a9cf3d25e41575dd0a96d3e3363cba7                                           9 minutes ago       Exited              registry                                 0                   26c1ce392df49       registry-66c9cd494c-h5ntb
	31dc2c56dfc83       gcr.io/cloud-spanner-emulator/emulator@sha256:be105fc4b12849783aa20d987a35b86ed5296669595f8a7b2d79ad0cd8e193bf                               10 minutes ago      Exited              cloud-spanner-emulator                   0                   d087e6dc5feaf       cloud-spanner-emulator-5b584cc74-gtr2z
	ad6c45d33f367       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a                        10 minutes ago      Running             metrics-server                           0                   c24a2665a62ab       metrics-server-84c5f94fbc-gpzsm
	9bcdf5e1463fb       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab                             10 minutes ago      Running             minikube-ingress-dns                     0                   eecd94ece05b3       kube-ingress-dns-minikube
	9490eb926210d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             10 minutes ago      Running             storage-provisioner                      0                   8d3fbd5782869       storage-provisioner
	4fec583ae4c3f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                                             10 minutes ago      Running             coredns                                  0                   743b6ef05346b       coredns-7c65d6cfc9-h4m6q
	8a92c92c6afdd       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                                             10 minutes ago      Running             kube-proxy                               0                   e2cf37b2ed960       kube-proxy-pgclm
	f68819f7bf59d       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                                             11 minutes ago      Running             kube-controller-manager                  0                   9992b2a049a9e       kube-controller-manager-addons-823099
	474072cb31ae5       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                                             11 minutes ago      Running             kube-apiserver                           0                   4c45c732428c2       kube-apiserver-addons-823099
	9f9a68d35a007       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                                             11 minutes ago      Running             etcd                                     0                   6600118fb556e       etcd-addons-823099
	61a194a33123e       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                                             11 minutes ago      Running             kube-scheduler                           0                   858af16c1b974       kube-scheduler-addons-823099
	
	
	==> coredns [4fec583ae4c3fad78cd32df65311f48a1cd55dcc8d1d6b99f649cd4ca93893de] <==
	[INFO] 10.244.0.5:51161 - 56746 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000030131s
	[INFO] 10.244.0.5:59845 - 51818 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000171416s
	[INFO] 10.244.0.5:59845 - 28005 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00029495s
	[INFO] 10.244.0.5:48681 - 19317 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000123377s
	[INFO] 10.244.0.5:48681 - 63336 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000044081s
	[INFO] 10.244.0.5:58061 - 30895 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00008088s
	[INFO] 10.244.0.5:58061 - 32689 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000035396s
	[INFO] 10.244.0.5:38087 - 48114 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000035784s
	[INFO] 10.244.0.5:38087 - 54000 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000095969s
	[INFO] 10.244.0.5:49683 - 11480 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000140959s
	[INFO] 10.244.0.5:49683 - 23003 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000101135s
	[INFO] 10.244.0.5:43005 - 38126 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000081593s
	[INFO] 10.244.0.5:43005 - 47596 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000124387s
	[INFO] 10.244.0.5:55804 - 41138 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000171789s
	[INFO] 10.244.0.5:55804 - 44976 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000182833s
	[INFO] 10.244.0.5:43069 - 16307 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000089434s
	[INFO] 10.244.0.5:43069 - 51633 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000032833s
	[INFO] 10.244.0.21:46303 - 62968 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000606757s
	[INFO] 10.244.0.21:36097 - 35733 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000696905s
	[INFO] 10.244.0.21:56566 - 45315 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000136557s
	[INFO] 10.244.0.21:57939 - 56430 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000207858s
	[INFO] 10.244.0.21:51280 - 40828 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00010631s
	[INFO] 10.244.0.21:50116 - 49864 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000098666s
	[INFO] 10.244.0.21:45441 - 35920 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001078608s
	[INFO] 10.244.0.21:48980 - 17136 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.00159345s
	
	
	==> describe nodes <==
	Name:               addons-823099
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-823099
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c
	                    minikube.k8s.io/name=addons-823099
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T23_39_03_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-823099
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-823099"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 23:39:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-823099
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 23:49:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 23:49:45 +0000   Mon, 23 Sep 2024 23:38:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 23:49:45 +0000   Mon, 23 Sep 2024 23:38:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 23:49:45 +0000   Mon, 23 Sep 2024 23:38:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 23:49:45 +0000   Mon, 23 Sep 2024 23:39:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.29
	  Hostname:    addons-823099
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 8a6fccd6b081441ba6dbe75955b7b20d
	  System UUID:                8a6fccd6-b081-441b-a6db-e75955b7b20d
	  Boot ID:                    cf9ab547-5350-4131-950e-b30d60dc335d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (17 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m16s
	  gcp-auth                    gcp-auth-89d5ffd79-5p9gw                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-kl24t    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-h4m6q                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     10m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 csi-hostpathplugin-l4gsf                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-addons-823099                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         10m
	  kube-system                 kube-apiserver-addons-823099                250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-addons-823099       200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-pgclm                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-addons-823099                100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 metrics-server-84c5f94fbc-gpzsm             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         10m
	  kube-system                 snapshot-controller-56fcc65765-2lpn2        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 snapshot-controller-56fcc65765-9mcdf        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m (x6 over 11m)  kubelet          Node addons-823099 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x6 over 11m)  kubelet          Node addons-823099 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x5 over 11m)  kubelet          Node addons-823099 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                kubelet          Node addons-823099 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                kubelet          Node addons-823099 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                kubelet          Node addons-823099 status is now: NodeHasSufficientPID
	  Normal  NodeReady                10m                kubelet          Node addons-823099 status is now: NodeReady
	  Normal  RegisteredNode           10m                node-controller  Node addons-823099 event: Registered Node addons-823099 in Controller
	
	
	==> dmesg <==
	[  +0.098296] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.257718] systemd-fstab-generator[1314]: Ignoring "noauto" option for root device
	[  +0.145569] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.014575] kauditd_printk_skb: 132 callbacks suppressed
	[  +5.264434] kauditd_printk_skb: 126 callbacks suppressed
	[  +5.568231] kauditd_printk_skb: 64 callbacks suppressed
	[Sep23 23:40] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.362606] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.102278] kauditd_printk_skb: 33 callbacks suppressed
	[  +8.160095] kauditd_printk_skb: 56 callbacks suppressed
	[  +7.116043] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.170005] kauditd_printk_skb: 39 callbacks suppressed
	[  +7.039714] kauditd_printk_skb: 15 callbacks suppressed
	[Sep23 23:41] kauditd_printk_skb: 28 callbacks suppressed
	[Sep23 23:43] kauditd_printk_skb: 28 callbacks suppressed
	[Sep23 23:46] kauditd_printk_skb: 28 callbacks suppressed
	[Sep23 23:48] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.347854] kauditd_printk_skb: 6 callbacks suppressed
	[Sep23 23:49] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.843786] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.530829] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.922969] kauditd_printk_skb: 29 callbacks suppressed
	[  +9.060717] kauditd_printk_skb: 15 callbacks suppressed
	[  +6.416967] kauditd_printk_skb: 2 callbacks suppressed
	[ +21.356928] kauditd_printk_skb: 15 callbacks suppressed
	
	
	==> etcd [9f9a68d35a007d5e1022596b9270e5f5f9735806aa3bfb8c01b9c7eca1ee01d7] <==
	{"level":"info","ts":"2024-09-23T23:40:18.273716Z","caller":"traceutil/trace.go:171","msg":"trace[972106571] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1021; }","duration":"191.281577ms","start":"2024-09-23T23:40:18.082424Z","end":"2024-09-23T23:40:18.273706Z","steps":["trace[972106571] 'agreement among raft nodes before linearized reading'  (duration: 190.294044ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T23:40:18.273903Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"193.789948ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T23:40:18.273944Z","caller":"traceutil/trace.go:171","msg":"trace[2056340658] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1021; }","duration":"193.834479ms","start":"2024-09-23T23:40:18.080103Z","end":"2024-09-23T23:40:18.273938Z","steps":["trace[2056340658] 'agreement among raft nodes before linearized reading'  (duration: 193.772634ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T23:40:25.558803Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.355384ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T23:40:25.558932Z","caller":"traceutil/trace.go:171","msg":"trace[382616305] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1076; }","duration":"101.503965ms","start":"2024-09-23T23:40:25.457416Z","end":"2024-09-23T23:40:25.558920Z","steps":["trace[382616305] 'range keys from in-memory index tree'  (duration: 101.341239ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T23:40:25.559071Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.046813ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T23:40:25.559087Z","caller":"traceutil/trace.go:171","msg":"trace[1791543403] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1076; }","duration":"142.072858ms","start":"2024-09-23T23:40:25.417009Z","end":"2024-09-23T23:40:25.559082Z","steps":["trace[1791543403] 'range keys from in-memory index tree'  (duration: 141.82946ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T23:40:26.916555Z","caller":"traceutil/trace.go:171","msg":"trace[2109487072] transaction","detail":"{read_only:false; response_revision:1078; number_of_response:1; }","duration":"268.210946ms","start":"2024-09-23T23:40:26.648276Z","end":"2024-09-23T23:40:26.916487Z","steps":["trace[2109487072] 'process raft request'  (duration: 267.388524ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T23:40:28.672019Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"266.58612ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T23:40:28.672067Z","caller":"traceutil/trace.go:171","msg":"trace[1280673185] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1079; }","duration":"266.654436ms","start":"2024-09-23T23:40:28.405402Z","end":"2024-09-23T23:40:28.672056Z","steps":["trace[1280673185] 'range keys from in-memory index tree'  (duration: 266.517094ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T23:40:34.016260Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.157586ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T23:40:34.016381Z","caller":"traceutil/trace.go:171","msg":"trace[611313623] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1111; }","duration":"112.292285ms","start":"2024-09-23T23:40:33.904078Z","end":"2024-09-23T23:40:34.016370Z","steps":["trace[611313623] 'range keys from in-memory index tree'  (duration: 112.015896ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T23:48:51.932772Z","caller":"traceutil/trace.go:171","msg":"trace[938951164] linearizableReadLoop","detail":"{readStateIndex:2080; appliedIndex:2079; }","duration":"354.39626ms","start":"2024-09-23T23:48:51.578308Z","end":"2024-09-23T23:48:51.932704Z","steps":["trace[938951164] 'read index received'  (duration: 354.29951ms)","trace[938951164] 'applied index is now lower than readState.Index'  (duration: 95.406µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-23T23:48:51.932778Z","caller":"traceutil/trace.go:171","msg":"trace[488687598] transaction","detail":"{read_only:false; response_revision:1941; number_of_response:1; }","duration":"381.82676ms","start":"2024-09-23T23:48:51.550869Z","end":"2024-09-23T23:48:51.932696Z","steps":["trace[488687598] 'process raft request'  (duration: 381.708173ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T23:48:51.933377Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T23:48:51.550851Z","time spent":"382.387698ms","remote":"127.0.0.1:42030","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1940 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-09-23T23:48:51.933874Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"355.560182ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/gadget\" ","response":"range_response_count:1 size:573"}
	{"level":"info","ts":"2024-09-23T23:48:51.934178Z","caller":"traceutil/trace.go:171","msg":"trace[470184168] range","detail":"{range_begin:/registry/namespaces/gadget; range_end:; response_count:1; response_revision:1941; }","duration":"355.861174ms","start":"2024-09-23T23:48:51.578304Z","end":"2024-09-23T23:48:51.934165Z","steps":["trace[470184168] 'agreement among raft nodes before linearized reading'  (duration: 355.490488ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T23:48:51.934287Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T23:48:51.578272Z","time spent":"356.004044ms","remote":"127.0.0.1:41984","response type":"/etcdserverpb.KV/Range","request count":0,"request size":29,"response count":1,"response size":597,"request content":"key:\"/registry/namespaces/gadget\" "}
	{"level":"warn","ts":"2024-09-23T23:48:51.934489Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"218.892301ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges/\" range_end:\"/registry/limitranges0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T23:48:51.937084Z","caller":"traceutil/trace.go:171","msg":"trace[537662719] range","detail":"{range_begin:/registry/limitranges/; range_end:/registry/limitranges0; response_count:0; response_revision:1941; }","duration":"222.90761ms","start":"2024-09-23T23:48:51.714161Z","end":"2024-09-23T23:48:51.937069Z","steps":["trace[537662719] 'agreement among raft nodes before linearized reading'  (duration: 218.829971ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T23:48:51.934806Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.030994ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T23:48:51.937364Z","caller":"traceutil/trace.go:171","msg":"trace[1456298499] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1941; }","duration":"140.590398ms","start":"2024-09-23T23:48:51.796765Z","end":"2024-09-23T23:48:51.937356Z","steps":["trace[1456298499] 'agreement among raft nodes before linearized reading'  (duration: 138.021442ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T23:48:58.904119Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1508}
	{"level":"info","ts":"2024-09-23T23:48:58.947160Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1508,"took":"42.440235ms","hash":2968136522,"current-db-size-bytes":6422528,"current-db-size":"6.4 MB","current-db-size-in-use-bytes":3551232,"current-db-size-in-use":"3.6 MB"}
	{"level":"info","ts":"2024-09-23T23:48:58.947271Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2968136522,"revision":1508,"compact-revision":-1}
	
	
	==> gcp-auth [74b1f1c0ea595ad9a254db104eeae56801bee662d3a36f586d4eadc290bd61ab] <==
	2024/09/23 23:40:36 GCP Auth Webhook started!
	2024/09/23 23:40:43 Ready to marshal response ...
	2024/09/23 23:40:43 Ready to write response ...
	2024/09/23 23:40:43 Ready to marshal response ...
	2024/09/23 23:40:43 Ready to write response ...
	2024/09/23 23:40:43 Ready to marshal response ...
	2024/09/23 23:40:43 Ready to write response ...
	2024/09/23 23:48:46 Ready to marshal response ...
	2024/09/23 23:48:46 Ready to write response ...
	2024/09/23 23:48:46 Ready to marshal response ...
	2024/09/23 23:48:46 Ready to write response ...
	2024/09/23 23:48:46 Ready to marshal response ...
	2024/09/23 23:48:46 Ready to write response ...
	2024/09/23 23:48:57 Ready to marshal response ...
	2024/09/23 23:48:57 Ready to write response ...
	2024/09/23 23:49:08 Ready to marshal response ...
	2024/09/23 23:49:08 Ready to write response ...
	2024/09/23 23:49:08 Ready to marshal response ...
	2024/09/23 23:49:08 Ready to write response ...
	2024/09/23 23:49:19 Ready to marshal response ...
	2024/09/23 23:49:19 Ready to write response ...
	2024/09/23 23:49:28 Ready to marshal response ...
	2024/09/23 23:49:28 Ready to write response ...
	2024/09/23 23:49:49 Ready to marshal response ...
	2024/09/23 23:49:49 Ready to write response ...
	
	
	==> kernel <==
	 23:49:59 up 11 min,  0 users,  load average: 0.58, 0.50, 0.41
	Linux addons-823099 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [474072cb31ae52ea361c41a97e7a53faf47c3b8ab138749903f3d96750c6fbe2] <==
	I0923 23:39:16.717072       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.106.196.1"}
	I0923 23:39:16.747273       1 controller.go:615] quota admission added evaluator for: statefulsets.apps
	I0923 23:39:16.954453       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.99.78.59"}
	I0923 23:39:18.568394       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.105.11.26"}
	W0923 23:40:14.768903       1 handler_proxy.go:99] no RequestInfo found in the context
	E0923 23:40:14.768983       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0923 23:40:14.769305       1 handler_proxy.go:99] no RequestInfo found in the context
	E0923 23:40:14.769491       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0923 23:40:14.771064       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0923 23:40:14.771153       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0923 23:40:43.867382       1 handler_proxy.go:99] no RequestInfo found in the context
	E0923 23:40:43.867485       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0923 23:40:43.868287       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.117.164:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.117.164:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.117.164:443: connect: connection refused" logger="UnhandledError"
	I0923 23:40:43.895370       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0923 23:48:46.697802       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.105.162.121"}
	I0923 23:48:52.010546       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0923 23:48:53.182523       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0923 23:49:35.935062       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0923 23:49:42.049169       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [f68819f7bf59d41865dee2cade7e270c9133c2249756217428544bee43d41ba6] <==
	I0923 23:48:53.198450       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="58.093µs"
	W0923 23:48:54.100704       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 23:48:54.100906       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 23:48:55.817281       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 23:48:55.817334       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 23:48:59.485710       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 23:48:59.485809       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 23:49:00.322632       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="10.445µs"
	I0923 23:49:02.270359       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	I0923 23:49:02.994865       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="7.284µs"
	I0923 23:49:04.372123       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-823099"
	W0923 23:49:06.812239       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 23:49:06.812293       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 23:49:07.117934       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0923 23:49:07.117971       1 shared_informer.go:320] Caches are synced for resource quota
	I0923 23:49:07.677512       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0923 23:49:07.677610       1 shared_informer.go:320] Caches are synced for garbage collector
	I0923 23:49:10.422960       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	I0923 23:49:13.095155       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	I0923 23:49:16.591303       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cloud-spanner-emulator-5b584cc74" duration="7.339µs"
	I0923 23:49:20.627915       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-86d989889c" duration="7.693µs"
	W0923 23:49:24.868552       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 23:49:24.868702       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 23:49:45.471087       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-823099"
	I0923 23:49:57.665989       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="9.013µs"
	
	
	==> kube-proxy [8a92c92c6afdd44516af6d2f0c2ba0c60c100397592f176560d683b0e5c58bbd] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0923 23:39:10.263461       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0923 23:39:10.290295       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.29"]
	E0923 23:39:10.290387       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 23:39:10.374009       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0923 23:39:10.374057       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0923 23:39:10.374082       1 server_linux.go:169] "Using iptables Proxier"
	I0923 23:39:10.378689       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 23:39:10.379053       1 server.go:483] "Version info" version="v1.31.1"
	I0923 23:39:10.379077       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 23:39:10.380385       1 config.go:199] "Starting service config controller"
	I0923 23:39:10.380428       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 23:39:10.380516       1 config.go:105] "Starting endpoint slice config controller"
	I0923 23:39:10.380522       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 23:39:10.381090       1 config.go:328] "Starting node config controller"
	I0923 23:39:10.381097       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 23:39:10.480784       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 23:39:10.480823       1 shared_informer.go:320] Caches are synced for service config
	I0923 23:39:10.481148       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [61a194a33123eba4aa22b6f557d4ea66df750535623ed92cd3efa6db3df98960] <==
	W0923 23:39:00.991081       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 23:39:00.991286       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 23:39:00.992946       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0923 23:39:00.993078       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 23:39:01.018368       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0923 23:39:01.018501       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 23:39:01.040390       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0923 23:39:01.040489       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 23:39:01.048983       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0923 23:39:01.049065       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 23:39:01.052890       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0923 23:39:01.053031       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 23:39:01.108077       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0923 23:39:01.108124       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 23:39:01.219095       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0923 23:39:01.219241       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 23:39:01.237429       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0923 23:39:01.237504       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 23:39:01.286444       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0923 23:39:01.286579       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 23:39:01.476657       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 23:39:01.476716       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0923 23:39:01.491112       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 23:39:01.491224       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0923 23:39:03.306204       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 23 23:49:58 addons-823099 kubelet[1203]: I0923 23:49:58.017271    1203 scope.go:117] "RemoveContainer" containerID="bfd9dce015f64fa0cb9c9167bda0480e9b26a69cceee004a0f7433bd50bdfe6e"
	Sep 23 23:49:58 addons-823099 kubelet[1203]: I0923 23:49:58.057705    1203 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2l254\" (UniqueName: \"kubernetes.io/projected/76bec57d-6868-4098-a291-8c38dda98afc-kube-api-access-2l254\") pod \"76bec57d-6868-4098-a291-8c38dda98afc\" (UID: \"76bec57d-6868-4098-a291-8c38dda98afc\") "
	Sep 23 23:49:58 addons-823099 kubelet[1203]: I0923 23:49:58.063172    1203 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76bec57d-6868-4098-a291-8c38dda98afc-kube-api-access-2l254" (OuterVolumeSpecName: "kube-api-access-2l254") pod "76bec57d-6868-4098-a291-8c38dda98afc" (UID: "76bec57d-6868-4098-a291-8c38dda98afc"). InnerVolumeSpecName "kube-api-access-2l254". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 23:49:58 addons-823099 kubelet[1203]: I0923 23:49:58.158844    1203 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ps6t\" (UniqueName: \"kubernetes.io/projected/67fc5fdd-03ae-44c9-8e43-0042bd142349-kube-api-access-9ps6t\") pod \"67fc5fdd-03ae-44c9-8e43-0042bd142349\" (UID: \"67fc5fdd-03ae-44c9-8e43-0042bd142349\") "
	Sep 23 23:49:58 addons-823099 kubelet[1203]: I0923 23:49:58.161950    1203 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-2l254\" (UniqueName: \"kubernetes.io/projected/76bec57d-6868-4098-a291-8c38dda98afc-kube-api-access-2l254\") on node \"addons-823099\" DevicePath \"\""
	Sep 23 23:49:58 addons-823099 kubelet[1203]: I0923 23:49:58.162940    1203 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67fc5fdd-03ae-44c9-8e43-0042bd142349-kube-api-access-9ps6t" (OuterVolumeSpecName: "kube-api-access-9ps6t") pod "67fc5fdd-03ae-44c9-8e43-0042bd142349" (UID: "67fc5fdd-03ae-44c9-8e43-0042bd142349"). InnerVolumeSpecName "kube-api-access-9ps6t". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 23:49:58 addons-823099 kubelet[1203]: I0923 23:49:58.263521    1203 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-9ps6t\" (UniqueName: \"kubernetes.io/projected/67fc5fdd-03ae-44c9-8e43-0042bd142349-kube-api-access-9ps6t\") on node \"addons-823099\" DevicePath \"\""
	Sep 23 23:49:58 addons-823099 kubelet[1203]: I0923 23:49:58.670216    1203 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kwjg\" (UniqueName: \"kubernetes.io/projected/e22fe967-2034-4b85-9850-7c0a8c941990-kube-api-access-7kwjg\") pod \"e22fe967-2034-4b85-9850-7c0a8c941990\" (UID: \"e22fe967-2034-4b85-9850-7c0a8c941990\") "
	Sep 23 23:49:58 addons-823099 kubelet[1203]: I0923 23:49:58.670474    1203 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/e22fe967-2034-4b85-9850-7c0a8c941990-gcp-creds\") pod \"e22fe967-2034-4b85-9850-7c0a8c941990\" (UID: \"e22fe967-2034-4b85-9850-7c0a8c941990\") "
	Sep 23 23:49:58 addons-823099 kubelet[1203]: I0923 23:49:58.670642    1203 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^84c9a13a-7a06-11ef-bfeb-1a5968f11029\") pod \"e22fe967-2034-4b85-9850-7c0a8c941990\" (UID: \"e22fe967-2034-4b85-9850-7c0a8c941990\") "
	Sep 23 23:49:58 addons-823099 kubelet[1203]: I0923 23:49:58.671007    1203 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e22fe967-2034-4b85-9850-7c0a8c941990-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "e22fe967-2034-4b85-9850-7c0a8c941990" (UID: "e22fe967-2034-4b85-9850-7c0a8c941990"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 23 23:49:58 addons-823099 kubelet[1203]: I0923 23:49:58.672685    1203 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e22fe967-2034-4b85-9850-7c0a8c941990-kube-api-access-7kwjg" (OuterVolumeSpecName: "kube-api-access-7kwjg") pod "e22fe967-2034-4b85-9850-7c0a8c941990" (UID: "e22fe967-2034-4b85-9850-7c0a8c941990"). InnerVolumeSpecName "kube-api-access-7kwjg". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 23:49:58 addons-823099 kubelet[1203]: I0923 23:49:58.675228    1203 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^84c9a13a-7a06-11ef-bfeb-1a5968f11029" (OuterVolumeSpecName: "task-pv-storage") pod "e22fe967-2034-4b85-9850-7c0a8c941990" (UID: "e22fe967-2034-4b85-9850-7c0a8c941990"). InnerVolumeSpecName "pvc-0d321e8e-fdf0-45f4-827d-31c0b6a0b833". PluginName "kubernetes.io/csi", VolumeGidValue ""
	Sep 23 23:49:58 addons-823099 kubelet[1203]: I0923 23:49:58.717147    1203 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55eede92-3ba9-4577-a5a3-ca22cd3fa01a" path="/var/lib/kubelet/pods/55eede92-3ba9-4577-a5a3-ca22cd3fa01a/volumes"
	Sep 23 23:49:58 addons-823099 kubelet[1203]: I0923 23:49:58.717456    1203 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76bec57d-6868-4098-a291-8c38dda98afc" path="/var/lib/kubelet/pods/76bec57d-6868-4098-a291-8c38dda98afc/volumes"
	Sep 23 23:49:58 addons-823099 kubelet[1203]: I0923 23:49:58.772325    1203 reconciler_common.go:281] "operationExecutor.UnmountDevice started for volume \"pvc-0d321e8e-fdf0-45f4-827d-31c0b6a0b833\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^84c9a13a-7a06-11ef-bfeb-1a5968f11029\") on node \"addons-823099\" "
	Sep 23 23:49:58 addons-823099 kubelet[1203]: I0923 23:49:58.772379    1203 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-7kwjg\" (UniqueName: \"kubernetes.io/projected/e22fe967-2034-4b85-9850-7c0a8c941990-kube-api-access-7kwjg\") on node \"addons-823099\" DevicePath \"\""
	Sep 23 23:49:58 addons-823099 kubelet[1203]: I0923 23:49:58.772393    1203 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/e22fe967-2034-4b85-9850-7c0a8c941990-gcp-creds\") on node \"addons-823099\" DevicePath \"\""
	Sep 23 23:49:58 addons-823099 kubelet[1203]: I0923 23:49:58.776932    1203 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-0d321e8e-fdf0-45f4-827d-31c0b6a0b833" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^84c9a13a-7a06-11ef-bfeb-1a5968f11029") on node "addons-823099"
	Sep 23 23:49:58 addons-823099 kubelet[1203]: I0923 23:49:58.873140    1203 reconciler_common.go:288] "Volume detached for volume \"pvc-0d321e8e-fdf0-45f4-827d-31c0b6a0b833\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^84c9a13a-7a06-11ef-bfeb-1a5968f11029\") on node \"addons-823099\" DevicePath \"\""
	Sep 23 23:49:59 addons-823099 kubelet[1203]: I0923 23:49:59.025098    1203 scope.go:117] "RemoveContainer" containerID="0aefb55ebf76d535cf063c470a33e1e165bea82506b8c5bcb512b10d908b6bfe"
	Sep 23 23:49:59 addons-823099 kubelet[1203]: I0923 23:49:59.071669    1203 scope.go:117] "RemoveContainer" containerID="0aefb55ebf76d535cf063c470a33e1e165bea82506b8c5bcb512b10d908b6bfe"
	Sep 23 23:49:59 addons-823099 kubelet[1203]: E0923 23:49:59.074337    1203 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0aefb55ebf76d535cf063c470a33e1e165bea82506b8c5bcb512b10d908b6bfe\": container with ID starting with 0aefb55ebf76d535cf063c470a33e1e165bea82506b8c5bcb512b10d908b6bfe not found: ID does not exist" containerID="0aefb55ebf76d535cf063c470a33e1e165bea82506b8c5bcb512b10d908b6bfe"
	Sep 23 23:49:59 addons-823099 kubelet[1203]: I0923 23:49:59.074437    1203 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0aefb55ebf76d535cf063c470a33e1e165bea82506b8c5bcb512b10d908b6bfe"} err="failed to get container status \"0aefb55ebf76d535cf063c470a33e1e165bea82506b8c5bcb512b10d908b6bfe\": rpc error: code = NotFound desc = could not find container \"0aefb55ebf76d535cf063c470a33e1e165bea82506b8c5bcb512b10d908b6bfe\": container with ID starting with 0aefb55ebf76d535cf063c470a33e1e165bea82506b8c5bcb512b10d908b6bfe not found: ID does not exist"
	Sep 23 23:49:59 addons-823099 kubelet[1203]: I0923 23:49:59.074518    1203 scope.go:117] "RemoveContainer" containerID="85441620b827e8c2287b9c2c68b8fbbb34c6e6a9a00e58eb7ccc24ed4da035d9"
	
	
	==> storage-provisioner [9490eb926210d595e48349ae8ba44feb029a56e6c83d0e8f8cfad8e8c1d9196b] <==
	I0923 23:39:15.614353       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0923 23:39:15.653018       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0923 23:39:15.653076       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0923 23:39:15.688317       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0923 23:39:15.688955       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"68f268eb-f84e-4f3d-800b-baa6449c8a15", APIVersion:"v1", ResourceVersion:"700", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-823099_ffe49a31-62a1-4931-9d5c-b17e459b44c9 became leader
	I0923 23:39:15.689716       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-823099_ffe49a31-62a1-4931-9d5c-b17e459b44c9!
	I0923 23:39:15.790555       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-823099_ffe49a31-62a1-4931-9d5c-b17e459b44c9!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-823099 -n addons-823099
helpers_test.go:261: (dbg) Run:  kubectl --context addons-823099 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-t6hw4 ingress-nginx-admission-patch-2wnc4
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-823099 describe pod busybox ingress-nginx-admission-create-t6hw4 ingress-nginx-admission-patch-2wnc4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-823099 describe pod busybox ingress-nginx-admission-create-t6hw4 ingress-nginx-admission-patch-2wnc4: exit status 1 (161.121384ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-823099/192.168.39.29
	Start Time:       Mon, 23 Sep 2024 23:40:43 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.22
	IPs:
	  IP:  10.244.0.22
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nvbxz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-nvbxz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m17s                  default-scheduler  Successfully assigned default/busybox to addons-823099
	  Normal   Pulling    7m42s (x4 over 9m17s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m42s (x4 over 9m17s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     7m42s (x4 over 9m17s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m27s (x6 over 9m16s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m5s (x20 over 9m16s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-t6hw4" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-2wnc4" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-823099 describe pod busybox ingress-nginx-admission-create-t6hw4 ingress-nginx-admission-patch-2wnc4: exit status 1
--- FAIL: TestAddons/parallel/Registry (74.56s)

x
+
TestAddons/parallel/Ingress (155.39s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-823099 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-823099 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-823099 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [4679ef89-e297-4f54-bf30-b685a88ec238] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [4679ef89-e297-4f54-bf30-b685a88ec238] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.003859017s
I0923 23:50:12.338929   14793 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p addons-823099 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:260: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-823099 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.664602977s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:276: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:284: (dbg) Run:  kubectl --context addons-823099 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p addons-823099 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.39.29
addons_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p addons-823099 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-linux-amd64 -p addons-823099 addons disable ingress-dns --alsologtostderr -v=1: (1.421247486s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-amd64 -p addons-823099 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-amd64 -p addons-823099 addons disable ingress --alsologtostderr -v=1: (7.803824742s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-823099 -n addons-823099
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-823099 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-823099 logs -n 25: (1.272597875s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 23 Sep 24 23:38 UTC | 23 Sep 24 23:38 UTC |
	| delete  | -p download-only-446089                                                                     | download-only-446089 | jenkins | v1.34.0 | 23 Sep 24 23:38 UTC | 23 Sep 24 23:38 UTC |
	| delete  | -p download-only-098425                                                                     | download-only-098425 | jenkins | v1.34.0 | 23 Sep 24 23:38 UTC | 23 Sep 24 23:38 UTC |
	| delete  | -p download-only-446089                                                                     | download-only-446089 | jenkins | v1.34.0 | 23 Sep 24 23:38 UTC | 23 Sep 24 23:38 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-013301 | jenkins | v1.34.0 | 23 Sep 24 23:38 UTC |                     |
	|         | binary-mirror-013301                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:39559                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-013301                                                                     | binary-mirror-013301 | jenkins | v1.34.0 | 23 Sep 24 23:38 UTC | 23 Sep 24 23:38 UTC |
	| addons  | disable dashboard -p                                                                        | addons-823099        | jenkins | v1.34.0 | 23 Sep 24 23:38 UTC |                     |
	|         | addons-823099                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-823099        | jenkins | v1.34.0 | 23 Sep 24 23:38 UTC |                     |
	|         | addons-823099                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-823099 --wait=true                                                                | addons-823099        | jenkins | v1.34.0 | 23 Sep 24 23:38 UTC | 23 Sep 24 23:40 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-823099        | jenkins | v1.34.0 | 23 Sep 24 23:48 UTC | 23 Sep 24 23:48 UTC |
	|         | -p addons-823099                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-823099        | jenkins | v1.34.0 | 23 Sep 24 23:48 UTC | 23 Sep 24 23:48 UTC |
	|         | addons-823099                                                                               |                      |         |         |                     |                     |
	| addons  | addons-823099 addons disable                                                                | addons-823099        | jenkins | v1.34.0 | 23 Sep 24 23:48 UTC | 23 Sep 24 23:49 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-823099 addons disable                                                                | addons-823099        | jenkins | v1.34.0 | 23 Sep 24 23:49 UTC | 23 Sep 24 23:49 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-823099        | jenkins | v1.34.0 | 23 Sep 24 23:49 UTC | 23 Sep 24 23:49 UTC |
	|         | -p addons-823099                                                                            |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-823099        | jenkins | v1.34.0 | 23 Sep 24 23:49 UTC | 23 Sep 24 23:49 UTC |
	|         | addons-823099                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-823099 ssh cat                                                                       | addons-823099        | jenkins | v1.34.0 | 23 Sep 24 23:49 UTC | 23 Sep 24 23:49 UTC |
	|         | /opt/local-path-provisioner/pvc-eab7f679-3b16-4b54-94e5-e626a1dcbb7e_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-823099 addons disable                                                                | addons-823099        | jenkins | v1.34.0 | 23 Sep 24 23:49 UTC | 23 Sep 24 23:50 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-823099 ip                                                                            | addons-823099        | jenkins | v1.34.0 | 23 Sep 24 23:49 UTC | 23 Sep 24 23:49 UTC |
	| addons  | addons-823099 addons disable                                                                | addons-823099        | jenkins | v1.34.0 | 23 Sep 24 23:49 UTC | 23 Sep 24 23:49 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-823099 addons                                                                        | addons-823099        | jenkins | v1.34.0 | 23 Sep 24 23:49 UTC | 23 Sep 24 23:50 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-823099 addons                                                                        | addons-823099        | jenkins | v1.34.0 | 23 Sep 24 23:50 UTC | 23 Sep 24 23:50 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-823099 ssh curl -s                                                                   | addons-823099        | jenkins | v1.34.0 | 23 Sep 24 23:50 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-823099 ip                                                                            | addons-823099        | jenkins | v1.34.0 | 23 Sep 24 23:52 UTC | 23 Sep 24 23:52 UTC |
	| addons  | addons-823099 addons disable                                                                | addons-823099        | jenkins | v1.34.0 | 23 Sep 24 23:52 UTC | 23 Sep 24 23:52 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-823099 addons disable                                                                | addons-823099        | jenkins | v1.34.0 | 23 Sep 24 23:52 UTC | 23 Sep 24 23:52 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 23:38:22
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 23:38:22.858727   15521 out.go:345] Setting OutFile to fd 1 ...
	I0923 23:38:22.858952   15521 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 23:38:22.858959   15521 out.go:358] Setting ErrFile to fd 2...
	I0923 23:38:22.858964   15521 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 23:38:22.859165   15521 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7623/.minikube/bin
	I0923 23:38:22.859782   15521 out.go:352] Setting JSON to false
	I0923 23:38:22.860641   15521 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1247,"bootTime":1727133456,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 23:38:22.860727   15521 start.go:139] virtualization: kvm guest
	I0923 23:38:22.862749   15521 out.go:177] * [addons-823099] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 23:38:22.863989   15521 out.go:177]   - MINIKUBE_LOCATION=19696
	I0923 23:38:22.863991   15521 notify.go:220] Checking for updates...
	I0923 23:38:22.865162   15521 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 23:38:22.866358   15521 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0923 23:38:22.867535   15521 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7623/.minikube
	I0923 23:38:22.868620   15521 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 23:38:22.869743   15521 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 23:38:22.870899   15521 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 23:38:22.903588   15521 out.go:177] * Using the kvm2 driver based on user configuration
	I0923 23:38:22.904660   15521 start.go:297] selected driver: kvm2
	I0923 23:38:22.904673   15521 start.go:901] validating driver "kvm2" against <nil>
	I0923 23:38:22.904687   15521 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 23:38:22.905400   15521 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 23:38:22.905500   15521 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19696-7623/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0923 23:38:22.920929   15521 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0923 23:38:22.920979   15521 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 23:38:22.921207   15521 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 23:38:22.921237   15521 cni.go:84] Creating CNI manager for ""
	I0923 23:38:22.921285   15521 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0923 23:38:22.921293   15521 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 23:38:22.921344   15521 start.go:340] cluster config:
	{Name:addons-823099 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-823099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 23:38:22.921436   15521 iso.go:125] acquiring lock: {Name:mk8a983e49e920eac32e0f79c34143c3d478115e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 23:38:22.923320   15521 out.go:177] * Starting "addons-823099" primary control-plane node in "addons-823099" cluster
	I0923 23:38:22.925095   15521 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 23:38:22.925153   15521 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0923 23:38:22.925164   15521 cache.go:56] Caching tarball of preloaded images
	I0923 23:38:22.925267   15521 preload.go:172] Found /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0923 23:38:22.925281   15521 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0923 23:38:22.925621   15521 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/config.json ...
	I0923 23:38:22.925656   15521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/config.json: {Name:mk1d938d4754f5dff88f0edaafe7f2a9698c52bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:22.925841   15521 start.go:360] acquireMachinesLock for addons-823099: {Name:mkdd0eb053efc5a47fd01c8410d7f603dccb8d0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 23:38:22.925907   15521 start.go:364] duration metric: took 50.085µs to acquireMachinesLock for "addons-823099"
	I0923 23:38:22.926043   15521 start.go:93] Provisioning new machine with config: &{Name:addons-823099 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:addons-823099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 23:38:22.926135   15521 start.go:125] createHost starting for "" (driver="kvm2")
	I0923 23:38:22.928519   15521 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0923 23:38:22.928694   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:38:22.928738   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:38:22.943674   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41089
	I0923 23:38:22.944239   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:38:22.944884   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:38:22.944906   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:38:22.945372   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:38:22.945633   15521 main.go:141] libmachine: (addons-823099) Calling .GetMachineName
	I0923 23:38:22.945846   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:38:22.946076   15521 start.go:159] libmachine.API.Create for "addons-823099" (driver="kvm2")
	I0923 23:38:22.946111   15521 client.go:168] LocalClient.Create starting
	I0923 23:38:22.946149   15521 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem
	I0923 23:38:23.071878   15521 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem
	I0923 23:38:23.150247   15521 main.go:141] libmachine: Running pre-create checks...
	I0923 23:38:23.150273   15521 main.go:141] libmachine: (addons-823099) Calling .PreCreateCheck
	I0923 23:38:23.150796   15521 main.go:141] libmachine: (addons-823099) Calling .GetConfigRaw
	I0923 23:38:23.151207   15521 main.go:141] libmachine: Creating machine...
	I0923 23:38:23.151222   15521 main.go:141] libmachine: (addons-823099) Calling .Create
	I0923 23:38:23.151379   15521 main.go:141] libmachine: (addons-823099) Creating KVM machine...
	I0923 23:38:23.152659   15521 main.go:141] libmachine: (addons-823099) DBG | found existing default KVM network
	I0923 23:38:23.153379   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:23.153219   15543 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ba0}
	I0923 23:38:23.153400   15521 main.go:141] libmachine: (addons-823099) DBG | created network xml: 
	I0923 23:38:23.153412   15521 main.go:141] libmachine: (addons-823099) DBG | <network>
	I0923 23:38:23.153420   15521 main.go:141] libmachine: (addons-823099) DBG |   <name>mk-addons-823099</name>
	I0923 23:38:23.153428   15521 main.go:141] libmachine: (addons-823099) DBG |   <dns enable='no'/>
	I0923 23:38:23.153434   15521 main.go:141] libmachine: (addons-823099) DBG |   
	I0923 23:38:23.153445   15521 main.go:141] libmachine: (addons-823099) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0923 23:38:23.153455   15521 main.go:141] libmachine: (addons-823099) DBG |     <dhcp>
	I0923 23:38:23.153464   15521 main.go:141] libmachine: (addons-823099) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0923 23:38:23.153470   15521 main.go:141] libmachine: (addons-823099) DBG |     </dhcp>
	I0923 23:38:23.153485   15521 main.go:141] libmachine: (addons-823099) DBG |   </ip>
	I0923 23:38:23.153497   15521 main.go:141] libmachine: (addons-823099) DBG |   
	I0923 23:38:23.153527   15521 main.go:141] libmachine: (addons-823099) DBG | </network>
	I0923 23:38:23.153541   15521 main.go:141] libmachine: (addons-823099) DBG | 
	I0923 23:38:23.159364   15521 main.go:141] libmachine: (addons-823099) DBG | trying to create private KVM network mk-addons-823099 192.168.39.0/24...
	I0923 23:38:23.227848   15521 main.go:141] libmachine: (addons-823099) Setting up store path in /home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099 ...
	I0923 23:38:23.227898   15521 main.go:141] libmachine: (addons-823099) Building disk image from file:///home/jenkins/minikube-integration/19696-7623/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0923 23:38:23.227909   15521 main.go:141] libmachine: (addons-823099) DBG | private KVM network mk-addons-823099 192.168.39.0/24 created
	I0923 23:38:23.227930   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:23.227792   15543 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19696-7623/.minikube
	I0923 23:38:23.227962   15521 main.go:141] libmachine: (addons-823099) Downloading /home/jenkins/minikube-integration/19696-7623/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19696-7623/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0923 23:38:23.481605   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:23.481476   15543 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa...
	I0923 23:38:23.632238   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:23.632114   15543 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/addons-823099.rawdisk...
	I0923 23:38:23.632260   15521 main.go:141] libmachine: (addons-823099) DBG | Writing magic tar header
	I0923 23:38:23.632269   15521 main.go:141] libmachine: (addons-823099) DBG | Writing SSH key tar header
	I0923 23:38:23.632282   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:23.632226   15543 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099 ...
	I0923 23:38:23.632439   15521 main.go:141] libmachine: (addons-823099) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099
	I0923 23:38:23.632473   15521 main.go:141] libmachine: (addons-823099) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099 (perms=drwx------)
	I0923 23:38:23.632484   15521 main.go:141] libmachine: (addons-823099) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623/.minikube/machines
	I0923 23:38:23.632491   15521 main.go:141] libmachine: (addons-823099) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623/.minikube/machines (perms=drwxr-xr-x)
	I0923 23:38:23.632497   15521 main.go:141] libmachine: (addons-823099) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623/.minikube
	I0923 23:38:23.632507   15521 main.go:141] libmachine: (addons-823099) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623
	I0923 23:38:23.632513   15521 main.go:141] libmachine: (addons-823099) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0923 23:38:23.632518   15521 main.go:141] libmachine: (addons-823099) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623/.minikube (perms=drwxr-xr-x)
	I0923 23:38:23.632528   15521 main.go:141] libmachine: (addons-823099) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623 (perms=drwxrwxr-x)
	I0923 23:38:23.632536   15521 main.go:141] libmachine: (addons-823099) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0923 23:38:23.632546   15521 main.go:141] libmachine: (addons-823099) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0923 23:38:23.632550   15521 main.go:141] libmachine: (addons-823099) Creating domain...
	I0923 23:38:23.632558   15521 main.go:141] libmachine: (addons-823099) DBG | Checking permissions on dir: /home/jenkins
	I0923 23:38:23.632570   15521 main.go:141] libmachine: (addons-823099) DBG | Checking permissions on dir: /home
	I0923 23:38:23.632578   15521 main.go:141] libmachine: (addons-823099) DBG | Skipping /home - not owner
	I0923 23:38:23.633510   15521 main.go:141] libmachine: (addons-823099) define libvirt domain using xml: 
	I0923 23:38:23.633532   15521 main.go:141] libmachine: (addons-823099) <domain type='kvm'>
	I0923 23:38:23.633543   15521 main.go:141] libmachine: (addons-823099)   <name>addons-823099</name>
	I0923 23:38:23.633550   15521 main.go:141] libmachine: (addons-823099)   <memory unit='MiB'>4000</memory>
	I0923 23:38:23.633564   15521 main.go:141] libmachine: (addons-823099)   <vcpu>2</vcpu>
	I0923 23:38:23.633572   15521 main.go:141] libmachine: (addons-823099)   <features>
	I0923 23:38:23.633596   15521 main.go:141] libmachine: (addons-823099)     <acpi/>
	I0923 23:38:23.633612   15521 main.go:141] libmachine: (addons-823099)     <apic/>
	I0923 23:38:23.633621   15521 main.go:141] libmachine: (addons-823099)     <pae/>
	I0923 23:38:23.633628   15521 main.go:141] libmachine: (addons-823099)     
	I0923 23:38:23.633638   15521 main.go:141] libmachine: (addons-823099)   </features>
	I0923 23:38:23.633646   15521 main.go:141] libmachine: (addons-823099)   <cpu mode='host-passthrough'>
	I0923 23:38:23.633653   15521 main.go:141] libmachine: (addons-823099)   
	I0923 23:38:23.633673   15521 main.go:141] libmachine: (addons-823099)   </cpu>
	I0923 23:38:23.633707   15521 main.go:141] libmachine: (addons-823099)   <os>
	I0923 23:38:23.633725   15521 main.go:141] libmachine: (addons-823099)     <type>hvm</type>
	I0923 23:38:23.633734   15521 main.go:141] libmachine: (addons-823099)     <boot dev='cdrom'/>
	I0923 23:38:23.633739   15521 main.go:141] libmachine: (addons-823099)     <boot dev='hd'/>
	I0923 23:38:23.633745   15521 main.go:141] libmachine: (addons-823099)     <bootmenu enable='no'/>
	I0923 23:38:23.633750   15521 main.go:141] libmachine: (addons-823099)   </os>
	I0923 23:38:23.633764   15521 main.go:141] libmachine: (addons-823099)   <devices>
	I0923 23:38:23.633771   15521 main.go:141] libmachine: (addons-823099)     <disk type='file' device='cdrom'>
	I0923 23:38:23.633779   15521 main.go:141] libmachine: (addons-823099)       <source file='/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/boot2docker.iso'/>
	I0923 23:38:23.633784   15521 main.go:141] libmachine: (addons-823099)       <target dev='hdc' bus='scsi'/>
	I0923 23:38:23.633791   15521 main.go:141] libmachine: (addons-823099)       <readonly/>
	I0923 23:38:23.633799   15521 main.go:141] libmachine: (addons-823099)     </disk>
	I0923 23:38:23.633811   15521 main.go:141] libmachine: (addons-823099)     <disk type='file' device='disk'>
	I0923 23:38:23.633821   15521 main.go:141] libmachine: (addons-823099)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0923 23:38:23.633829   15521 main.go:141] libmachine: (addons-823099)       <source file='/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/addons-823099.rawdisk'/>
	I0923 23:38:23.633836   15521 main.go:141] libmachine: (addons-823099)       <target dev='hda' bus='virtio'/>
	I0923 23:38:23.633841   15521 main.go:141] libmachine: (addons-823099)     </disk>
	I0923 23:38:23.633848   15521 main.go:141] libmachine: (addons-823099)     <interface type='network'>
	I0923 23:38:23.633854   15521 main.go:141] libmachine: (addons-823099)       <source network='mk-addons-823099'/>
	I0923 23:38:23.633860   15521 main.go:141] libmachine: (addons-823099)       <model type='virtio'/>
	I0923 23:38:23.633865   15521 main.go:141] libmachine: (addons-823099)     </interface>
	I0923 23:38:23.633870   15521 main.go:141] libmachine: (addons-823099)     <interface type='network'>
	I0923 23:38:23.633885   15521 main.go:141] libmachine: (addons-823099)       <source network='default'/>
	I0923 23:38:23.633892   15521 main.go:141] libmachine: (addons-823099)       <model type='virtio'/>
	I0923 23:38:23.633904   15521 main.go:141] libmachine: (addons-823099)     </interface>
	I0923 23:38:23.633919   15521 main.go:141] libmachine: (addons-823099)     <serial type='pty'>
	I0923 23:38:23.633928   15521 main.go:141] libmachine: (addons-823099)       <target port='0'/>
	I0923 23:38:23.633938   15521 main.go:141] libmachine: (addons-823099)     </serial>
	I0923 23:38:23.633945   15521 main.go:141] libmachine: (addons-823099)     <console type='pty'>
	I0923 23:38:23.633957   15521 main.go:141] libmachine: (addons-823099)       <target type='serial' port='0'/>
	I0923 23:38:23.633964   15521 main.go:141] libmachine: (addons-823099)     </console>
	I0923 23:38:23.633975   15521 main.go:141] libmachine: (addons-823099)     <rng model='virtio'>
	I0923 23:38:23.633986   15521 main.go:141] libmachine: (addons-823099)       <backend model='random'>/dev/random</backend>
	I0923 23:38:23.633996   15521 main.go:141] libmachine: (addons-823099)     </rng>
	I0923 23:38:23.634010   15521 main.go:141] libmachine: (addons-823099)     
	I0923 23:38:23.634040   15521 main.go:141] libmachine: (addons-823099)     
	I0923 23:38:23.634058   15521 main.go:141] libmachine: (addons-823099)   </devices>
	I0923 23:38:23.634064   15521 main.go:141] libmachine: (addons-823099) </domain>
	I0923 23:38:23.634068   15521 main.go:141] libmachine: (addons-823099) 
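(Editor's note, not part of the captured log: the lines above show the libvirt domain XML that the kvm2 driver defines for the test VM. Purely as an illustration of that step, here is a minimal Go sketch that registers and boots a domain by shelling out to virsh; the file name and domain name are assumptions, and this is not minikube's actual implementation, which talks to libvirt through its API.)

package main

import (
	"fmt"
	"os/exec"
)

// defineAndStart registers a persistent libvirt domain from an XML
// description and then boots it, roughly mirroring the "define libvirt
// domain using xml" / "Creating domain..." steps logged above.
func defineAndStart(xmlPath, domainName string) error {
	// "virsh define" registers the domain described by the XML file.
	if out, err := exec.Command("virsh", "define", xmlPath).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh define: %v: %s", err, out)
	}
	// "virsh start" boots the freshly defined domain.
	if out, err := exec.Command("virsh", "start", domainName).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh start: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := defineAndStart("addons-823099.xml", "addons-823099"); err != nil {
		fmt.Println(err)
	}
}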
	I0923 23:38:23.640809   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:76:74:e7 in network default
	I0923 23:38:23.641513   15521 main.go:141] libmachine: (addons-823099) Ensuring networks are active...
	I0923 23:38:23.641533   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:23.642154   15521 main.go:141] libmachine: (addons-823099) Ensuring network default is active
	I0923 23:38:23.642583   15521 main.go:141] libmachine: (addons-823099) Ensuring network mk-addons-823099 is active
	I0923 23:38:23.643027   15521 main.go:141] libmachine: (addons-823099) Getting domain xml...
	I0923 23:38:23.643677   15521 main.go:141] libmachine: (addons-823099) Creating domain...
	I0923 23:38:25.091232   15521 main.go:141] libmachine: (addons-823099) Waiting to get IP...
	I0923 23:38:25.092030   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:25.092547   15521 main.go:141] libmachine: (addons-823099) DBG | unable to find current IP address of domain addons-823099 in network mk-addons-823099
	I0923 23:38:25.092567   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:25.092528   15543 retry.go:31] will retry after 241.454266ms: waiting for machine to come up
	I0923 23:38:25.337249   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:25.337719   15521 main.go:141] libmachine: (addons-823099) DBG | unable to find current IP address of domain addons-823099 in network mk-addons-823099
	I0923 23:38:25.337739   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:25.337668   15543 retry.go:31] will retry after 317.338732ms: waiting for machine to come up
	I0923 23:38:25.656076   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:25.656565   15521 main.go:141] libmachine: (addons-823099) DBG | unable to find current IP address of domain addons-823099 in network mk-addons-823099
	I0923 23:38:25.656591   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:25.656511   15543 retry.go:31] will retry after 326.274636ms: waiting for machine to come up
	I0923 23:38:25.984000   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:25.984436   15521 main.go:141] libmachine: (addons-823099) DBG | unable to find current IP address of domain addons-823099 in network mk-addons-823099
	I0923 23:38:25.984458   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:25.984397   15543 retry.go:31] will retry after 437.832088ms: waiting for machine to come up
	I0923 23:38:26.424106   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:26.424634   15521 main.go:141] libmachine: (addons-823099) DBG | unable to find current IP address of domain addons-823099 in network mk-addons-823099
	I0923 23:38:26.424656   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:26.424551   15543 retry.go:31] will retry after 668.976748ms: waiting for machine to come up
	I0923 23:38:27.095408   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:27.095943   15521 main.go:141] libmachine: (addons-823099) DBG | unable to find current IP address of domain addons-823099 in network mk-addons-823099
	I0923 23:38:27.095968   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:27.095910   15543 retry.go:31] will retry after 748.393255ms: waiting for machine to come up
	I0923 23:38:27.845915   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:27.846277   15521 main.go:141] libmachine: (addons-823099) DBG | unable to find current IP address of domain addons-823099 in network mk-addons-823099
	I0923 23:38:27.846348   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:27.846252   15543 retry.go:31] will retry after 761.156246ms: waiting for machine to come up
	I0923 23:38:28.608811   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:28.609268   15521 main.go:141] libmachine: (addons-823099) DBG | unable to find current IP address of domain addons-823099 in network mk-addons-823099
	I0923 23:38:28.609298   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:28.609221   15543 retry.go:31] will retry after 1.011775453s: waiting for machine to come up
	I0923 23:38:29.622384   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:29.622840   15521 main.go:141] libmachine: (addons-823099) DBG | unable to find current IP address of domain addons-823099 in network mk-addons-823099
	I0923 23:38:29.622873   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:29.622758   15543 retry.go:31] will retry after 1.842457552s: waiting for machine to come up
	I0923 23:38:31.467098   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:31.467569   15521 main.go:141] libmachine: (addons-823099) DBG | unable to find current IP address of domain addons-823099 in network mk-addons-823099
	I0923 23:38:31.467589   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:31.467500   15543 retry.go:31] will retry after 1.843110258s: waiting for machine to come up
	I0923 23:38:33.312780   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:33.313247   15521 main.go:141] libmachine: (addons-823099) DBG | unable to find current IP address of domain addons-823099 in network mk-addons-823099
	I0923 23:38:33.313274   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:33.313210   15543 retry.go:31] will retry after 1.888655031s: waiting for machine to come up
	I0923 23:38:35.204154   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:35.204555   15521 main.go:141] libmachine: (addons-823099) DBG | unable to find current IP address of domain addons-823099 in network mk-addons-823099
	I0923 23:38:35.204580   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:35.204514   15543 retry.go:31] will retry after 2.870740222s: waiting for machine to come up
	I0923 23:38:38.077027   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:38.077558   15521 main.go:141] libmachine: (addons-823099) DBG | unable to find current IP address of domain addons-823099 in network mk-addons-823099
	I0923 23:38:38.077587   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:38.077506   15543 retry.go:31] will retry after 3.119042526s: waiting for machine to come up
	I0923 23:38:41.200776   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:41.201175   15521 main.go:141] libmachine: (addons-823099) DBG | unable to find current IP address of domain addons-823099 in network mk-addons-823099
	I0923 23:38:41.201216   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:41.201127   15543 retry.go:31] will retry after 3.936049816s: waiting for machine to come up
	I0923 23:38:45.138385   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:45.138867   15521 main.go:141] libmachine: (addons-823099) Found IP for machine: 192.168.39.29
	I0923 23:38:45.138888   15521 main.go:141] libmachine: (addons-823099) Reserving static IP address...
	I0923 23:38:45.138902   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has current primary IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:45.139282   15521 main.go:141] libmachine: (addons-823099) DBG | unable to find host DHCP lease matching {name: "addons-823099", mac: "52:54:00:15:a7:77", ip: "192.168.39.29"} in network mk-addons-823099
	I0923 23:38:45.213621   15521 main.go:141] libmachine: (addons-823099) Reserved static IP address: 192.168.39.29
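(Editor's note: the repeated "will retry after ..." lines between 23:38:25 and 23:38:45 show the driver polling for the VM's DHCP-assigned IP with a growing, jittered delay. The sketch below illustrates that wait-for-IP pattern; it is not minikube's retry.go, and the lookupIP callback and timing constants are assumptions.)

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookupIP until it returns an address or the timeout
// expires, increasing the sleep between attempts and adding jitter,
// much like the escalating retry intervals in the log above.
func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil && ip != "" {
			return ip, nil
		}
		// Grow the delay and add jitter before the next attempt.
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		time.Sleep(delay + jitter)
		delay = delay * 3 / 2
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	// Stub lookup that immediately "finds" the address seen in the log.
	ip, err := waitForIP(func() (string, error) { return "192.168.39.29", nil }, time.Minute)
	fmt.Println(ip, err)
}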
	I0923 23:38:45.213668   15521 main.go:141] libmachine: (addons-823099) DBG | Getting to WaitForSSH function...
	I0923 23:38:45.213678   15521 main.go:141] libmachine: (addons-823099) Waiting for SSH to be available...
	I0923 23:38:45.215779   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:45.216179   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:minikube Clientid:01:52:54:00:15:a7:77}
	I0923 23:38:45.216202   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:45.216401   15521 main.go:141] libmachine: (addons-823099) DBG | Using SSH client type: external
	I0923 23:38:45.216423   15521 main.go:141] libmachine: (addons-823099) DBG | Using SSH private key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa (-rw-------)
	I0923 23:38:45.216459   15521 main.go:141] libmachine: (addons-823099) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.29 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0923 23:38:45.216477   15521 main.go:141] libmachine: (addons-823099) DBG | About to run SSH command:
	I0923 23:38:45.216493   15521 main.go:141] libmachine: (addons-823099) DBG | exit 0
	I0923 23:38:45.348718   15521 main.go:141] libmachine: (addons-823099) DBG | SSH cmd err, output: <nil>: 
	I0923 23:38:45.349048   15521 main.go:141] libmachine: (addons-823099) KVM machine creation complete!
	I0923 23:38:45.349355   15521 main.go:141] libmachine: (addons-823099) Calling .GetConfigRaw
	I0923 23:38:45.350006   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:38:45.350193   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:38:45.350362   15521 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0923 23:38:45.350380   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:38:45.351912   15521 main.go:141] libmachine: Detecting operating system of created instance...
	I0923 23:38:45.351931   15521 main.go:141] libmachine: Waiting for SSH to be available...
	I0923 23:38:45.351940   15521 main.go:141] libmachine: Getting to WaitForSSH function...
	I0923 23:38:45.351949   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:38:45.354650   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:45.355037   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:38:45.355057   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:45.355224   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:38:45.355434   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:38:45.355578   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:38:45.355729   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:38:45.355866   15521 main.go:141] libmachine: Using SSH client type: native
	I0923 23:38:45.356038   15521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0923 23:38:45.356049   15521 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0923 23:38:45.463579   15521 main.go:141] libmachine: SSH cmd err, output: <nil>: 
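(Editor's note: WaitForSSH above probes the guest by running "exit 0" over SSH until the command succeeds. The following is a rough sketch of that probe using the external ssh binary with options similar to those logged; the host, user, key path, and retry count are illustrative assumptions, not minikube's code.)

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady returns true once "exit 0" can be run on the guest over SSH.
func sshReady(host, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		"docker@"+host, "exit 0")
	return cmd.Run() == nil
}

func main() {
	for i := 0; i < 30; i++ {
		if sshReady("192.168.39.29", "/path/to/id_rsa") {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}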
	I0923 23:38:45.463613   15521 main.go:141] libmachine: Detecting the provisioner...
	I0923 23:38:45.463626   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:38:45.466205   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:45.466613   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:38:45.466660   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:45.466829   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:38:45.466991   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:38:45.467178   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:38:45.467465   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:38:45.467645   15521 main.go:141] libmachine: Using SSH client type: native
	I0923 23:38:45.467822   15521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0923 23:38:45.467833   15521 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0923 23:38:45.576852   15521 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0923 23:38:45.576941   15521 main.go:141] libmachine: found compatible host: buildroot
	I0923 23:38:45.576956   15521 main.go:141] libmachine: Provisioning with buildroot...
	I0923 23:38:45.576964   15521 main.go:141] libmachine: (addons-823099) Calling .GetMachineName
	I0923 23:38:45.577226   15521 buildroot.go:166] provisioning hostname "addons-823099"
	I0923 23:38:45.577248   15521 main.go:141] libmachine: (addons-823099) Calling .GetMachineName
	I0923 23:38:45.577399   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:38:45.579859   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:45.580371   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:38:45.580404   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:45.580552   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:38:45.580721   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:38:45.580878   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:38:45.581030   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:38:45.581194   15521 main.go:141] libmachine: Using SSH client type: native
	I0923 23:38:45.581377   15521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0923 23:38:45.581388   15521 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-823099 && echo "addons-823099" | sudo tee /etc/hostname
	I0923 23:38:45.702788   15521 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-823099
	
	I0923 23:38:45.702814   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:38:45.706046   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:45.706466   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:38:45.706498   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:45.706674   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:38:45.706841   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:38:45.706992   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:38:45.707098   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:38:45.707259   15521 main.go:141] libmachine: Using SSH client type: native
	I0923 23:38:45.707426   15521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0923 23:38:45.707442   15521 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-823099' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-823099/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-823099' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 23:38:45.824404   15521 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 23:38:45.824467   15521 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19696-7623/.minikube CaCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19696-7623/.minikube}
	I0923 23:38:45.824483   15521 buildroot.go:174] setting up certificates
	I0923 23:38:45.824492   15521 provision.go:84] configureAuth start
	I0923 23:38:45.824500   15521 main.go:141] libmachine: (addons-823099) Calling .GetMachineName
	I0923 23:38:45.824784   15521 main.go:141] libmachine: (addons-823099) Calling .GetIP
	I0923 23:38:45.827604   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:45.827981   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:38:45.828003   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:45.828166   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:38:45.830661   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:45.831054   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:38:45.831074   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:45.831227   15521 provision.go:143] copyHostCerts
	I0923 23:38:45.831320   15521 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem (1082 bytes)
	I0923 23:38:45.831457   15521 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem (1123 bytes)
	I0923 23:38:45.831538   15521 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem (1679 bytes)
	I0923 23:38:45.831629   15521 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem org=jenkins.addons-823099 san=[127.0.0.1 192.168.39.29 addons-823099 localhost minikube]
	I0923 23:38:45.920692   15521 provision.go:177] copyRemoteCerts
	I0923 23:38:45.920769   15521 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 23:38:45.920791   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:38:45.923583   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:45.923986   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:38:45.924002   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:45.924356   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:38:45.924566   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:38:45.924832   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:38:45.924985   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	I0923 23:38:46.010588   15521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0923 23:38:46.034096   15521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 23:38:46.056758   15521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0923 23:38:46.081040   15521 provision.go:87] duration metric: took 256.535012ms to configureAuth
	I0923 23:38:46.081074   15521 buildroot.go:189] setting minikube options for container-runtime
	I0923 23:38:46.081315   15521 config.go:182] Loaded profile config "addons-823099": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 23:38:46.081416   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:38:46.084885   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:46.085669   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:38:46.085696   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:46.086110   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:38:46.086464   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:38:46.086680   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:38:46.086852   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:38:46.087064   15521 main.go:141] libmachine: Using SSH client type: native
	I0923 23:38:46.087258   15521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0923 23:38:46.087278   15521 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0923 23:38:46.317743   15521 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0923 23:38:46.317769   15521 main.go:141] libmachine: Checking connection to Docker...
	I0923 23:38:46.317777   15521 main.go:141] libmachine: (addons-823099) Calling .GetURL
	I0923 23:38:46.319030   15521 main.go:141] libmachine: (addons-823099) DBG | Using libvirt version 6000000
	I0923 23:38:46.321409   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:46.321779   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:38:46.321804   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:46.321996   15521 main.go:141] libmachine: Docker is up and running!
	I0923 23:38:46.322104   15521 main.go:141] libmachine: Reticulating splines...
	I0923 23:38:46.322116   15521 client.go:171] duration metric: took 23.37599828s to LocalClient.Create
	I0923 23:38:46.322150   15521 start.go:167] duration metric: took 23.376076398s to libmachine.API.Create "addons-823099"
	I0923 23:38:46.322166   15521 start.go:293] postStartSetup for "addons-823099" (driver="kvm2")
	I0923 23:38:46.322180   15521 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 23:38:46.322208   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:38:46.322508   15521 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 23:38:46.322578   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:38:46.324896   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:46.325318   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:38:46.325337   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:46.325528   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:38:46.325723   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:38:46.325872   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:38:46.326059   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	I0923 23:38:46.410536   15521 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 23:38:46.414783   15521 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 23:38:46.414821   15521 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/addons for local assets ...
	I0923 23:38:46.414912   15521 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/files for local assets ...
	I0923 23:38:46.414938   15521 start.go:296] duration metric: took 92.765547ms for postStartSetup
	I0923 23:38:46.414968   15521 main.go:141] libmachine: (addons-823099) Calling .GetConfigRaw
	I0923 23:38:46.415530   15521 main.go:141] libmachine: (addons-823099) Calling .GetIP
	I0923 23:38:46.418325   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:46.418685   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:38:46.418723   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:46.418908   15521 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/config.json ...
	I0923 23:38:46.419089   15521 start.go:128] duration metric: took 23.492942575s to createHost
	I0923 23:38:46.419111   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:38:46.421225   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:46.421516   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:38:46.421547   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:46.421645   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:38:46.421824   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:38:46.421967   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:38:46.422177   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:38:46.422321   15521 main.go:141] libmachine: Using SSH client type: native
	I0923 23:38:46.422531   15521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0923 23:38:46.422544   15521 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 23:38:46.533050   15521 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727134726.509696447
	
	I0923 23:38:46.533076   15521 fix.go:216] guest clock: 1727134726.509696447
	I0923 23:38:46.533086   15521 fix.go:229] Guest: 2024-09-23 23:38:46.509696447 +0000 UTC Remote: 2024-09-23 23:38:46.419100225 +0000 UTC m=+23.595027380 (delta=90.596222ms)
	I0923 23:38:46.533110   15521 fix.go:200] guest clock delta is within tolerance: 90.596222ms
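(Editor's note: the fix.go lines above compare the guest clock, read via "date +%s.%N", against the host clock and check that the skew is within tolerance. The small sketch below shows that comparison on the logged sample value; the 2-second tolerance is an assumed figure, not minikube's.)

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta parses the guest's "date +%s.%N" output and returns
// how far the host clock is ahead of (or behind) the guest clock.
func guestClockDelta(guestOutput string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(int64(secs), int64((secs-math.Floor(secs))*1e9))
	return host.Sub(guest), nil
}

func main() {
	delta, err := guestClockDelta("1727134726.509696447\n", time.Now())
	if err != nil {
		fmt.Println(err)
		return
	}
	const tolerance = 2 * time.Second // assumed tolerance for illustration
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n",
		delta, delta > -tolerance && delta < tolerance)
}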
	I0923 23:38:46.533117   15521 start.go:83] releasing machines lock for "addons-823099", held for 23.607112252s
	I0923 23:38:46.533143   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:38:46.533469   15521 main.go:141] libmachine: (addons-823099) Calling .GetIP
	I0923 23:38:46.535967   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:46.536214   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:38:46.536242   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:46.536438   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:38:46.536933   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:38:46.537122   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:38:46.537236   15521 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 23:38:46.537290   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:38:46.537326   15521 ssh_runner.go:195] Run: cat /version.json
	I0923 23:38:46.537344   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:38:46.540050   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:46.540313   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:46.540468   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:38:46.540495   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:46.540659   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:38:46.540748   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:38:46.540775   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:46.540846   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:38:46.540921   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:38:46.540970   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:38:46.541076   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:38:46.541111   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	I0923 23:38:46.541201   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:38:46.541342   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	I0923 23:38:46.662512   15521 ssh_runner.go:195] Run: systemctl --version
	I0923 23:38:46.668932   15521 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0923 23:38:46.827889   15521 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 23:38:46.833604   15521 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 23:38:46.833746   15521 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 23:38:46.850062   15521 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 23:38:46.850089   15521 start.go:495] detecting cgroup driver to use...
	I0923 23:38:46.850148   15521 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 23:38:46.867425   15521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 23:38:46.882361   15521 docker.go:217] disabling cri-docker service (if available) ...
	I0923 23:38:46.882419   15521 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 23:38:46.897323   15521 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 23:38:46.911805   15521 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 23:38:47.036999   15521 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 23:38:47.203688   15521 docker.go:233] disabling docker service ...
	I0923 23:38:47.203767   15521 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 23:38:47.219064   15521 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 23:38:47.231715   15521 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 23:38:47.365365   15521 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 23:38:47.495284   15521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 23:38:47.508723   15521 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 23:38:47.526801   15521 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0923 23:38:47.526867   15521 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 23:38:47.536943   15521 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0923 23:38:47.537001   15521 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 23:38:47.547198   15521 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 23:38:47.557182   15521 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 23:38:47.567529   15521 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 23:38:47.578959   15521 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 23:38:47.589877   15521 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 23:38:47.608254   15521 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 23:38:47.618495   15521 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 23:38:47.627787   15521 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 23:38:47.627862   15521 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 23:38:47.640795   15521 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
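(Editor's note: the preceding lines show the runtime preparation sequence: probe the bridge-netfilter sysctl, fall back to loading br_netfilter when the key is absent, then enable IPv4 forwarding. The sketch below reproduces that sequence locally with os/exec instead of over SSH; it is an illustrative outline only.)

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and wraps any failure with its combined output.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %v: %s", name, args, err, out)
	}
	return nil
}

// prepareNetfilter mirrors the logged steps: if the bridge-nf sysctl is
// missing, load br_netfilter, then turn on IPv4 forwarding.
func prepareNetfilter() error {
	if err := run("sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		if err := run("modprobe", "br_netfilter"); err != nil {
			return err
		}
	}
	return run("sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
}

func main() {
	if err := prepareNetfilter(); err != nil {
		fmt.Println(err)
	}
}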
	I0923 23:38:47.650160   15521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 23:38:47.773450   15521 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0923 23:38:47.870212   15521 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0923 23:38:47.870328   15521 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0923 23:38:47.875329   15521 start.go:563] Will wait 60s for crictl version
	I0923 23:38:47.875422   15521 ssh_runner.go:195] Run: which crictl
	I0923 23:38:47.879286   15521 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 23:38:47.916386   15521 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0923 23:38:47.916536   15521 ssh_runner.go:195] Run: crio --version
	I0923 23:38:47.943232   15521 ssh_runner.go:195] Run: crio --version
	I0923 23:38:47.973111   15521 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0923 23:38:47.974418   15521 main.go:141] libmachine: (addons-823099) Calling .GetIP
	I0923 23:38:47.977389   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:47.977726   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:38:47.977771   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:47.977950   15521 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0923 23:38:47.982681   15521 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 23:38:47.995735   15521 kubeadm.go:883] updating cluster {Name:addons-823099 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:addons-823099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.29 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTy
pe:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 23:38:47.995872   15521 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 23:38:47.995937   15521 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 23:38:48.026187   15521 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0923 23:38:48.026255   15521 ssh_runner.go:195] Run: which lz4
	I0923 23:38:48.029934   15521 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0923 23:38:48.033681   15521 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0923 23:38:48.033709   15521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0923 23:38:49.244831   15521 crio.go:462] duration metric: took 1.21491674s to copy over tarball
	I0923 23:38:49.244910   15521 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0923 23:38:51.408420   15521 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.163482226s)
	I0923 23:38:51.408450   15521 crio.go:469] duration metric: took 2.163580195s to extract the tarball
	I0923 23:38:51.408457   15521 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0923 23:38:51.445104   15521 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 23:38:51.484376   15521 crio.go:514] all images are preloaded for cri-o runtime.
	I0923 23:38:51.484401   15521 cache_images.go:84] Images are preloaded, skipping loading
	I0923 23:38:51.484409   15521 kubeadm.go:934] updating node { 192.168.39.29 8443 v1.31.1 crio true true} ...
	I0923 23:38:51.484499   15521 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-823099 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.29
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-823099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 23:38:51.484557   15521 ssh_runner.go:195] Run: crio config
	I0923 23:38:51.538806   15521 cni.go:84] Creating CNI manager for ""
	I0923 23:38:51.538828   15521 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0923 23:38:51.538838   15521 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 23:38:51.538859   15521 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.29 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-823099 NodeName:addons-823099 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.29"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.29 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kube
rnetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 23:38:51.538985   15521 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.29
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-823099"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.29
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.29"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0923 23:38:51.539038   15521 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 23:38:51.548496   15521 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 23:38:51.548563   15521 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 23:38:51.557551   15521 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0923 23:38:51.574810   15521 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 23:38:51.590461   15521 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
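
The 2154-byte kubeadm.yaml.new transferred here is the config dumped above (kubeadm.go:187), rendered from the options struct logged at kubeadm.go:181. As a rough illustration of that rendering pattern (not minikube's actual bootstrapper code; the struct fields and the deliberately abridged template below are hypothetical simplifications), a minimal Go sketch:

// kubeadm_template_sketch.go - illustrative sketch only; not minikube's bootstrapper.
// Shows the general pattern of rendering a kubeadm config from a small options
// struct with text/template. The template is heavily abridged versus the real file.
package main

import (
	"os"
	"text/template"
)

type kubeadmOpts struct {
	AdvertiseAddress string
	APIServerPort    int
	NodeName         string
	PodSubnet        string
	ServiceSubnet    string
	K8sVersion       string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	opts := kubeadmOpts{
		AdvertiseAddress: "192.168.39.29",
		APIServerPort:    8443,
		NodeName:         "addons-823099",
		PodSubnet:        "10.244.0.0/16",
		ServiceSubnet:    "10.96.0.0/12",
		K8sVersion:       "v1.31.1",
	}
	// Render the abridged InitConfiguration/ClusterConfiguration documents to stdout.
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}

Running it prints a trimmed-down pair of kubeadm documents analogous to the full config logged above.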
	I0923 23:38:51.605904   15521 ssh_runner.go:195] Run: grep 192.168.39.29	control-plane.minikube.internal$ /etc/hosts
	I0923 23:38:51.609379   15521 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.29	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 23:38:51.620067   15521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 23:38:51.746991   15521 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 23:38:51.764430   15521 certs.go:68] Setting up /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099 for IP: 192.168.39.29
	I0923 23:38:51.764452   15521 certs.go:194] generating shared ca certs ...
	I0923 23:38:51.764479   15521 certs.go:226] acquiring lock for ca certs: {Name:mk91de42c259e96c17f08dd58c70c00821bd0735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:51.764627   15521 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key
	I0923 23:38:51.827925   15521 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt ...
	I0923 23:38:51.827961   15521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt: {Name:mk7bce46408bad28fa4c4ad82afe9d6bd10e26b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:51.828169   15521 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key ...
	I0923 23:38:51.828185   15521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key: {Name:mkfd724d8b1e5c4e28f581332eb148d4cdbcd3bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:51.828303   15521 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key
	I0923 23:38:51.937978   15521 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt ...
	I0923 23:38:51.938011   15521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt: {Name:mka59daefa132c631d082c68c6d4bee6c31dbed0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:51.938201   15521 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key ...
	I0923 23:38:51.938214   15521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key: {Name:mk74fd28ca9ebe05bacfd634b928864a1a7ce292 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:51.938314   15521 certs.go:256] generating profile certs ...
	I0923 23:38:51.938367   15521 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.key
	I0923 23:38:51.938381   15521 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.crt with IP's: []
	I0923 23:38:52.195361   15521 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.crt ...
	I0923 23:38:52.195393   15521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.crt: {Name:mkf53b392cc89a16e12244564032d9b45154080d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:52.195578   15521 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.key ...
	I0923 23:38:52.195591   15521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.key: {Name:mk9b41db6a73a405e689e669580e343c2766a447 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:52.195711   15521 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/apiserver.key.7600cdb9
	I0923 23:38:52.195731   15521 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/apiserver.crt.7600cdb9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.29]
	I0923 23:38:52.295200   15521 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/apiserver.crt.7600cdb9 ...
	I0923 23:38:52.295231   15521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/apiserver.crt.7600cdb9: {Name:mkae17567f7ac3bcae8f339aebdd9969213784de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:52.295413   15521 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/apiserver.key.7600cdb9 ...
	I0923 23:38:52.295433   15521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/apiserver.key.7600cdb9: {Name:mk496cd6f593f9c72852d6a78b567d84d704b066 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:52.295528   15521 certs.go:381] copying /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/apiserver.crt.7600cdb9 -> /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/apiserver.crt
	I0923 23:38:52.295617   15521 certs.go:385] copying /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/apiserver.key.7600cdb9 -> /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/apiserver.key
	I0923 23:38:52.295677   15521 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/proxy-client.key
	I0923 23:38:52.295695   15521 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/proxy-client.crt with IP's: []
	I0923 23:38:52.353357   15521 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/proxy-client.crt ...
	I0923 23:38:52.353388   15521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/proxy-client.crt: {Name:mke38bbbfeef7cd2c66dad6779df3ba32d8b0e4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:52.353569   15521 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/proxy-client.key ...
	I0923 23:38:52.353582   15521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/proxy-client.key: {Name:mka62603d541b89ee9d7c4fc26d23c4522e47be4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:52.353765   15521 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem (1675 bytes)
	I0923 23:38:52.353806   15521 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem (1082 bytes)
	I0923 23:38:52.353833   15521 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem (1123 bytes)
	I0923 23:38:52.353855   15521 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem (1679 bytes)
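
The certs.go/crypto.go lines above generate the shared CAs ("minikubeCA", "proxyClientCA") and the profile certificates. As a rough, self-contained illustration of what such CA generation involves (illustrative only; the subject name, key size, and lifetime here are assumptions, not minikube's exact parameters), a minimal sketch using crypto/x509:

// ca_sketch.go - illustrative only; not minikube's certs.go implementation.
// Generates a self-signed CA certificate and key, PEM-encoded to stdout.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
		IsCA:                  true,
	}
	// Self-signed: the template is also its own parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
}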
	I0923 23:38:52.354427   15521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 23:38:52.379337   15521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 23:38:52.400882   15521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 23:38:52.424525   15521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0923 23:38:52.450323   15521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0923 23:38:52.477687   15521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 23:38:52.499751   15521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 23:38:52.521727   15521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 23:38:52.543557   15521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 23:38:52.565278   15521 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 23:38:52.581109   15521 ssh_runner.go:195] Run: openssl version
	I0923 23:38:52.586569   15521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 23:38:52.596572   15521 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 23:38:52.600599   15521 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0923 23:38:52.600654   15521 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 23:38:52.606001   15521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 23:38:52.615760   15521 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 23:38:52.619451   15521 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 23:38:52.619508   15521 kubeadm.go:392] StartCluster: {Name:addons-823099 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-823099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.29 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 23:38:52.619583   15521 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0923 23:38:52.620006   15521 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0923 23:38:52.654320   15521 cri.go:89] found id: ""
	I0923 23:38:52.654386   15521 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 23:38:52.663817   15521 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 23:38:52.673074   15521 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 23:38:52.681948   15521 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 23:38:52.681974   15521 kubeadm.go:157] found existing configuration files:
	
	I0923 23:38:52.682026   15521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 23:38:52.690360   15521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 23:38:52.690418   15521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 23:38:52.698969   15521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 23:38:52.707269   15521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 23:38:52.707357   15521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 23:38:52.716380   15521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 23:38:52.725235   15521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 23:38:52.725319   15521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 23:38:52.734575   15521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 23:38:52.743504   15521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 23:38:52.743572   15521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 23:38:52.752994   15521 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0923 23:38:52.803786   15521 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0923 23:38:52.803907   15521 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 23:38:52.902853   15521 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 23:38:52.903001   15521 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 23:38:52.903126   15521 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 23:38:52.909824   15521 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 23:38:52.911676   15521 out.go:235]   - Generating certificates and keys ...
	I0923 23:38:52.912753   15521 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 23:38:52.912873   15521 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 23:38:53.248886   15521 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0923 23:38:53.341826   15521 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0923 23:38:53.485454   15521 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0923 23:38:53.623967   15521 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0923 23:38:53.679532   15521 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0923 23:38:53.679721   15521 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-823099 localhost] and IPs [192.168.39.29 127.0.0.1 ::1]
	I0923 23:38:53.905840   15521 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0923 23:38:53.906024   15521 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-823099 localhost] and IPs [192.168.39.29 127.0.0.1 ::1]
	I0923 23:38:54.051813   15521 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0923 23:38:54.395310   15521 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0923 23:38:54.735052   15521 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0923 23:38:54.735299   15521 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 23:38:54.847419   15521 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 23:38:54.936586   15521 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0923 23:38:55.060632   15521 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 23:38:55.214060   15521 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 23:38:55.303678   15521 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 23:38:55.304286   15521 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 23:38:55.306790   15521 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 23:38:55.308801   15521 out.go:235]   - Booting up control plane ...
	I0923 23:38:55.308940   15521 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 23:38:55.309057   15521 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 23:38:55.309138   15521 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 23:38:55.324842   15521 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 23:38:55.330701   15521 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 23:38:55.330768   15521 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 23:38:55.470043   15521 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0923 23:38:55.470158   15521 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0923 23:38:56.470778   15521 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001582152s
	I0923 23:38:56.470872   15521 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0923 23:39:01.969265   15521 kubeadm.go:310] [api-check] The API server is healthy after 5.501475075s
	I0923 23:39:01.981867   15521 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 23:39:02.004452   15521 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 23:39:02.039983   15521 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 23:39:02.040235   15521 kubeadm.go:310] [mark-control-plane] Marking the node addons-823099 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 23:39:02.057479   15521 kubeadm.go:310] [bootstrap-token] Using token: fyz7kl.eyjwn42xmcr354pj
	I0923 23:39:02.059006   15521 out.go:235]   - Configuring RBAC rules ...
	I0923 23:39:02.059157   15521 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 23:39:02.076960   15521 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 23:39:02.086257   15521 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 23:39:02.092000   15521 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 23:39:02.096548   15521 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 23:39:02.102638   15521 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 23:39:02.377281   15521 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 23:39:02.807346   15521 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 23:39:03.376529   15521 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 23:39:03.377848   15521 kubeadm.go:310] 
	I0923 23:39:03.377926   15521 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 23:39:03.377937   15521 kubeadm.go:310] 
	I0923 23:39:03.378021   15521 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 23:39:03.378030   15521 kubeadm.go:310] 
	I0923 23:39:03.378058   15521 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 23:39:03.378126   15521 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 23:39:03.378208   15521 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 23:39:03.378228   15521 kubeadm.go:310] 
	I0923 23:39:03.378321   15521 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 23:39:03.378330   15521 kubeadm.go:310] 
	I0923 23:39:03.378390   15521 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 23:39:03.378400   15521 kubeadm.go:310] 
	I0923 23:39:03.378499   15521 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 23:39:03.378600   15521 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 23:39:03.378669   15521 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 23:39:03.378680   15521 kubeadm.go:310] 
	I0923 23:39:03.378788   15521 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 23:39:03.378897   15521 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 23:39:03.378907   15521 kubeadm.go:310] 
	I0923 23:39:03.378995   15521 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fyz7kl.eyjwn42xmcr354pj \
	I0923 23:39:03.379107   15521 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2d4dd9f37d73db6513005d0e16c5a35989d3b102eed8cac0bf67b360d3396aa8 \
	I0923 23:39:03.379129   15521 kubeadm.go:310] 	--control-plane 
	I0923 23:39:03.379133   15521 kubeadm.go:310] 
	I0923 23:39:03.379245   15521 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 23:39:03.379266   15521 kubeadm.go:310] 
	I0923 23:39:03.379389   15521 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fyz7kl.eyjwn42xmcr354pj \
	I0923 23:39:03.379523   15521 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2d4dd9f37d73db6513005d0e16c5a35989d3b102eed8cac0bf67b360d3396aa8 
	I0923 23:39:03.380043   15521 kubeadm.go:310] W0923 23:38:52.785015     814 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 23:39:03.380394   15521 kubeadm.go:310] W0923 23:38:52.785716     814 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 23:39:03.380489   15521 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
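
The join commands printed by kubeadm above carry a --discovery-token-ca-cert-hash, which per kubeadm's documented format is the hex-encoded SHA-256 of the cluster CA's DER-encoded Subject Public Key Info. A minimal Go sketch recomputing such a hash from a CA certificate file (the input path is an example, not taken from this log; on the node the cluster CA lives under /var/lib/minikube/certs/ca.crt):

// cacerthash_sketch.go - illustrative sketch of recomputing a kubeadm
// discovery-token-ca-cert-hash value from a PEM-encoded CA certificate.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("ca.crt") // example path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Hash the DER-encoded SubjectPublicKeyInfo, as kubeadm does.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}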
	I0923 23:39:03.380508   15521 cni.go:84] Creating CNI manager for ""
	I0923 23:39:03.380560   15521 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0923 23:39:03.383452   15521 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0923 23:39:03.384682   15521 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0923 23:39:03.397094   15521 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
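
The scp on the previous line installs a 496-byte bridge CNI conflist at /etc/cni/net.d/1-k8s.conflist, but the log does not reproduce its contents. The sketch below writes a typical bridge+portmap conflist for the 10.244.0.0/16 pod CIDR seen earlier; this is an assumed example of such a file, not necessarily byte-for-byte what minikube ships.

// cni_conflist_sketch.go - illustrative only. Writes a representative
// bridge+portmap CNI conflist with host-local IPAM for 10.244.0.0/16.
package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
`

func main() {
	// Written to the working directory here; on the node the target path is
	// /etc/cni/net.d/1-k8s.conflist.
	if err := os.WriteFile("1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}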
	I0923 23:39:03.417722   15521 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 23:39:03.417811   15521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 23:39:03.417847   15521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-823099 minikube.k8s.io/updated_at=2024_09_23T23_39_03_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c minikube.k8s.io/name=addons-823099 minikube.k8s.io/primary=true
	I0923 23:39:03.459069   15521 ops.go:34] apiserver oom_adj: -16
	I0923 23:39:03.574741   15521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 23:39:04.075852   15521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 23:39:04.575549   15521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 23:39:05.075536   15521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 23:39:05.574791   15521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 23:39:06.075455   15521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 23:39:06.575226   15521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 23:39:07.075498   15521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 23:39:07.575490   15521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 23:39:07.682573   15521 kubeadm.go:1113] duration metric: took 4.264822927s to wait for elevateKubeSystemPrivileges
	I0923 23:39:07.682604   15521 kubeadm.go:394] duration metric: took 15.063102314s to StartCluster
	I0923 23:39:07.682621   15521 settings.go:142] acquiring lock: {Name:mk2498060bfcc1414033a486e76a223de88ed325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:39:07.682743   15521 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0923 23:39:07.683441   15521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/kubeconfig: {Name:mk3b29105ccc2e600682c419fcfd45ce5d22b5fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:39:07.683700   15521 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0923 23:39:07.683729   15521 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.29 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 23:39:07.683777   15521 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0923 23:39:07.683896   15521 addons.go:69] Setting yakd=true in profile "addons-823099"
	I0923 23:39:07.683906   15521 addons.go:69] Setting default-storageclass=true in profile "addons-823099"
	I0923 23:39:07.683910   15521 addons.go:69] Setting cloud-spanner=true in profile "addons-823099"
	I0923 23:39:07.683926   15521 addons.go:69] Setting registry=true in profile "addons-823099"
	I0923 23:39:07.683932   15521 addons.go:234] Setting addon cloud-spanner=true in "addons-823099"
	I0923 23:39:07.683939   15521 addons.go:234] Setting addon registry=true in "addons-823099"
	I0923 23:39:07.683937   15521 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-823099"
	I0923 23:39:07.683936   15521 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-823099"
	I0923 23:39:07.683953   15521 config.go:182] Loaded profile config "addons-823099": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 23:39:07.683968   15521 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-823099"
	I0923 23:39:07.683981   15521 addons.go:69] Setting storage-provisioner=true in profile "addons-823099"
	I0923 23:39:07.683982   15521 addons.go:69] Setting ingress=true in profile "addons-823099"
	I0923 23:39:07.683983   15521 addons.go:69] Setting gcp-auth=true in profile "addons-823099"
	I0923 23:39:07.683992   15521 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-823099"
	I0923 23:39:07.684000   15521 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-823099"
	I0923 23:39:07.684003   15521 addons.go:69] Setting inspektor-gadget=true in profile "addons-823099"
	I0923 23:39:07.684006   15521 addons.go:69] Setting volcano=true in profile "addons-823099"
	I0923 23:39:07.684009   15521 mustload.go:65] Loading cluster: addons-823099
	I0923 23:39:07.684014   15521 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-823099"
	I0923 23:39:07.683928   15521 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-823099"
	I0923 23:39:07.683992   15521 addons.go:234] Setting addon storage-provisioner=true in "addons-823099"
	I0923 23:39:07.684124   15521 host.go:66] Checking if "addons-823099" exists ...
	I0923 23:39:07.684015   15521 addons.go:234] Setting addon inspektor-gadget=true in "addons-823099"
	I0923 23:39:07.684199   15521 host.go:66] Checking if "addons-823099" exists ...
	I0923 23:39:07.684214   15521 config.go:182] Loaded profile config "addons-823099": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 23:39:07.683970   15521 host.go:66] Checking if "addons-823099" exists ...
	I0923 23:39:07.684535   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.684572   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.684595   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.684622   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.684005   15521 host.go:66] Checking if "addons-823099" exists ...
	I0923 23:39:07.683959   15521 addons.go:69] Setting ingress-dns=true in profile "addons-823099"
	I0923 23:39:07.684654   15521 addons.go:234] Setting addon ingress-dns=true in "addons-823099"
	I0923 23:39:07.684657   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.684690   15521 host.go:66] Checking if "addons-823099" exists ...
	I0923 23:39:07.684716   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.684747   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.683970   15521 host.go:66] Checking if "addons-823099" exists ...
	I0923 23:39:07.683918   15521 addons.go:234] Setting addon yakd=true in "addons-823099"
	I0923 23:39:07.684807   15521 host.go:66] Checking if "addons-823099" exists ...
	I0923 23:39:07.685044   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.685062   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.685073   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.685093   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.685134   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.683995   15521 addons.go:234] Setting addon ingress=true in "addons-823099"
	I0923 23:39:07.685161   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.685181   15521 host.go:66] Checking if "addons-823099" exists ...
	I0923 23:39:07.684602   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.684017   15521 addons.go:69] Setting metrics-server=true in profile "addons-823099"
	I0923 23:39:07.685232   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.685242   15521 addons.go:234] Setting addon metrics-server=true in "addons-823099"
	I0923 23:39:07.685264   15521 host.go:66] Checking if "addons-823099" exists ...
	I0923 23:39:07.684024   15521 host.go:66] Checking if "addons-823099" exists ...
	I0923 23:39:07.685615   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.685642   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.685796   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.685854   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.685977   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.686016   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.684027   15521 addons.go:69] Setting volumesnapshots=true in profile "addons-823099"
	I0923 23:39:07.686371   15521 addons.go:234] Setting addon volumesnapshots=true in "addons-823099"
	I0923 23:39:07.686398   15521 host.go:66] Checking if "addons-823099" exists ...
	I0923 23:39:07.684018   15521 addons.go:234] Setting addon volcano=true in "addons-823099"
	I0923 23:39:07.686673   15521 host.go:66] Checking if "addons-823099" exists ...
	I0923 23:39:07.686498   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.686778   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.684634   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.687024   15521 out.go:177] * Verifying Kubernetes components...
	I0923 23:39:07.688506   15521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 23:39:07.703440   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37923
	I0923 23:39:07.705810   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35535
	I0923 23:39:07.708733   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.708779   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.709090   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41741
	I0923 23:39:07.709229   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.709266   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.709595   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.709629   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.713224   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.713355   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.713390   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.713862   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.713881   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.714302   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.714377   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.714392   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.714451   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.714464   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.715015   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.715037   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.715432   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.715475   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.715787   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:39:07.716507   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:39:07.719153   15521 host.go:66] Checking if "addons-823099" exists ...
	I0923 23:39:07.719542   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.719578   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.720949   15521 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-823099"
	I0923 23:39:07.720998   15521 host.go:66] Checking if "addons-823099" exists ...
	I0923 23:39:07.721386   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.721432   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.735627   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38553
	I0923 23:39:07.736277   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.736638   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46739
	I0923 23:39:07.737105   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.737122   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.737510   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.738081   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.738098   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.738156   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44185
	I0923 23:39:07.739268   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.739318   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.739918   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.739959   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.740211   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.740321   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45651
	I0923 23:39:07.740861   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.740881   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.740953   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.740993   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.741352   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.741901   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.741947   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.742154   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.742613   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.742628   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.743023   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.743085   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37759
	I0923 23:39:07.743569   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.743610   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.746643   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.747874   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.747903   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.748324   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.748466   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33761
	I0923 23:39:07.748965   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.749004   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.749096   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.749726   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.749746   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.750196   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.750719   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.750754   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.758701   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46265
	I0923 23:39:07.759243   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.759784   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.759805   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.760206   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.760261   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38907
	I0923 23:39:07.761129   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.761175   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.761441   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33217
	I0923 23:39:07.761985   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.762828   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.762847   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.763324   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.763665   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:39:07.765500   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46691
	I0923 23:39:07.765573   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.766125   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.766145   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.766801   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.766864   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44061
	I0923 23:39:07.767084   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.767500   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.768285   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.768301   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.768446   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:39:07.768843   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.768866   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.768932   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.769275   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.769821   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.769867   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.771258   15521 addons.go:234] Setting addon default-storageclass=true in "addons-823099"
	I0923 23:39:07.771300   15521 host.go:66] Checking if "addons-823099" exists ...
	I0923 23:39:07.771655   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.771687   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.771922   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43353
	I0923 23:39:07.772228   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.772255   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.774448   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46391
	I0923 23:39:07.780977   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.781565   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.781590   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.781920   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.782058   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:39:07.783913   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:39:07.785056   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34087
	I0923 23:39:07.785575   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41059
	I0923 23:39:07.785629   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.786110   15521 out.go:177]   - Using image docker.io/registry:2.8.3
	I0923 23:39:07.786146   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.786320   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.786334   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.786772   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.787011   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:39:07.788584   15521 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0923 23:39:07.789007   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:39:07.789550   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.789568   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.789756   15521 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0923 23:39:07.789773   15521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0923 23:39:07.789788   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:39:07.790146   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.790662   15521 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 23:39:07.791941   15521 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 23:39:07.793680   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.793727   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.793998   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39727
	I0923 23:39:07.794008   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.794031   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33695
	I0923 23:39:07.794471   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:39:07.794493   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.794701   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:39:07.794875   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:39:07.794877   15521 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0923 23:39:07.794982   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:39:07.795069   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	I0923 23:39:07.796623   15521 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 23:39:07.796643   15521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0923 23:39:07.796662   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:39:07.798956   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38317
	I0923 23:39:07.799731   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.800110   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:39:07.800142   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.800477   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:39:07.800553   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36347
	I0923 23:39:07.801546   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.801641   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.801654   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.801712   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:39:07.801839   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:39:07.801899   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.802076   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.802095   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.802220   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.802235   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.802360   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.802376   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.802425   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.802551   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:39:07.802641   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38501
	I0923 23:39:07.802785   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.802803   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	I0923 23:39:07.802788   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.802965   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:39:07.803026   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:39:07.803767   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.803787   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.803854   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.804090   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.804743   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.804784   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.805129   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.805147   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.805173   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:39:07.805248   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.805498   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:39:07.805769   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.806098   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.806118   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.806136   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:39:07.806514   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.807108   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.806545   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.806630   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45039
	I0923 23:39:07.807369   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:39:07.807510   15521 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0923 23:39:07.808434   15521 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0923 23:39:07.808504   15521 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 23:39:07.809038   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:39:07.809300   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:07.809332   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:07.809348   15521 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 23:39:07.809359   15521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0923 23:39:07.809376   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:39:07.809986   15521 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 23:39:07.810003   15521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 23:39:07.810016   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:39:07.810062   15521 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0923 23:39:07.810069   15521 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0923 23:39:07.810078   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:39:07.811006   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:07.811042   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:07.811050   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:07.811145   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:07.811156   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:07.811952   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.812979   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.812997   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.813331   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:07.813347   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	W0923 23:39:07.813447   15521 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0923 23:39:07.813946   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.814117   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35041
	I0923 23:39:07.814661   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.814885   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.815227   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.815248   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.815430   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:39:07.815545   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.815727   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:39:07.816076   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:39:07.816315   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:39:07.816316   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:39:07.817096   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	I0923 23:39:07.817135   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.817285   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.817306   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:39:07.817432   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:39:07.817458   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.817467   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:39:07.817640   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:39:07.817797   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	I0923 23:39:07.818443   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.818475   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.818854   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:39:07.818916   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.818935   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:39:07.819103   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:39:07.819327   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:39:07.819449   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.819556   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:39:07.819704   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	I0923 23:39:07.820144   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44505
	I0923 23:39:07.821232   15521 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0923 23:39:07.822387   15521 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0923 23:39:07.822407   15521 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0923 23:39:07.822426   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:39:07.823519   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.824593   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.824617   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.825182   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.825425   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:39:07.826173   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.826824   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:39:07.826852   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.827033   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:39:07.827202   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:39:07.827342   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:39:07.827473   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	I0923 23:39:07.833317   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:39:07.834549   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40345
	I0923 23:39:07.834702   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46107
	I0923 23:39:07.835036   15521 out.go:177]   - Using image docker.io/busybox:stable
	I0923 23:39:07.835302   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.835304   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.835362   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43451
	I0923 23:39:07.835997   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.836020   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.836421   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.836527   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.836860   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:39:07.837187   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.837204   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.837294   15521 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0923 23:39:07.837615   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46247
	I0923 23:39:07.837726   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.838168   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.838186   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.838238   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:39:07.838278   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.838430   15521 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 23:39:07.838454   15521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0923 23:39:07.838486   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:39:07.838837   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:39:07.838942   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.838956   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.839318   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.839611   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:39:07.840065   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.840126   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:39:07.840224   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:39:07.840573   15521 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0923 23:39:07.841432   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:39:07.841867   15521 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0923 23:39:07.841976   15521 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0923 23:39:07.841989   15521 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0923 23:39:07.842007   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:39:07.843249   15521 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0923 23:39:07.843258   15521 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0923 23:39:07.843274   15521 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0923 23:39:07.843293   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:39:07.843539   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.844019   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:39:07.844044   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.844276   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:39:07.844626   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:39:07.844835   15521 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 23:39:07.844851   15521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0923 23:39:07.844867   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:39:07.844970   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:39:07.845115   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	I0923 23:39:07.845546   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:39:07.847193   15521 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0923 23:39:07.847689   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.848226   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.848362   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:39:07.848385   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.848554   15521 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0923 23:39:07.848568   15521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0923 23:39:07.848584   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:39:07.849188   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:39:07.849248   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.849270   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:39:07.849286   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.849314   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:39:07.849370   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:39:07.849384   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.849407   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:39:07.849452   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:39:07.849490   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:39:07.849597   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:39:07.849640   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:39:07.849646   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	I0923 23:39:07.849718   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:39:07.849850   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	I0923 23:39:07.850150   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:39:07.850314   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	I0923 23:39:07.851886   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.852209   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:39:07.852227   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.852511   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:39:07.852685   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:39:07.852836   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:39:07.852856   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45881
	I0923 23:39:07.853005   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	I0923 23:39:07.853307   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.853831   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.853845   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.854157   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.854337   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:39:07.854969   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45961
	I0923 23:39:07.855341   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.855829   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.855846   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.855913   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:39:07.856203   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.856410   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:39:07.857704   15521 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0923 23:39:07.857995   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:39:07.858210   15521 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 23:39:07.858230   15521 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 23:39:07.858247   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:39:07.859971   15521 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0923 23:39:07.860879   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.861260   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:39:07.861284   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.861453   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:39:07.861596   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:39:07.861697   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:39:07.861858   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	I0923 23:39:07.862305   15521 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0923 23:39:07.863581   15521 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0923 23:39:07.864972   15521 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0923 23:39:07.866055   15521 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0923 23:39:07.867358   15521 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0923 23:39:07.868993   15521 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0923 23:39:07.870321   15521 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0923 23:39:07.870349   15521 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0923 23:39:07.870377   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:39:07.873724   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.874117   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:39:07.874148   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.874293   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:39:07.874468   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:39:07.874636   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:39:07.874743   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	W0923 23:39:07.876787   15521 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:52054->192.168.39.29:22: read: connection reset by peer
	I0923 23:39:07.876819   15521 retry.go:31] will retry after 325.765673ms: ssh: handshake failed: read tcp 192.168.39.1:52054->192.168.39.29:22: read: connection reset by peer
	I0923 23:39:08.112607   15521 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0923 23:39:08.112629   15521 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0923 23:39:08.174341   15521 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 23:39:08.174422   15521 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0923 23:39:08.189231   15521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 23:39:08.223406   15521 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0923 23:39:08.223436   15521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0923 23:39:08.238226   15521 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0923 23:39:08.238253   15521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0923 23:39:08.286222   15521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 23:39:08.286427   15521 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0923 23:39:08.286456   15521 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0923 23:39:08.293938   15521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 23:39:08.304026   15521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 23:39:08.304633   15521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 23:39:08.357785   15521 node_ready.go:35] waiting up to 6m0s for node "addons-823099" to be "Ready" ...
	I0923 23:39:08.361610   15521 node_ready.go:49] node "addons-823099" has status "Ready":"True"
	I0923 23:39:08.361634   15521 node_ready.go:38] duration metric: took 3.816238ms for node "addons-823099" to be "Ready" ...
	I0923 23:39:08.361643   15521 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 23:39:08.370384   15521 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fmtkt" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:08.389666   15521 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0923 23:39:08.389694   15521 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0923 23:39:08.393171   15521 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0923 23:39:08.393188   15521 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0923 23:39:08.414092   15521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0923 23:39:08.415846   15521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 23:39:08.424751   15521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0923 23:39:08.462715   15521 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0923 23:39:08.462737   15521 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0923 23:39:08.507754   15521 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0923 23:39:08.507783   15521 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0923 23:39:08.593622   15521 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 23:39:08.593654   15521 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0923 23:39:08.629405   15521 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0923 23:39:08.629437   15521 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0923 23:39:08.632087   15521 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0923 23:39:08.632113   15521 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0923 23:39:08.661201   15521 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0923 23:39:08.661224   15521 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0923 23:39:08.691801   15521 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0923 23:39:08.691827   15521 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0923 23:39:08.714253   15521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 23:39:08.819060   15521 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0923 23:39:08.819096   15521 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0923 23:39:08.831081   15521 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0923 23:39:08.831110   15521 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0923 23:39:08.886522   15521 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0923 23:39:08.886559   15521 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0923 23:39:09.009250   15521 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0923 23:39:09.009293   15521 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0923 23:39:09.046881   15521 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0923 23:39:09.046906   15521 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0923 23:39:09.157084   15521 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0923 23:39:09.157109   15521 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0923 23:39:09.166062   15521 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0923 23:39:09.166097   15521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0923 23:39:09.267085   15521 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 23:39:09.267116   15521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0923 23:39:09.292567   15521 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0923 23:39:09.292607   15521 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0923 23:39:09.429637   15521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0923 23:39:09.445286   15521 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0923 23:39:09.445326   15521 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0923 23:39:09.492474   15521 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0923 23:39:09.492516   15521 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0923 23:39:09.565613   15521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 23:39:09.721455   15521 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 23:39:09.721493   15521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0923 23:39:09.840988   15521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 23:39:09.948899   15521 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0923 23:39:09.948926   15521 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0923 23:39:10.140834   15521 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.966375459s)
	I0923 23:39:10.140875   15521 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0923 23:39:10.141396   15521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.952129655s)
	I0923 23:39:10.141443   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:10.142827   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:10.143945   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:10.143972   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:10.143992   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:10.144008   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:10.144020   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:10.144388   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:10.144424   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:10.144431   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:10.281273   15521 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0923 23:39:10.281305   15521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0923 23:39:10.378453   15521 pod_ready.go:103] pod "coredns-7c65d6cfc9-fmtkt" in "kube-system" namespace has status "Ready":"False"
	I0923 23:39:10.646247   15521 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-823099" context rescaled to 1 replicas
	I0923 23:39:10.659756   15521 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0923 23:39:10.659783   15521 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0923 23:39:10.917202   15521 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0923 23:39:10.917226   15521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0923 23:39:11.091159   15521 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0923 23:39:11.091181   15521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0923 23:39:11.170283   15521 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 23:39:11.170310   15521 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0923 23:39:11.230097   15521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 23:39:12.257220   15521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.970955837s)
	I0923 23:39:12.257279   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:12.257296   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:12.257605   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:12.257667   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:12.257688   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:12.257702   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:12.257712   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:12.257950   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:12.257978   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:12.257992   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:12.474315   15521 pod_ready.go:103] pod "coredns-7c65d6cfc9-fmtkt" in "kube-system" namespace has status "Ready":"False"
	I0923 23:39:12.579345   15521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.28537252s)
	I0923 23:39:12.579401   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:12.579415   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:12.579416   15521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.275358792s)
	I0923 23:39:12.579452   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:12.579468   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:12.579812   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:12.579813   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:12.579872   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:12.579881   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:12.579827   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:12.579909   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:12.579927   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:12.579941   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:12.579841   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:12.579889   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:12.580178   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:12.580190   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:12.580247   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:12.580256   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:12.580271   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:12.695053   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:12.695077   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:12.695384   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:12.695434   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:12.695455   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:14.801414   15521 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0923 23:39:14.801451   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:39:14.804720   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:14.805099   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:39:14.805139   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:14.805316   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:39:14.805553   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:39:14.805707   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:39:14.805897   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	I0923 23:39:14.982300   15521 pod_ready.go:103] pod "coredns-7c65d6cfc9-fmtkt" in "kube-system" namespace has status "Ready":"False"
	I0923 23:39:15.080173   15521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.775510965s)
	I0923 23:39:15.080238   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:15.080251   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:15.080246   15521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.666114058s)
	I0923 23:39:15.080267   15521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.664395261s)
	I0923 23:39:15.080284   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:15.080302   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:15.080304   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:15.080351   15521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.655576289s)
	I0923 23:39:15.080364   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:15.080367   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:15.080450   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:15.080463   15521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.366181047s)
	I0923 23:39:15.080486   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:15.080496   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:15.080565   15521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.650891016s)
	I0923 23:39:15.080647   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:15.080661   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:15.082553   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:15.082564   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:15.082580   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:15.082585   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:15.082594   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:15.082604   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:15.082584   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:15.082626   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:15.082636   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:15.082647   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:15.082655   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:15.082668   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:15.082674   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:15.082611   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:15.082691   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:15.082687   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:15.082680   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:15.082659   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:15.082718   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:15.082722   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:15.082726   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:15.082731   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:15.082708   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:15.082741   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:15.082749   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:15.082757   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:15.082764   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:15.082766   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:15.082735   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:15.082783   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:15.083277   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:15.083295   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:15.083312   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:15.083337   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:15.083354   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:15.083406   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:15.083372   15521 addons.go:475] Verifying addon ingress=true in "addons-823099"
	I0923 23:39:15.083518   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:15.083528   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:15.083746   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:15.083771   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:15.083777   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:15.084317   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:15.084354   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:15.084376   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:15.084382   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:15.084390   15521 addons.go:475] Verifying addon metrics-server=true in "addons-823099"
	I0923 23:39:15.084467   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:15.084473   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:15.084479   15521 addons.go:475] Verifying addon registry=true in "addons-823099"
	I0923 23:39:15.085995   15521 out.go:177] * Verifying registry addon...
	I0923 23:39:15.086007   15521 out.go:177] * Verifying ingress addon...
	I0923 23:39:15.085999   15521 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-823099 service yakd-dashboard -n yakd-dashboard
	
	I0923 23:39:15.088530   15521 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0923 23:39:15.088530   15521 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0923 23:39:15.123892   15521 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0923 23:39:15.148951   15521 addons.go:234] Setting addon gcp-auth=true in "addons-823099"
	I0923 23:39:15.149022   15521 host.go:66] Checking if "addons-823099" exists ...
	I0923 23:39:15.149444   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:15.149498   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:15.156748   15521 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0923 23:39:15.156776   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:15.156871   15521 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0923 23:39:15.156894   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
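
The two kapi.go waits above poll the registry and ingress-nginx pods by label selector until they leave Pending. A minimal sketch of that style of wait using client-go, assuming a kubeconfig at the default path; the namespace and selector are copied from the log, while the poll interval and timeout here are illustrative rather than minikube's own values:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabel polls until every pod matching the selector reports phase Running.
func waitForLabel(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			running := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					running = false // still Pending, as in the log lines above
				}
			}
			if running {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pods %q in %q not Running after %v", selector, ns, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitForLabel(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute))
}
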
	I0923 23:39:15.165454   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38557
	I0923 23:39:15.166065   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:15.166623   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:15.166651   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:15.167013   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:15.167737   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:15.167785   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:15.183598   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41479
	I0923 23:39:15.184008   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:15.184531   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:15.184550   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:15.184913   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:15.185133   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:39:15.186845   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:39:15.187076   15521 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0923 23:39:15.187097   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:39:15.190490   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:15.190909   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:39:15.190948   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:15.191144   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:39:15.191345   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:39:15.191625   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:39:15.191841   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
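
The sshutil line above assembles an SSH client for the node from the driver-provided IP, port, key path and user. A rough equivalent with golang.org/x/crypto/ssh, reusing those values from the log; skipping host-key verification is an assumption made here for brevity, since the test VM's key is not pinned:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // assumption for this sketch: host key not pinned
	}
	client, err := ssh.Dial("tcp", "192.168.39.29:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	fmt.Println("connected as docker@192.168.39.29")
}
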
	I0923 23:39:15.290771   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:15.290792   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:15.291156   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:15.291204   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:15.291213   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:15.608008   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:15.608181   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:15.663866   15521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.098204567s)
	W0923 23:39:15.663915   15521 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 23:39:15.663942   15521 retry.go:31] will retry after 155.263016ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
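
The failure above is an ordering race: the VolumeSnapshotClass object sits in the same apply batch as the CRDs that define its kind, so the first pass hits "no matches for kind" and the installer retries (and, a few lines below, re-applies with --force). A hedged sketch of one way to avoid the race, waiting for the CRD to become Established before applying the class; the file paths come from the log, while wrapping kubectl from Go is purely illustrative:

package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) error {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	// The CRD itself applied cleanly in the stdout above.
	_ = run("apply", "-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml")
	// Block until the API server actually serves the new kind.
	_ = run("wait", "--for=condition=established", "--timeout=60s",
		"crd/volumesnapshotclasses.snapshot.storage.k8s.io")
	// Only now apply the VolumeSnapshotClass, avoiding "no matches for kind".
	_ = run("apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml")
}
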
	I0923 23:39:15.663943   15521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.822915237s)
	I0923 23:39:15.663986   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:15.663996   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:15.664271   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:15.664295   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:15.664306   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:15.664280   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:15.664315   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:15.664608   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:15.664630   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:15.820233   15521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 23:39:16.092842   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:16.094282   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:16.598768   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:16.599105   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:17.384250   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:17.386825   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:17.406922   15521 pod_ready.go:103] pod "coredns-7c65d6cfc9-fmtkt" in "kube-system" namespace has status "Ready":"False"
	I0923 23:39:17.409629   15521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.179468555s)
	I0923 23:39:17.409649   15521 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.222551947s)
	I0923 23:39:17.409675   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:17.409696   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:17.410005   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:17.410058   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:17.410074   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:17.410089   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:17.410101   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:17.410329   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:17.410346   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:17.410355   15521 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-823099"
	I0923 23:39:17.410358   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:17.411136   15521 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 23:39:17.412024   15521 out.go:177] * Verifying csi-hostpath-driver addon...
	I0923 23:39:17.413560   15521 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0923 23:39:17.414261   15521 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0923 23:39:17.414746   15521 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0923 23:39:17.414766   15521 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0923 23:39:17.482533   15521 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0923 23:39:17.482556   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:17.512131   15521 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0923 23:39:17.512159   15521 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0923 23:39:17.604150   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:17.604278   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:17.608747   15521 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 23:39:17.608767   15521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0923 23:39:17.684509   15521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 23:39:17.918552   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:18.093404   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:18.096529   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:18.238589   15521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.418299415s)
	I0923 23:39:18.238642   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:18.238659   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:18.238975   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:18.238997   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:18.239004   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:18.239015   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:18.239024   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:18.239271   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:18.239324   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:18.239340   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:18.418978   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:18.601947   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:18.602098   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:18.821107   15521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.136556988s)
	I0923 23:39:18.821156   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:18.821172   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:18.821448   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:18.821469   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:18.821483   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:18.821490   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:18.821766   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:18.821781   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:18.821801   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:18.823766   15521 addons.go:475] Verifying addon gcp-auth=true in "addons-823099"
	I0923 23:39:18.825653   15521 out.go:177] * Verifying gcp-auth addon...
	I0923 23:39:18.828295   15521 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0923 23:39:18.850143   15521 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0923 23:39:18.850163   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:18.920541   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:19.100926   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:19.107040   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:19.336759   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:19.421467   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:19.593866   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:19.594253   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:19.832242   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:19.878336   15521 pod_ready.go:98] pod "coredns-7c65d6cfc9-fmtkt" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 23:39:19 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 23:39:08 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 23:39:08 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 23:39:08 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 23:39:07 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.29 HostIPs:[{IP:192.168.39.29}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-23 23:39:08 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-23 23:39:12 +0000 UTC,FinishedAt:2024-09-23 23:39:18 +0000 UTC,ContainerID:cri-o://45a5b46a879fb0262594f44df0a2aaaf67ad594be72dad54881d4d2452524327,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://45a5b46a879fb0262594f44df0a2aaaf67ad594be72dad54881d4d2452524327 Started:0xc00232d080 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001f318d0} {Name:kube-api-access-ph5fc MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001f318e0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0923 23:39:19.878379   15521 pod_ready.go:82] duration metric: took 11.507967304s for pod "coredns-7c65d6cfc9-fmtkt" in "kube-system" namespace to be "Ready" ...
	E0923 23:39:19.878394   15521 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-fmtkt" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 23:39:19 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 23:39:08 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 23:39:08 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 23:39:08 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 23:39:07 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.29 HostIPs:[{IP:192.168.39.29}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-23 23:39:08 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-23 23:39:12 +0000 UTC,FinishedAt:2024-09-23 23:39:18 +0000 UTC,ContainerID:cri-o://45a5b46a879fb0262594f44df0a2aaaf67ad594be72dad54881d4d2452524327,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://45a5b46a879fb0262594f44df0a2aaaf67ad594be72dad54881d4d2452524327 Started:0xc00232d080 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001f318d0} {Name:kube-api-access-ph5fc MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001f318e0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0923 23:39:19.878408   15521 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-h4m6q" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:19.884151   15521 pod_ready.go:93] pod "coredns-7c65d6cfc9-h4m6q" in "kube-system" namespace has status "Ready":"True"
	I0923 23:39:19.884174   15521 pod_ready.go:82] duration metric: took 5.758861ms for pod "coredns-7c65d6cfc9-h4m6q" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:19.884183   15521 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-823099" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:19.891508   15521 pod_ready.go:93] pod "etcd-addons-823099" in "kube-system" namespace has status "Ready":"True"
	I0923 23:39:19.891551   15521 pod_ready.go:82] duration metric: took 7.346453ms for pod "etcd-addons-823099" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:19.891564   15521 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-823099" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:19.896566   15521 pod_ready.go:93] pod "kube-apiserver-addons-823099" in "kube-system" namespace has status "Ready":"True"
	I0923 23:39:19.896593   15521 pod_ready.go:82] duration metric: took 5.020816ms for pod "kube-apiserver-addons-823099" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:19.896609   15521 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-823099" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:19.912376   15521 pod_ready.go:93] pod "kube-controller-manager-addons-823099" in "kube-system" namespace has status "Ready":"True"
	I0923 23:39:19.912404   15521 pod_ready.go:82] duration metric: took 15.786797ms for pod "kube-controller-manager-addons-823099" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:19.912416   15521 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pgclm" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:19.923485   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:20.095418   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:20.098684   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:20.275218   15521 pod_ready.go:93] pod "kube-proxy-pgclm" in "kube-system" namespace has status "Ready":"True"
	I0923 23:39:20.275250   15521 pod_ready.go:82] duration metric: took 362.825273ms for pod "kube-proxy-pgclm" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:20.275263   15521 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-823099" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:20.332146   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:20.419880   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:20.593710   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:20.593992   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:20.675652   15521 pod_ready.go:93] pod "kube-scheduler-addons-823099" in "kube-system" namespace has status "Ready":"True"
	I0923 23:39:20.675690   15521 pod_ready.go:82] duration metric: took 400.417501ms for pod "kube-scheduler-addons-823099" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:20.675704   15521 pod_ready.go:39] duration metric: took 12.314050106s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
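
The pod_ready.go lines above only count a pod whose Ready condition is True, which is why the coredns pod that finished with phase Succeeded was skipped rather than treated as healthy. A hedged reconstruction of that predicate (not the actual minikube source):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// podIsReady is a hedged reconstruction of the check behind pod_ready.go:
// only a Running pod whose PodReady condition is True counts as "Ready".
func podIsReady(p *corev1.Pod) bool {
	if p.Status.Phase != corev1.PodRunning {
		return false // Pending, Succeeded and Failed pods never satisfy the wait
	}
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Shaped like coredns-7c65d6cfc9-fmtkt above: phase Succeeded, Ready=False.
	p := &corev1.Pod{Status: corev1.PodStatus{
		Phase:      corev1.PodSucceeded,
		Conditions: []corev1.PodCondition{{Type: corev1.PodReady, Status: corev1.ConditionFalse}},
	}}
	fmt.Println(podIsReady(p)) // false, so the waiter moves on to the other coredns pod
}
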
	I0923 23:39:20.675723   15521 api_server.go:52] waiting for apiserver process to appear ...
	I0923 23:39:20.675791   15521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 23:39:20.719710   15521 api_server.go:72] duration metric: took 13.035944288s to wait for apiserver process to appear ...
	I0923 23:39:20.719738   15521 api_server.go:88] waiting for apiserver healthz status ...
	I0923 23:39:20.719761   15521 api_server.go:253] Checking apiserver healthz at https://192.168.39.29:8443/healthz ...
	I0923 23:39:20.724996   15521 api_server.go:279] https://192.168.39.29:8443/healthz returned 200:
	ok
	I0923 23:39:20.726609   15521 api_server.go:141] control plane version: v1.31.1
	I0923 23:39:20.726632   15521 api_server.go:131] duration metric: took 6.887893ms to wait for apiserver health ...
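
The healthz probe above is a plain GET against the API server. A minimal sketch, assuming the default system:public-info-viewer binding that lets anonymous requests reach /healthz, and skipping certificate verification because the cluster CA is self-signed; the address is taken from the log:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Self-signed cluster CA, so certificate verification is skipped in this sketch.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.39.29:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expected: 200 ok, as in the log
}
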
	I0923 23:39:20.726640   15521 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 23:39:20.832687   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:20.879847   15521 system_pods.go:59] 17 kube-system pods found
	I0923 23:39:20.879881   15521 system_pods.go:61] "coredns-7c65d6cfc9-h4m6q" [e5a66fda-ace2-434e-82fb-3d9d66fac49f] Running
	I0923 23:39:20.879892   15521 system_pods.go:61] "csi-hostpath-attacher-0" [ad0efe3a-8c72-46db-9ed8-35a46fba41f1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 23:39:20.879897   15521 system_pods.go:61] "csi-hostpath-resizer-0" [e357dfe7-127b-4f18-90e3-beb7846c05cd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0923 23:39:20.879906   15521 system_pods.go:61] "csi-hostpathplugin-l4gsf" [de45bd42-06e1-4387-ba3f-4d6a477b4823] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 23:39:20.879911   15521 system_pods.go:61] "etcd-addons-823099" [c9add526-f518-4303-b016-3f95bd8c222a] Running
	I0923 23:39:20.879914   15521 system_pods.go:61] "kube-apiserver-addons-823099" [8788c6f4-114f-4c6c-928b-8ca58300c969] Running
	I0923 23:39:20.879918   15521 system_pods.go:61] "kube-controller-manager-addons-823099" [726e0154-67e9-4c92-9bac-b577104b0d12] Running
	I0923 23:39:20.879923   15521 system_pods.go:61] "kube-ingress-dns-minikube" [1194cadb-80b1-4fad-b99a-0afbc0be0b40] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0923 23:39:20.879926   15521 system_pods.go:61] "kube-proxy-pgclm" [3d47a25a-ab05-4197-975a-88bb7e1f9834] Running
	I0923 23:39:20.879929   15521 system_pods.go:61] "kube-scheduler-addons-823099" [193d28ff-87b2-4578-903c-e74dcea5c006] Running
	I0923 23:39:20.879939   15521 system_pods.go:61] "metrics-server-84c5f94fbc-gpzsm" [d5937c63-7f30-477a-a36e-e7e6cb8c64e5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 23:39:20.879951   15521 system_pods.go:61] "nvidia-device-plugin-daemonset-2dqft" [c5e363a8-697b-4396-acf2-c41232b01445] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0923 23:39:20.879957   15521 system_pods.go:61] "registry-66c9cd494c-h5ntb" [67fc5fdd-03ae-44c9-8e43-0042bd142349] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0923 23:39:20.879964   15521 system_pods.go:61] "registry-proxy-dc579" [76bec57d-6868-4098-a291-8c38dda98afc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 23:39:20.879969   15521 system_pods.go:61] "snapshot-controller-56fcc65765-2lpn2" [6ea26c65-7a9a-4d74-af4b-8f23ecc36bab] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 23:39:20.879974   15521 system_pods.go:61] "snapshot-controller-56fcc65765-9mcdf" [bc592ae3-b020-465c-b0e9-c739e2321360] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 23:39:20.879980   15521 system_pods.go:61] "storage-provisioner" [25d0944a-e6b3-429b-bb81-22672fb100bd] Running
	I0923 23:39:20.879986   15521 system_pods.go:74] duration metric: took 153.340922ms to wait for pod list to return data ...
	I0923 23:39:20.879996   15521 default_sa.go:34] waiting for default service account to be created ...
	I0923 23:39:20.918654   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:21.075277   15521 default_sa.go:45] found service account: "default"
	I0923 23:39:21.075308   15521 default_sa.go:55] duration metric: took 195.307316ms for default service account to be created ...
	I0923 23:39:21.075318   15521 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 23:39:21.093994   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:21.094405   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:21.281184   15521 system_pods.go:86] 17 kube-system pods found
	I0923 23:39:21.281221   15521 system_pods.go:89] "coredns-7c65d6cfc9-h4m6q" [e5a66fda-ace2-434e-82fb-3d9d66fac49f] Running
	I0923 23:39:21.281233   15521 system_pods.go:89] "csi-hostpath-attacher-0" [ad0efe3a-8c72-46db-9ed8-35a46fba41f1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 23:39:21.281242   15521 system_pods.go:89] "csi-hostpath-resizer-0" [e357dfe7-127b-4f18-90e3-beb7846c05cd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0923 23:39:21.281258   15521 system_pods.go:89] "csi-hostpathplugin-l4gsf" [de45bd42-06e1-4387-ba3f-4d6a477b4823] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 23:39:21.281268   15521 system_pods.go:89] "etcd-addons-823099" [c9add526-f518-4303-b016-3f95bd8c222a] Running
	I0923 23:39:21.281274   15521 system_pods.go:89] "kube-apiserver-addons-823099" [8788c6f4-114f-4c6c-928b-8ca58300c969] Running
	I0923 23:39:21.281279   15521 system_pods.go:89] "kube-controller-manager-addons-823099" [726e0154-67e9-4c92-9bac-b577104b0d12] Running
	I0923 23:39:21.281288   15521 system_pods.go:89] "kube-ingress-dns-minikube" [1194cadb-80b1-4fad-b99a-0afbc0be0b40] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0923 23:39:21.281293   15521 system_pods.go:89] "kube-proxy-pgclm" [3d47a25a-ab05-4197-975a-88bb7e1f9834] Running
	I0923 23:39:21.281299   15521 system_pods.go:89] "kube-scheduler-addons-823099" [193d28ff-87b2-4578-903c-e74dcea5c006] Running
	I0923 23:39:21.281306   15521 system_pods.go:89] "metrics-server-84c5f94fbc-gpzsm" [d5937c63-7f30-477a-a36e-e7e6cb8c64e5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 23:39:21.281316   15521 system_pods.go:89] "nvidia-device-plugin-daemonset-2dqft" [c5e363a8-697b-4396-acf2-c41232b01445] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0923 23:39:21.281333   15521 system_pods.go:89] "registry-66c9cd494c-h5ntb" [67fc5fdd-03ae-44c9-8e43-0042bd142349] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0923 23:39:21.281341   15521 system_pods.go:89] "registry-proxy-dc579" [76bec57d-6868-4098-a291-8c38dda98afc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 23:39:21.281349   15521 system_pods.go:89] "snapshot-controller-56fcc65765-2lpn2" [6ea26c65-7a9a-4d74-af4b-8f23ecc36bab] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 23:39:21.281358   15521 system_pods.go:89] "snapshot-controller-56fcc65765-9mcdf" [bc592ae3-b020-465c-b0e9-c739e2321360] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 23:39:21.281363   15521 system_pods.go:89] "storage-provisioner" [25d0944a-e6b3-429b-bb81-22672fb100bd] Running
	I0923 23:39:21.281373   15521 system_pods.go:126] duration metric: took 206.049564ms to wait for k8s-apps to be running ...
	I0923 23:39:21.281382   15521 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 23:39:21.281439   15521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 23:39:21.331801   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:21.336577   15521 system_svc.go:56] duration metric: took 55.186723ms WaitForService to wait for kubelet
	I0923 23:39:21.336605   15521 kubeadm.go:582] duration metric: took 13.652846646s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
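
The kubelet check above leans on systemctl's exit status rather than parsing output: with --quiet, is-active exits 0 only when the unit is active. A small sketch of the same idea; the bare unit name kubelet is an assumption here (the log's literal invocation also passes the word service):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet <unit>` prints nothing and exits 0 only when active.
	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}
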
	I0923 23:39:21.336621   15521 node_conditions.go:102] verifying NodePressure condition ...
	I0923 23:39:21.419377   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:21.475488   15521 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 23:39:21.475526   15521 node_conditions.go:123] node cpu capacity is 2
	I0923 23:39:21.475539   15521 node_conditions.go:105] duration metric: took 138.911431ms to run NodePressure ...
	I0923 23:39:21.475552   15521 start.go:241] waiting for startup goroutines ...
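
The ephemeral-storage and CPU figures above come straight from the node's reported capacity. A hedged client-go sketch that reads the same fields; the node name and kubeconfig path are assumptions taken from the log and the default client setup:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "addons-823099", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	// Matches the "node cpu capacity" / "node storage ephemeral capacity" lines above.
	fmt.Println("cpu:", cpu.String(), "ephemeral-storage:", storage.String())
}
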
	I0923 23:39:21.596433   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:21.596900   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:21.832085   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:21.919995   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:22.094469   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:22.094632   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:22.332058   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:22.418713   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:22.593037   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:22.593680   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:22.906061   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:23.007978   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:23.094529   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:23.097114   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:23.332565   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:23.419583   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:23.593672   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:23.593683   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:23.838655   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:23.940369   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:24.094234   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:24.094445   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:24.332440   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:24.419984   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:24.594437   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:24.594618   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:24.832486   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:24.919747   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:25.093182   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:25.093674   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:25.333709   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:25.418934   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:25.593328   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:25.593509   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:25.833795   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:25.919508   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:26.095779   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:26.096176   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:26.332478   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:26.420244   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:26.592803   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:26.592852   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:26.832139   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:26.919522   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:27.093698   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:27.094342   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:27.332730   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:27.419502   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:27.593345   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:27.593632   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:27.831834   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:27.921584   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:28.096645   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:28.097094   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:28.332417   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:28.420270   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:28.593381   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:28.594222   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:28.832460   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:28.920981   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:29.094116   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:29.095338   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:29.332575   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:29.418135   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:29.592957   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:29.593378   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:29.832141   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:29.919193   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:30.094376   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:30.094610   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:30.331854   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:30.418982   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:30.631569   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:30.632124   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:30.831219   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:30.920259   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:31.093449   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:31.093941   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:31.331877   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:31.420541   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:31.593048   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:31.593342   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:31.832378   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:31.920762   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:32.098506   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:32.099810   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:32.332194   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:32.420510   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:32.593182   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:32.594918   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:32.832529   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:32.918771   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:33.093326   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:33.094439   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:33.333534   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:33.419199   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:33.592859   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:33.593822   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:33.832270   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:33.919972   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:34.093090   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:34.093582   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:34.332317   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:34.419955   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:34.593634   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:34.593974   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:34.831974   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:34.919981   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:35.095441   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:35.095574   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:35.332597   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:35.419105   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:35.597103   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:35.598610   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:35.832611   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:35.918515   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:36.096274   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:36.096962   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:36.332610   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:36.418275   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:36.593642   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:36.593746   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:36.831957   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:36.918919   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:37.092996   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:37.094759   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:37.332016   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:37.419671   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:37.593331   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:37.595578   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:37.834102   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:37.920878   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:38.094370   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:38.095095   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:38.331397   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:38.419908   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:38.593717   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:38.594107   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:38.832074   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:38.919327   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:39.100170   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:39.105269   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:39.332638   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:39.420123   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:39.593249   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:39.593947   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:39.832313   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:39.934720   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:40.101376   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:40.101425   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:40.333365   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:40.420009   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:40.594942   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:40.595025   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:40.833104   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:40.934806   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:41.096251   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:41.096260   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:41.332277   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:41.419410   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:41.592946   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:41.593974   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:41.832170   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:41.919227   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:42.097743   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:42.098213   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:42.332232   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:42.419177   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:42.593758   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:42.593875   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:42.832085   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:42.919621   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:43.094464   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:43.095025   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:43.333021   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:43.419417   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:43.593281   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:43.594091   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:43.833444   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:43.920229   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:44.094691   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:44.096056   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:44.333071   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:44.418650   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:44.593421   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:44.594195   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:44.831531   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:44.920239   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:45.093437   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:45.095439   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:45.332168   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:45.419471   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:45.593901   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:45.594317   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:45.831984   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:45.919515   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:46.094625   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:46.094773   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:46.331386   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:46.419464   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:46.592656   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:46.592778   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:47.151142   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:47.153387   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:47.154491   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:47.154846   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:47.332656   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:47.418895   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:47.592742   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:47.593598   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:47.832577   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:47.918632   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:48.094668   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:48.094918   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:48.332151   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:48.419591   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:48.592271   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:48.593354   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:48.832266   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:48.918810   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:49.094750   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:49.094891   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:49.331944   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:49.419208   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:49.592843   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:49.593229   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:49.832432   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:49.920038   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:50.102686   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:50.104285   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:50.332178   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:50.420344   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:50.593984   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:50.594056   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:50.831923   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:50.918641   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:51.095025   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:51.096939   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:51.332546   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:51.419516   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:51.592980   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:51.594380   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:51.832001   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:51.921419   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:52.101749   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:52.102309   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:52.332228   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:52.419595   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:52.593016   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:52.593128   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:52.832003   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:52.919630   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:53.094969   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:53.095135   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:53.331766   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:53.418814   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:53.593958   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:53.594088   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:53.832408   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:53.919175   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:54.098190   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:54.098600   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:54.332298   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:54.420609   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:54.592767   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:54.593349   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:54.832382   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:54.920230   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:55.094591   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:55.094839   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:55.332431   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:55.433787   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:55.593168   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:55.593371   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:55.832283   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:55.919461   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:56.093372   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:56.093870   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:56.331722   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:56.418785   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:56.594030   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:56.594601   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:56.833680   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:56.918880   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:57.096144   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:57.096359   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:57.332149   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:57.418862   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:57.593466   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:57.593899   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:57.832901   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:57.919069   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:58.097832   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:58.098492   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:58.331809   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:58.419172   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:58.594374   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:58.594557   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:58.832190   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:58.919483   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:59.095468   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:59.095749   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:59.332135   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:59.419091   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:59.593927   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:59.594515   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:59.831815   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:59.919106   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:00.512087   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:40:00.512527   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:00.512554   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:00.513598   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:00.593901   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:00.595207   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:40:00.834143   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:00.941222   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:01.095958   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:40:01.097955   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:01.332030   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:01.420181   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:01.593185   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:40:01.593891   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:01.832201   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:01.919404   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:02.094442   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:40:02.094695   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:02.332203   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:02.419407   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:02.592715   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:40:02.592806   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:02.831864   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:02.919302   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:03.093356   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:40:03.095261   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:03.331951   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:03.419462   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:03.593257   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:40:03.594217   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:04.004211   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:04.007581   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:04.094485   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:04.096445   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:40:04.332624   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:04.418492   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:04.601985   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:04.615874   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:40:04.833660   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:04.918788   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:05.092856   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:40:05.092889   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:05.331911   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:05.419042   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:05.592983   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:40:05.593592   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:05.832164   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:05.930850   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:06.095313   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:06.095850   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:40:06.332770   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:06.419623   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:06.595241   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:40:06.598108   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:06.831586   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:06.923862   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:07.094981   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:40:07.095013   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:07.332001   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:07.419422   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:07.592356   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:40:07.592854   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:07.832579   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:07.921160   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:08.093155   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:40:08.093461   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:08.332206   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:08.420123   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:08.594084   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:40:08.594501   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:08.832833   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:08.918969   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:09.095290   15521 kapi.go:107] duration metric: took 54.006756194s to wait for kubernetes.io/minikube-addons=registry ...
	I0923 23:40:09.096731   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:09.331593   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:09.419268   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:09.593290   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:09.832184   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:09.919379   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:10.206829   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:10.332592   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:10.418826   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:10.597305   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:10.833495   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:10.936556   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:11.093468   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:11.331762   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:11.419043   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:11.593818   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:11.831965   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:11.919356   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:12.095949   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:12.332439   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:12.419717   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:12.593847   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:12.833772   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:12.936727   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:13.095359   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:13.332979   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:13.434589   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:13.593982   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:13.833463   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:13.921413   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:14.107863   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:14.331881   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:14.418472   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:14.592625   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:14.832074   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:14.919102   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:15.151319   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:15.331731   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:15.418730   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:15.592769   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:15.832559   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:15.919783   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:16.094071   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:16.332982   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:16.420635   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:16.596117   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:16.832581   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:16.918622   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:17.094831   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:17.331470   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:17.419656   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:17.594098   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:17.832476   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:17.918799   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:18.289234   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:18.332999   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:18.419337   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:18.593958   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:18.831972   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:18.918707   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:19.093792   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:19.332292   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:19.420611   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:19.593588   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:19.831910   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:19.918861   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:20.093950   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:20.332717   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:20.436822   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:20.595463   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:20.832311   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:20.935013   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:21.096203   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:21.331541   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:21.422657   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:21.598324   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:21.831455   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:21.919629   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:22.096231   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:22.331596   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:22.418599   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:22.609832   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:22.833773   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:22.935924   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:23.096601   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:23.340106   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:23.427732   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:23.594048   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:23.832622   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:23.919229   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:24.093122   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:24.331790   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:24.418786   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:24.593043   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:24.833183   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:24.918861   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:25.094139   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:25.334542   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:25.576086   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:25.593252   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:25.832880   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:25.918530   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:26.092931   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:26.332596   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:26.419989   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:26.594948   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:26.932785   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:26.935292   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:27.093377   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:27.332423   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:27.421072   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:27.593187   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:27.832254   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:27.919838   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:28.093230   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:28.392143   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:28.687547   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:28.689317   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:28.832925   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:28.918921   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:29.100236   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:29.332915   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:29.420261   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:29.600887   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:29.833156   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:29.920177   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:30.093272   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:30.331488   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:30.418456   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:30.592224   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:30.832145   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:30.943704   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:31.134913   15521 kapi.go:107] duration metric: took 1m16.046381203s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0923 23:40:31.332777   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:31.418878   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:31.831745   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:31.933578   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:32.332878   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:32.418865   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:32.831981   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:32.919636   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:33.331958   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:33.433535   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:33.834818   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:34.031559   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:34.332506   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:34.419243   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:34.832458   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:34.919551   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:35.332538   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:35.419333   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:35.831854   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:35.919140   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:36.332139   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:36.419385   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:36.831428   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:36.933407   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:37.332127   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:37.419248   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:37.834890   15521 kapi.go:107] duration metric: took 1m19.006594431s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0923 23:40:37.837227   15521 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-823099 cluster.
	I0923 23:40:37.838804   15521 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0923 23:40:37.840390   15521 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0923 23:40:37.936294   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:38.419888   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:38.918688   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:39.419929   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:39.918705   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:40.419944   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:40.919268   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:41.418798   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:41.920203   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:42.418923   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:42.920850   15521 kapi.go:107] duration metric: took 1m25.506584753s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0923 23:40:42.922731   15521 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner, ingress-dns, storage-provisioner-rancher, cloud-spanner, metrics-server, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0923 23:40:42.924695   15521 addons.go:510] duration metric: took 1m35.240916092s for enable addons: enabled=[nvidia-device-plugin storage-provisioner ingress-dns storage-provisioner-rancher cloud-spanner metrics-server yakd default-storageclass inspektor-gadget volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0923 23:40:42.924745   15521 start.go:246] waiting for cluster config update ...
	I0923 23:40:42.924763   15521 start.go:255] writing updated cluster config ...
	I0923 23:40:42.925016   15521 ssh_runner.go:195] Run: rm -f paused
	I0923 23:40:42.977325   15521 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0923 23:40:42.979331   15521 out.go:177] * Done! kubectl is now configured to use "addons-823099" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 23 23:52:34 addons-823099 crio[662]: time="2024-09-23 23:52:34.537986754Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727135554537960334,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=363f51d3-db10-4cf5-9487-abf2b2435ecc name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 23:52:34 addons-823099 crio[662]: time="2024-09-23 23:52:34.538877588Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=beff69d2-8d28-41fb-81e0-6da265e499a6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 23:52:34 addons-823099 crio[662]: time="2024-09-23 23:52:34.539063073Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=beff69d2-8d28-41fb-81e0-6da265e499a6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 23:52:34 addons-823099 crio[662]: time="2024-09-23 23:52:34.539588832Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b4df081e2b365aec49d7e1931e92668e6875c967edba7adbd10a2137cb5bc085,PodSandboxId:cba63186abb30b45d3845c0acb4d0f223862ab132664b6cd5e08a285c8e52407,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1727135546997701995,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-cpzkz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ae9e1b8b-5470-4765-a8d1-7e21fa0eb9b0,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a477f91b06dc39c8aac1f0ceaf25be2f3cdd1467c593f76989667dd176147158,PodSandboxId:000b770e9460eb1f6cbc53493e042e7be24fb373949ac32f0a6e1497455d4304,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727135405458555145,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4679ef89-e297-4f54-bf30-b685a88ec238,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74b1f1c0ea595ad9a254db104eeae56801bee662d3a36f586d4eadc290bd61ab,PodSandboxId:ef699a0a58d26bbb175080a9d5d1552d3ca4ad0ef72d3b7f2f3f042548a8de86,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727134836524114147,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-5p9gw,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: a4541728-f355-433e-92a7-e435eb2600c2,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78216b74033c5ca95888e4a1fbb6bd5b02dd521f016c148a02ec8c90a1893cad,PodSandboxId:1d959a02614bbf4f31848f8b11efc2ce39b4661ef1b1324246f1861a3c26b880,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727134810798062438,Labels:map[string]string{io.kubernetes.container.name: patch,io.
kubernetes.pod.name: ingress-nginx-admission-patch-2wnc4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 64ca1527-9535-42ac-98cf-e6f4a1e27173,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dd0ccb8e4dd85d55c486f11d6fe984cf5dc4d2303c695bc5b05525770f500b2,PodSandboxId:4b432a29341336cef179e9c1e5957e66882b4dded3c4ddc68ba3121a12d6b86c,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727134810639287949,Labels:map[string]string{io.
kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-t6hw4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 45b6e4cc-f5cc-4955-9bdb-d0275d9f6354,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6c45d33f3679488d83d327a8f47c1bfb699c4b85d227cedef6b502629f4c13,PodSandboxId:c24a2665a62ab69af77896e4f6cdfa80944931f16aa279c745fac778bf371209,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:172713
4779803969605,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-gpzsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5937c63-7f30-477a-a36e-e7e6cb8c64e5,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9490eb926210d595e48349ae8ba44feb029a56e6c83d0e8f8cfad8e8c1d9196b,PodSandboxId:8d3fbd5782869ef1bd266d8984a9cbedcd8fab60b6229f2ab72750e7e22e081e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628d
b3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727134755012677707,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25d0944a-e6b3-429b-bb81-22672fb100bd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fec583ae4c3fad78cd32df65311f48a1cd55dcc8d1d6b99f649cd4ca93893de,PodSandboxId:743b6ef05346bc2b74363f050e3be9e406acedab4e81d88a5b62118373703ea4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724
c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727134751054708895,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-h4m6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5a66fda-ace2-434e-82fb-3d9d66fac49f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a92c92c6afdd44516af6d2f0c2ba0c60c100397592f176560d683b0e5c58bbd,PodSandboxId:e2cf37b2ed9608a016a28531c7475e72b8a57c4abd9862b68e3c5c2777ad76ed,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image
:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727134749291914697,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pgclm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d47a25a-ab05-4197-975a-88bb7e1f9834,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:474072cb31ae52ea361c41a97e7a53faf47c3b8ab138749903f3d96750c6fbe2,PodSandboxId:4c45c732428c2d481624384e0b5a0d5cc14eeb3539e67aa0282e15d808a2d141,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1
001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727134736753032764,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-823099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0522b4889e5d09bd02bded87708cffa,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f68819f7bf59d41865dee2cade7e270c9133c2249756217428544bee43d41ba6,PodSandboxId:9992b2a049a9e5db7c453409b74739e9d45cb2ddc1916561d617bc92ca4abc8d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae9
5904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727134736756693447,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-823099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd9c00eb951fdfb5b859f5c493b5daeb,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f9a68d35a007d5e1022596b9270e5f5f9735806aa3bfb8c01b9c7eca1ee01d7,PodSandboxId:6600118fb556ee2332595b87d6131714a2992ff33108a3d8ff1ede5fa6031a1a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2
ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727134736749151930,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-823099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ed7b82912b8c176021821ce705d70e9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61a194a33123eba4aa22b6f557d4ea66df750535623ed92cd3efa6db3df98960,PodSandboxId:858af16c1b9748a0a50df5d32921302b8034b3b19aa9b08a44e91402f5f24332,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,
Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727134736741313468,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-823099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 448360a30c028a8b320f55cec49cc907,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=beff69d2-8d28-41fb-81e0-6da265e499a6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 23:52:34 addons-823099 crio[662]: time="2024-09-23 23:52:34.577494656Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=35ebde41-5432-4807-be40-053ff23a3e9f name=/runtime.v1.RuntimeService/Version
	Sep 23 23:52:34 addons-823099 crio[662]: time="2024-09-23 23:52:34.577601262Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=35ebde41-5432-4807-be40-053ff23a3e9f name=/runtime.v1.RuntimeService/Version
	Sep 23 23:52:34 addons-823099 crio[662]: time="2024-09-23 23:52:34.578706858Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c36a71ce-2b73-4c41-abcf-b72bf22975ea name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 23:52:34 addons-823099 crio[662]: time="2024-09-23 23:52:34.579940047Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727135554579907900,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c36a71ce-2b73-4c41-abcf-b72bf22975ea name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 23:52:34 addons-823099 crio[662]: time="2024-09-23 23:52:34.580834764Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=950694c5-04bc-46c9-a7b9-855a66f989dc name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 23:52:34 addons-823099 crio[662]: time="2024-09-23 23:52:34.580908388Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=950694c5-04bc-46c9-a7b9-855a66f989dc name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 23:52:34 addons-823099 crio[662]: time="2024-09-23 23:52:34.581186662Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b4df081e2b365aec49d7e1931e92668e6875c967edba7adbd10a2137cb5bc085,PodSandboxId:cba63186abb30b45d3845c0acb4d0f223862ab132664b6cd5e08a285c8e52407,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1727135546997701995,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-cpzkz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ae9e1b8b-5470-4765-a8d1-7e21fa0eb9b0,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a477f91b06dc39c8aac1f0ceaf25be2f3cdd1467c593f76989667dd176147158,PodSandboxId:000b770e9460eb1f6cbc53493e042e7be24fb373949ac32f0a6e1497455d4304,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727135405458555145,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4679ef89-e297-4f54-bf30-b685a88ec238,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74b1f1c0ea595ad9a254db104eeae56801bee662d3a36f586d4eadc290bd61ab,PodSandboxId:ef699a0a58d26bbb175080a9d5d1552d3ca4ad0ef72d3b7f2f3f042548a8de86,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727134836524114147,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-5p9gw,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: a4541728-f355-433e-92a7-e435eb2600c2,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78216b74033c5ca95888e4a1fbb6bd5b02dd521f016c148a02ec8c90a1893cad,PodSandboxId:1d959a02614bbf4f31848f8b11efc2ce39b4661ef1b1324246f1861a3c26b880,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727134810798062438,Labels:map[string]string{io.kubernetes.container.name: patch,io.
kubernetes.pod.name: ingress-nginx-admission-patch-2wnc4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 64ca1527-9535-42ac-98cf-e6f4a1e27173,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dd0ccb8e4dd85d55c486f11d6fe984cf5dc4d2303c695bc5b05525770f500b2,PodSandboxId:4b432a29341336cef179e9c1e5957e66882b4dded3c4ddc68ba3121a12d6b86c,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727134810639287949,Labels:map[string]string{io.
kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-t6hw4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 45b6e4cc-f5cc-4955-9bdb-d0275d9f6354,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6c45d33f3679488d83d327a8f47c1bfb699c4b85d227cedef6b502629f4c13,PodSandboxId:c24a2665a62ab69af77896e4f6cdfa80944931f16aa279c745fac778bf371209,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:172713
4779803969605,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-gpzsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5937c63-7f30-477a-a36e-e7e6cb8c64e5,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9490eb926210d595e48349ae8ba44feb029a56e6c83d0e8f8cfad8e8c1d9196b,PodSandboxId:8d3fbd5782869ef1bd266d8984a9cbedcd8fab60b6229f2ab72750e7e22e081e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628d
b3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727134755012677707,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25d0944a-e6b3-429b-bb81-22672fb100bd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fec583ae4c3fad78cd32df65311f48a1cd55dcc8d1d6b99f649cd4ca93893de,PodSandboxId:743b6ef05346bc2b74363f050e3be9e406acedab4e81d88a5b62118373703ea4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724
c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727134751054708895,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-h4m6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5a66fda-ace2-434e-82fb-3d9d66fac49f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a92c92c6afdd44516af6d2f0c2ba0c60c100397592f176560d683b0e5c58bbd,PodSandboxId:e2cf37b2ed9608a016a28531c7475e72b8a57c4abd9862b68e3c5c2777ad76ed,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image
:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727134749291914697,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pgclm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d47a25a-ab05-4197-975a-88bb7e1f9834,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:474072cb31ae52ea361c41a97e7a53faf47c3b8ab138749903f3d96750c6fbe2,PodSandboxId:4c45c732428c2d481624384e0b5a0d5cc14eeb3539e67aa0282e15d808a2d141,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1
001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727134736753032764,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-823099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0522b4889e5d09bd02bded87708cffa,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f68819f7bf59d41865dee2cade7e270c9133c2249756217428544bee43d41ba6,PodSandboxId:9992b2a049a9e5db7c453409b74739e9d45cb2ddc1916561d617bc92ca4abc8d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae9
5904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727134736756693447,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-823099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd9c00eb951fdfb5b859f5c493b5daeb,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f9a68d35a007d5e1022596b9270e5f5f9735806aa3bfb8c01b9c7eca1ee01d7,PodSandboxId:6600118fb556ee2332595b87d6131714a2992ff33108a3d8ff1ede5fa6031a1a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2
ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727134736749151930,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-823099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ed7b82912b8c176021821ce705d70e9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61a194a33123eba4aa22b6f557d4ea66df750535623ed92cd3efa6db3df98960,PodSandboxId:858af16c1b9748a0a50df5d32921302b8034b3b19aa9b08a44e91402f5f24332,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,
Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727134736741313468,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-823099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 448360a30c028a8b320f55cec49cc907,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=950694c5-04bc-46c9-a7b9-855a66f989dc name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 23:52:34 addons-823099 crio[662]: time="2024-09-23 23:52:34.620250213Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7b2ec44b-41f7-40cd-b728-41ec22c57c87 name=/runtime.v1.RuntimeService/Version
	Sep 23 23:52:34 addons-823099 crio[662]: time="2024-09-23 23:52:34.620367000Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7b2ec44b-41f7-40cd-b728-41ec22c57c87 name=/runtime.v1.RuntimeService/Version
	Sep 23 23:52:34 addons-823099 crio[662]: time="2024-09-23 23:52:34.621967591Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=042066dc-f80e-4a94-a71f-c8c9dc8ae718 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 23:52:34 addons-823099 crio[662]: time="2024-09-23 23:52:34.623078307Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727135554623051957,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=042066dc-f80e-4a94-a71f-c8c9dc8ae718 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 23:52:34 addons-823099 crio[662]: time="2024-09-23 23:52:34.623522294Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cffac9d4-669f-4763-a8f5-448db4b5fe05 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 23:52:34 addons-823099 crio[662]: time="2024-09-23 23:52:34.623578582Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cffac9d4-669f-4763-a8f5-448db4b5fe05 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 23:52:34 addons-823099 crio[662]: time="2024-09-23 23:52:34.623911012Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b4df081e2b365aec49d7e1931e92668e6875c967edba7adbd10a2137cb5bc085,PodSandboxId:cba63186abb30b45d3845c0acb4d0f223862ab132664b6cd5e08a285c8e52407,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1727135546997701995,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-cpzkz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ae9e1b8b-5470-4765-a8d1-7e21fa0eb9b0,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a477f91b06dc39c8aac1f0ceaf25be2f3cdd1467c593f76989667dd176147158,PodSandboxId:000b770e9460eb1f6cbc53493e042e7be24fb373949ac32f0a6e1497455d4304,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727135405458555145,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4679ef89-e297-4f54-bf30-b685a88ec238,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74b1f1c0ea595ad9a254db104eeae56801bee662d3a36f586d4eadc290bd61ab,PodSandboxId:ef699a0a58d26bbb175080a9d5d1552d3ca4ad0ef72d3b7f2f3f042548a8de86,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727134836524114147,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-5p9gw,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: a4541728-f355-433e-92a7-e435eb2600c2,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78216b74033c5ca95888e4a1fbb6bd5b02dd521f016c148a02ec8c90a1893cad,PodSandboxId:1d959a02614bbf4f31848f8b11efc2ce39b4661ef1b1324246f1861a3c26b880,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727134810798062438,Labels:map[string]string{io.kubernetes.container.name: patch,io.
kubernetes.pod.name: ingress-nginx-admission-patch-2wnc4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 64ca1527-9535-42ac-98cf-e6f4a1e27173,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dd0ccb8e4dd85d55c486f11d6fe984cf5dc4d2303c695bc5b05525770f500b2,PodSandboxId:4b432a29341336cef179e9c1e5957e66882b4dded3c4ddc68ba3121a12d6b86c,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727134810639287949,Labels:map[string]string{io.
kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-t6hw4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 45b6e4cc-f5cc-4955-9bdb-d0275d9f6354,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6c45d33f3679488d83d327a8f47c1bfb699c4b85d227cedef6b502629f4c13,PodSandboxId:c24a2665a62ab69af77896e4f6cdfa80944931f16aa279c745fac778bf371209,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:172713
4779803969605,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-gpzsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5937c63-7f30-477a-a36e-e7e6cb8c64e5,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9490eb926210d595e48349ae8ba44feb029a56e6c83d0e8f8cfad8e8c1d9196b,PodSandboxId:8d3fbd5782869ef1bd266d8984a9cbedcd8fab60b6229f2ab72750e7e22e081e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628d
b3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727134755012677707,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25d0944a-e6b3-429b-bb81-22672fb100bd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fec583ae4c3fad78cd32df65311f48a1cd55dcc8d1d6b99f649cd4ca93893de,PodSandboxId:743b6ef05346bc2b74363f050e3be9e406acedab4e81d88a5b62118373703ea4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724
c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727134751054708895,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-h4m6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5a66fda-ace2-434e-82fb-3d9d66fac49f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a92c92c6afdd44516af6d2f0c2ba0c60c100397592f176560d683b0e5c58bbd,PodSandboxId:e2cf37b2ed9608a016a28531c7475e72b8a57c4abd9862b68e3c5c2777ad76ed,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image
:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727134749291914697,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pgclm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d47a25a-ab05-4197-975a-88bb7e1f9834,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:474072cb31ae52ea361c41a97e7a53faf47c3b8ab138749903f3d96750c6fbe2,PodSandboxId:4c45c732428c2d481624384e0b5a0d5cc14eeb3539e67aa0282e15d808a2d141,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1
001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727134736753032764,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-823099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0522b4889e5d09bd02bded87708cffa,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f68819f7bf59d41865dee2cade7e270c9133c2249756217428544bee43d41ba6,PodSandboxId:9992b2a049a9e5db7c453409b74739e9d45cb2ddc1916561d617bc92ca4abc8d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae9
5904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727134736756693447,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-823099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd9c00eb951fdfb5b859f5c493b5daeb,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f9a68d35a007d5e1022596b9270e5f5f9735806aa3bfb8c01b9c7eca1ee01d7,PodSandboxId:6600118fb556ee2332595b87d6131714a2992ff33108a3d8ff1ede5fa6031a1a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2
ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727134736749151930,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-823099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ed7b82912b8c176021821ce705d70e9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61a194a33123eba4aa22b6f557d4ea66df750535623ed92cd3efa6db3df98960,PodSandboxId:858af16c1b9748a0a50df5d32921302b8034b3b19aa9b08a44e91402f5f24332,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,
Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727134736741313468,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-823099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 448360a30c028a8b320f55cec49cc907,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cffac9d4-669f-4763-a8f5-448db4b5fe05 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 23:52:34 addons-823099 crio[662]: time="2024-09-23 23:52:34.661886512Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bf81371e-34cc-48a1-848b-dea2d2628a7e name=/runtime.v1.RuntimeService/Version
	Sep 23 23:52:34 addons-823099 crio[662]: time="2024-09-23 23:52:34.661972360Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bf81371e-34cc-48a1-848b-dea2d2628a7e name=/runtime.v1.RuntimeService/Version
	Sep 23 23:52:34 addons-823099 crio[662]: time="2024-09-23 23:52:34.663366986Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=57277756-9402-4026-a5b3-59b2700113b0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 23:52:34 addons-823099 crio[662]: time="2024-09-23 23:52:34.664974712Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727135554664943188,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=57277756-9402-4026-a5b3-59b2700113b0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 23:52:34 addons-823099 crio[662]: time="2024-09-23 23:52:34.665540132Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b8ca5959-769b-4689-aa81-001f7646bb86 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 23:52:34 addons-823099 crio[662]: time="2024-09-23 23:52:34.665605990Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b8ca5959-769b-4689-aa81-001f7646bb86 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 23:52:34 addons-823099 crio[662]: time="2024-09-23 23:52:34.665954494Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b4df081e2b365aec49d7e1931e92668e6875c967edba7adbd10a2137cb5bc085,PodSandboxId:cba63186abb30b45d3845c0acb4d0f223862ab132664b6cd5e08a285c8e52407,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1727135546997701995,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-cpzkz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ae9e1b8b-5470-4765-a8d1-7e21fa0eb9b0,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a477f91b06dc39c8aac1f0ceaf25be2f3cdd1467c593f76989667dd176147158,PodSandboxId:000b770e9460eb1f6cbc53493e042e7be24fb373949ac32f0a6e1497455d4304,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727135405458555145,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4679ef89-e297-4f54-bf30-b685a88ec238,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74b1f1c0ea595ad9a254db104eeae56801bee662d3a36f586d4eadc290bd61ab,PodSandboxId:ef699a0a58d26bbb175080a9d5d1552d3ca4ad0ef72d3b7f2f3f042548a8de86,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727134836524114147,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-5p9gw,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: a4541728-f355-433e-92a7-e435eb2600c2,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78216b74033c5ca95888e4a1fbb6bd5b02dd521f016c148a02ec8c90a1893cad,PodSandboxId:1d959a02614bbf4f31848f8b11efc2ce39b4661ef1b1324246f1861a3c26b880,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727134810798062438,Labels:map[string]string{io.kubernetes.container.name: patch,io.
kubernetes.pod.name: ingress-nginx-admission-patch-2wnc4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 64ca1527-9535-42ac-98cf-e6f4a1e27173,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dd0ccb8e4dd85d55c486f11d6fe984cf5dc4d2303c695bc5b05525770f500b2,PodSandboxId:4b432a29341336cef179e9c1e5957e66882b4dded3c4ddc68ba3121a12d6b86c,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727134810639287949,Labels:map[string]string{io.
kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-t6hw4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 45b6e4cc-f5cc-4955-9bdb-d0275d9f6354,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6c45d33f3679488d83d327a8f47c1bfb699c4b85d227cedef6b502629f4c13,PodSandboxId:c24a2665a62ab69af77896e4f6cdfa80944931f16aa279c745fac778bf371209,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:172713
4779803969605,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-gpzsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5937c63-7f30-477a-a36e-e7e6cb8c64e5,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9490eb926210d595e48349ae8ba44feb029a56e6c83d0e8f8cfad8e8c1d9196b,PodSandboxId:8d3fbd5782869ef1bd266d8984a9cbedcd8fab60b6229f2ab72750e7e22e081e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628d
b3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727134755012677707,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25d0944a-e6b3-429b-bb81-22672fb100bd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fec583ae4c3fad78cd32df65311f48a1cd55dcc8d1d6b99f649cd4ca93893de,PodSandboxId:743b6ef05346bc2b74363f050e3be9e406acedab4e81d88a5b62118373703ea4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724
c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727134751054708895,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-h4m6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5a66fda-ace2-434e-82fb-3d9d66fac49f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a92c92c6afdd44516af6d2f0c2ba0c60c100397592f176560d683b0e5c58bbd,PodSandboxId:e2cf37b2ed9608a016a28531c7475e72b8a57c4abd9862b68e3c5c2777ad76ed,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image
:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727134749291914697,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pgclm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d47a25a-ab05-4197-975a-88bb7e1f9834,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:474072cb31ae52ea361c41a97e7a53faf47c3b8ab138749903f3d96750c6fbe2,PodSandboxId:4c45c732428c2d481624384e0b5a0d5cc14eeb3539e67aa0282e15d808a2d141,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1
001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727134736753032764,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-823099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0522b4889e5d09bd02bded87708cffa,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f68819f7bf59d41865dee2cade7e270c9133c2249756217428544bee43d41ba6,PodSandboxId:9992b2a049a9e5db7c453409b74739e9d45cb2ddc1916561d617bc92ca4abc8d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae9
5904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727134736756693447,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-823099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd9c00eb951fdfb5b859f5c493b5daeb,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f9a68d35a007d5e1022596b9270e5f5f9735806aa3bfb8c01b9c7eca1ee01d7,PodSandboxId:6600118fb556ee2332595b87d6131714a2992ff33108a3d8ff1ede5fa6031a1a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2
ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727134736749151930,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-823099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ed7b82912b8c176021821ce705d70e9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61a194a33123eba4aa22b6f557d4ea66df750535623ed92cd3efa6db3df98960,PodSandboxId:858af16c1b9748a0a50df5d32921302b8034b3b19aa9b08a44e91402f5f24332,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,
Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727134736741313468,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-823099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 448360a30c028a8b320f55cec49cc907,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b8ca5959-769b-4689-aa81-001f7646bb86 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b4df081e2b365       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        7 seconds ago       Running             hello-world-app           0                   cba63186abb30       hello-world-app-55bf9c44b4-cpzkz
	a477f91b06dc3       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                              2 minutes ago       Running             nginx                     0                   000b770e9460e       nginx
	74b1f1c0ea595       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 11 minutes ago      Running             gcp-auth                  0                   ef699a0a58d26       gcp-auth-89d5ffd79-5p9gw
	78216b74033c5       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   12 minutes ago      Exited              patch                     0                   1d959a02614bb       ingress-nginx-admission-patch-2wnc4
	2dd0ccb8e4dd8       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   12 minutes ago      Exited              create                    0                   4b432a2934133       ingress-nginx-admission-create-t6hw4
	ad6c45d33f367       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        12 minutes ago      Running             metrics-server            0                   c24a2665a62ab       metrics-server-84c5f94fbc-gpzsm
	9490eb926210d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             13 minutes ago      Running             storage-provisioner       0                   8d3fbd5782869       storage-provisioner
	4fec583ae4c3f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             13 minutes ago      Running             coredns                   0                   743b6ef05346b       coredns-7c65d6cfc9-h4m6q
	8a92c92c6afdd       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                             13 minutes ago      Running             kube-proxy                0                   e2cf37b2ed960       kube-proxy-pgclm
	f68819f7bf59d       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                             13 minutes ago      Running             kube-controller-manager   0                   9992b2a049a9e       kube-controller-manager-addons-823099
	474072cb31ae5       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                             13 minutes ago      Running             kube-apiserver            0                   4c45c732428c2       kube-apiserver-addons-823099
	9f9a68d35a007       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             13 minutes ago      Running             etcd                      0                   6600118fb556e       etcd-addons-823099
	61a194a33123e       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                             13 minutes ago      Running             kube-scheduler            0                   858af16c1b974       kube-scheduler-addons-823099
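	The table above is the CRI's view of the node at the time of the failure. As a rough sketch only, assuming SSH access to the addons-823099 VM and that crictl is on its PATH (as it normally is on stock minikube node images), roughly the same listing can be reproduced with:
	
	    # list all containers, running and exited, through the CRI socket on the node
	    minikube -p addons-823099 ssh -- sudo crictl ps -a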
	
	
	==> coredns [4fec583ae4c3fad78cd32df65311f48a1cd55dcc8d1d6b99f649cd4ca93893de] <==
	[INFO] 10.244.0.5:51161 - 56746 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000030131s
	[INFO] 10.244.0.5:59845 - 51818 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000171416s
	[INFO] 10.244.0.5:59845 - 28005 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00029495s
	[INFO] 10.244.0.5:48681 - 19317 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000123377s
	[INFO] 10.244.0.5:48681 - 63336 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000044081s
	[INFO] 10.244.0.5:58061 - 30895 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00008088s
	[INFO] 10.244.0.5:58061 - 32689 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000035396s
	[INFO] 10.244.0.5:38087 - 48114 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000035784s
	[INFO] 10.244.0.5:38087 - 54000 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000095969s
	[INFO] 10.244.0.5:49683 - 11480 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000140959s
	[INFO] 10.244.0.5:49683 - 23003 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000101135s
	[INFO] 10.244.0.5:43005 - 38126 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000081593s
	[INFO] 10.244.0.5:43005 - 47596 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000124387s
	[INFO] 10.244.0.5:55804 - 41138 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000171789s
	[INFO] 10.244.0.5:55804 - 44976 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000182833s
	[INFO] 10.244.0.5:43069 - 16307 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000089434s
	[INFO] 10.244.0.5:43069 - 51633 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000032833s
	[INFO] 10.244.0.21:46303 - 62968 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000606757s
	[INFO] 10.244.0.21:36097 - 35733 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000696905s
	[INFO] 10.244.0.21:56566 - 45315 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000136557s
	[INFO] 10.244.0.21:57939 - 56430 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000207858s
	[INFO] 10.244.0.21:51280 - 40828 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00010631s
	[INFO] 10.244.0.21:50116 - 49864 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000098666s
	[INFO] 10.244.0.21:45441 - 35920 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001078608s
	[INFO] 10.244.0.21:48980 - 17136 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.00159345s
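	The NXDOMAIN lines above are expected rather than a fault: with the default ndots:5 pod resolv.conf, a short name such as registry.kube-system is first tried against every search suffix (kube-system.svc.cluster.local, svc.cluster.local, cluster.local) before the fully qualified name answers NOERROR, which is exactly the A/AAAA pattern logged here. A quick spot-check, assuming the default/busybox pod listed under "describe nodes" below is still running and its image ships nslookup:
	
	    kubectl --context addons-823099 exec busybox -- nslookup registry.kube-system.svc.cluster.local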
	
	
	==> describe nodes <==
	Name:               addons-823099
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-823099
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c
	                    minikube.k8s.io/name=addons-823099
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T23_39_03_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-823099
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 23:39:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-823099
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 23:52:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 23:50:36 +0000   Mon, 23 Sep 2024 23:38:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 23:50:36 +0000   Mon, 23 Sep 2024 23:38:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 23:50:36 +0000   Mon, 23 Sep 2024 23:38:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 23:50:36 +0000   Mon, 23 Sep 2024 23:39:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.29
	  Hostname:    addons-823099
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 8a6fccd6b081441ba6dbe75955b7b20d
	  System UUID:                8a6fccd6-b081-441b-a6db-e75955b7b20d
	  Boot ID:                    cf9ab547-5350-4131-950e-b30d60dc335d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     hello-world-app-55bf9c44b4-cpzkz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  gcp-auth                    gcp-auth-89d5ffd79-5p9gw                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7c65d6cfc9-h4m6q                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     13m
	  kube-system                 etcd-addons-823099                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         13m
	  kube-system                 kube-apiserver-addons-823099             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-addons-823099    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-pgclm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-addons-823099             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-84c5f94fbc-gpzsm          100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         13m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x6 over 13m)  kubelet          Node addons-823099 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x6 over 13m)  kubelet          Node addons-823099 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x5 over 13m)  kubelet          Node addons-823099 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node addons-823099 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node addons-823099 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node addons-823099 status is now: NodeHasSufficientPID
	  Normal  NodeReady                13m                kubelet          Node addons-823099 status is now: NodeReady
	  Normal  RegisteredNode           13m                node-controller  Node addons-823099 event: Registered Node addons-823099 in Controller
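	The resource summary above shows 850m CPU requested on a 2-CPU node with every listed pod scheduled and nothing Pending, so the registry failure is not a scheduling or capacity problem at this point. If a fresh snapshot is needed, this section corresponds to the following command, assuming the cluster is still up:
	
	    kubectl --context addons-823099 describe node addons-823099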
	
	
	==> dmesg <==
	[  +5.264434] kauditd_printk_skb: 126 callbacks suppressed
	[  +5.568231] kauditd_printk_skb: 64 callbacks suppressed
	[Sep23 23:40] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.362606] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.102278] kauditd_printk_skb: 33 callbacks suppressed
	[  +8.160095] kauditd_printk_skb: 56 callbacks suppressed
	[  +7.116043] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.170005] kauditd_printk_skb: 39 callbacks suppressed
	[  +7.039714] kauditd_printk_skb: 15 callbacks suppressed
	[Sep23 23:41] kauditd_printk_skb: 28 callbacks suppressed
	[Sep23 23:43] kauditd_printk_skb: 28 callbacks suppressed
	[Sep23 23:46] kauditd_printk_skb: 28 callbacks suppressed
	[Sep23 23:48] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.347854] kauditd_printk_skb: 6 callbacks suppressed
	[Sep23 23:49] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.843786] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.530829] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.922969] kauditd_printk_skb: 29 callbacks suppressed
	[  +9.060717] kauditd_printk_skb: 15 callbacks suppressed
	[  +6.416967] kauditd_printk_skb: 2 callbacks suppressed
	[ +21.356928] kauditd_printk_skb: 15 callbacks suppressed
	[Sep23 23:50] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.047412] kauditd_printk_skb: 10 callbacks suppressed
	[Sep23 23:52] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.376350] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [9f9a68d35a007d5e1022596b9270e5f5f9735806aa3bfb8c01b9c7eca1ee01d7] <==
	{"level":"info","ts":"2024-09-23T23:40:25.559087Z","caller":"traceutil/trace.go:171","msg":"trace[1791543403] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1076; }","duration":"142.072858ms","start":"2024-09-23T23:40:25.417009Z","end":"2024-09-23T23:40:25.559082Z","steps":["trace[1791543403] 'range keys from in-memory index tree'  (duration: 141.82946ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T23:40:26.916555Z","caller":"traceutil/trace.go:171","msg":"trace[2109487072] transaction","detail":"{read_only:false; response_revision:1078; number_of_response:1; }","duration":"268.210946ms","start":"2024-09-23T23:40:26.648276Z","end":"2024-09-23T23:40:26.916487Z","steps":["trace[2109487072] 'process raft request'  (duration: 267.388524ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T23:40:28.672019Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"266.58612ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T23:40:28.672067Z","caller":"traceutil/trace.go:171","msg":"trace[1280673185] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1079; }","duration":"266.654436ms","start":"2024-09-23T23:40:28.405402Z","end":"2024-09-23T23:40:28.672056Z","steps":["trace[1280673185] 'range keys from in-memory index tree'  (duration: 266.517094ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T23:40:34.016260Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.157586ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T23:40:34.016381Z","caller":"traceutil/trace.go:171","msg":"trace[611313623] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1111; }","duration":"112.292285ms","start":"2024-09-23T23:40:33.904078Z","end":"2024-09-23T23:40:34.016370Z","steps":["trace[611313623] 'range keys from in-memory index tree'  (duration: 112.015896ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T23:48:51.932772Z","caller":"traceutil/trace.go:171","msg":"trace[938951164] linearizableReadLoop","detail":"{readStateIndex:2080; appliedIndex:2079; }","duration":"354.39626ms","start":"2024-09-23T23:48:51.578308Z","end":"2024-09-23T23:48:51.932704Z","steps":["trace[938951164] 'read index received'  (duration: 354.29951ms)","trace[938951164] 'applied index is now lower than readState.Index'  (duration: 95.406µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-23T23:48:51.932778Z","caller":"traceutil/trace.go:171","msg":"trace[488687598] transaction","detail":"{read_only:false; response_revision:1941; number_of_response:1; }","duration":"381.82676ms","start":"2024-09-23T23:48:51.550869Z","end":"2024-09-23T23:48:51.932696Z","steps":["trace[488687598] 'process raft request'  (duration: 381.708173ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T23:48:51.933377Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T23:48:51.550851Z","time spent":"382.387698ms","remote":"127.0.0.1:42030","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1940 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-09-23T23:48:51.933874Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"355.560182ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/gadget\" ","response":"range_response_count:1 size:573"}
	{"level":"info","ts":"2024-09-23T23:48:51.934178Z","caller":"traceutil/trace.go:171","msg":"trace[470184168] range","detail":"{range_begin:/registry/namespaces/gadget; range_end:; response_count:1; response_revision:1941; }","duration":"355.861174ms","start":"2024-09-23T23:48:51.578304Z","end":"2024-09-23T23:48:51.934165Z","steps":["trace[470184168] 'agreement among raft nodes before linearized reading'  (duration: 355.490488ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T23:48:51.934287Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T23:48:51.578272Z","time spent":"356.004044ms","remote":"127.0.0.1:41984","response type":"/etcdserverpb.KV/Range","request count":0,"request size":29,"response count":1,"response size":597,"request content":"key:\"/registry/namespaces/gadget\" "}
	{"level":"warn","ts":"2024-09-23T23:48:51.934489Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"218.892301ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges/\" range_end:\"/registry/limitranges0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T23:48:51.937084Z","caller":"traceutil/trace.go:171","msg":"trace[537662719] range","detail":"{range_begin:/registry/limitranges/; range_end:/registry/limitranges0; response_count:0; response_revision:1941; }","duration":"222.90761ms","start":"2024-09-23T23:48:51.714161Z","end":"2024-09-23T23:48:51.937069Z","steps":["trace[537662719] 'agreement among raft nodes before linearized reading'  (duration: 218.829971ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T23:48:51.934806Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.030994ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T23:48:51.937364Z","caller":"traceutil/trace.go:171","msg":"trace[1456298499] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1941; }","duration":"140.590398ms","start":"2024-09-23T23:48:51.796765Z","end":"2024-09-23T23:48:51.937356Z","steps":["trace[1456298499] 'agreement among raft nodes before linearized reading'  (duration: 138.021442ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T23:48:58.904119Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1508}
	{"level":"info","ts":"2024-09-23T23:48:58.947160Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1508,"took":"42.440235ms","hash":2968136522,"current-db-size-bytes":6422528,"current-db-size":"6.4 MB","current-db-size-in-use-bytes":3551232,"current-db-size-in-use":"3.6 MB"}
	{"level":"info","ts":"2024-09-23T23:48:58.947271Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2968136522,"revision":1508,"compact-revision":-1}
	{"level":"info","ts":"2024-09-23T23:50:01.039784Z","caller":"traceutil/trace.go:171","msg":"trace[1259820916] linearizableReadLoop","detail":"{readStateIndex:2578; appliedIndex:2577; }","duration":"221.284347ms","start":"2024-09-23T23:50:00.818434Z","end":"2024-09-23T23:50:01.039718Z","steps":["trace[1259820916] 'read index received'  (duration: 221.162419ms)","trace[1259820916] 'applied index is now lower than readState.Index'  (duration: 121.469µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-23T23:50:01.040067Z","caller":"traceutil/trace.go:171","msg":"trace[1983157388] transaction","detail":"{read_only:false; response_revision:2414; number_of_response:1; }","duration":"254.058116ms","start":"2024-09-23T23:50:00.785999Z","end":"2024-09-23T23:50:01.040058Z","steps":["trace[1983157388] 'process raft request'  (duration: 253.637047ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T23:50:01.040336Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"221.886806ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/rolebindings/kube-system/csi-hostpathplugin-health-monitor-role\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T23:50:01.040368Z","caller":"traceutil/trace.go:171","msg":"trace[731576186] range","detail":"{range_begin:/registry/rolebindings/kube-system/csi-hostpathplugin-health-monitor-role; range_end:; response_count:0; response_revision:2414; }","duration":"221.941895ms","start":"2024-09-23T23:50:00.818418Z","end":"2024-09-23T23:50:01.040359Z","steps":["trace[731576186] 'agreement among raft nodes before linearized reading'  (duration: 221.835534ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T23:50:01.040472Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"194.507458ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T23:50:01.040485Z","caller":"traceutil/trace.go:171","msg":"trace[587039362] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2414; }","duration":"194.522207ms","start":"2024-09-23T23:50:00.845959Z","end":"2024-09-23T23:50:01.040481Z","steps":["trace[587039362] 'agreement among raft nodes before linearized reading'  (duration: 194.501032ms)"],"step_count":1}
	
	
	==> gcp-auth [74b1f1c0ea595ad9a254db104eeae56801bee662d3a36f586d4eadc290bd61ab] <==
	2024/09/23 23:40:43 Ready to write response ...
	2024/09/23 23:40:43 Ready to marshal response ...
	2024/09/23 23:40:43 Ready to write response ...
	2024/09/23 23:48:46 Ready to marshal response ...
	2024/09/23 23:48:46 Ready to write response ...
	2024/09/23 23:48:46 Ready to marshal response ...
	2024/09/23 23:48:46 Ready to write response ...
	2024/09/23 23:48:46 Ready to marshal response ...
	2024/09/23 23:48:46 Ready to write response ...
	2024/09/23 23:48:57 Ready to marshal response ...
	2024/09/23 23:48:57 Ready to write response ...
	2024/09/23 23:49:08 Ready to marshal response ...
	2024/09/23 23:49:08 Ready to write response ...
	2024/09/23 23:49:08 Ready to marshal response ...
	2024/09/23 23:49:08 Ready to write response ...
	2024/09/23 23:49:19 Ready to marshal response ...
	2024/09/23 23:49:19 Ready to write response ...
	2024/09/23 23:49:28 Ready to marshal response ...
	2024/09/23 23:49:28 Ready to write response ...
	2024/09/23 23:49:49 Ready to marshal response ...
	2024/09/23 23:49:49 Ready to write response ...
	2024/09/23 23:50:01 Ready to marshal response ...
	2024/09/23 23:50:01 Ready to write response ...
	2024/09/23 23:52:24 Ready to marshal response ...
	2024/09/23 23:52:24 Ready to write response ...
	
	
	==> kernel <==
	 23:52:35 up 14 min,  0 users,  load average: 0.23, 0.36, 0.36
	Linux addons-823099 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [474072cb31ae52ea361c41a97e7a53faf47c3b8ab138749903f3d96750c6fbe2] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0923 23:40:43.868287       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.117.164:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.117.164:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.117.164:443: connect: connection refused" logger="UnhandledError"
	I0923 23:40:43.895370       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0923 23:48:46.697802       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.105.162.121"}
	I0923 23:48:52.010546       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0923 23:48:53.182523       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0923 23:49:35.935062       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0923 23:49:42.049169       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0923 23:50:00.784922       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0923 23:50:01.287315       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.155.191"}
	I0923 23:50:06.825652       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 23:50:06.825802       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 23:50:06.916381       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 23:50:06.916612       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 23:50:06.930144       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 23:50:06.930205       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 23:50:06.977265       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 23:50:06.977309       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 23:50:07.004023       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 23:50:07.004068       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0923 23:50:07.979088       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0923 23:50:08.002119       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0923 23:50:08.006494       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0923 23:52:24.308097       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.245.50"}
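	Note the 23:40:43 error above: the aggregated v1beta1.metrics.k8s.io API failed with "connection refused" against 10.98.117.164:443, meaning the metrics-server Service was not yet answering even though its APIService had been registered. Two hedged follow-up checks against the same cluster, using only the object name taken from that log line:
	
	    kubectl --context addons-823099 get apiservice v1beta1.metrics.k8s.io
	    kubectl --context addons-823099 top nodes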
	
	
	==> kube-controller-manager [f68819f7bf59d41865dee2cade7e270c9133c2249756217428544bee43d41ba6] <==
	W0923 23:51:19.165252       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 23:51:19.165453       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 23:51:27.781077       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 23:51:27.781166       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 23:51:31.764599       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 23:51:31.764665       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 23:51:32.731077       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 23:51:32.731130       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 23:52:16.910696       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 23:52:16.910843       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 23:52:18.410830       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 23:52:18.410984       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 23:52:22.025166       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 23:52:22.025310       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 23:52:24.141297       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="55.708871ms"
	I0923 23:52:24.166652       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="25.24309ms"
	I0923 23:52:24.166772       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="33.763µs"
	I0923 23:52:24.170130       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="34.203µs"
	I0923 23:52:26.625479       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0923 23:52:26.633477       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="13.905µs"
	I0923 23:52:26.643212       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0923 23:52:27.956206       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="12.527889ms"
	I0923 23:52:27.957060       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="33.015µs"
	W0923 23:52:30.936834       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 23:52:30.936887       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [8a92c92c6afdd44516af6d2f0c2ba0c60c100397592f176560d683b0e5c58bbd] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0923 23:39:10.263461       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0923 23:39:10.290295       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.29"]
	E0923 23:39:10.290387       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 23:39:10.374009       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0923 23:39:10.374057       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0923 23:39:10.374082       1 server_linux.go:169] "Using iptables Proxier"
	I0923 23:39:10.378689       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 23:39:10.379053       1 server.go:483] "Version info" version="v1.31.1"
	I0923 23:39:10.379077       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 23:39:10.380385       1 config.go:199] "Starting service config controller"
	I0923 23:39:10.380428       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 23:39:10.380516       1 config.go:105] "Starting endpoint slice config controller"
	I0923 23:39:10.380522       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 23:39:10.381090       1 config.go:328] "Starting node config controller"
	I0923 23:39:10.381097       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 23:39:10.480784       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 23:39:10.480823       1 shared_informer.go:320] Caches are synced for service config
	I0923 23:39:10.481148       1 shared_informer.go:320] Caches are synced for node config
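	The nftables errors at the top of this section are benign cleanup noise: kube-proxy probes for nft support, the minikube kernel rejects it ("Operation not supported"), and the proxy falls back to the iptables backend, as the "Using iptables Proxier" line confirms. To pull the same evidence again, with the pod name taken from the container listing earlier in this report:
	
	    kubectl --context addons-823099 -n kube-system logs kube-proxy-pgclm | grep -i proxier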
	
	
	==> kube-scheduler [61a194a33123eba4aa22b6f557d4ea66df750535623ed92cd3efa6db3df98960] <==
	W0923 23:39:00.991081       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 23:39:00.991286       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 23:39:00.992946       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0923 23:39:00.993078       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 23:39:01.018368       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0923 23:39:01.018501       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 23:39:01.040390       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0923 23:39:01.040489       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 23:39:01.048983       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0923 23:39:01.049065       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 23:39:01.052890       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0923 23:39:01.053031       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 23:39:01.108077       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0923 23:39:01.108124       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 23:39:01.219095       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0923 23:39:01.219241       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 23:39:01.237429       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0923 23:39:01.237504       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 23:39:01.286444       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0923 23:39:01.286579       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 23:39:01.476657       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 23:39:01.476716       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0923 23:39:01.491112       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 23:39:01.491224       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0923 23:39:03.306204       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 23 23:52:24 addons-823099 kubelet[1203]: I0923 23:52:24.196500    1203 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtqw7\" (UniqueName: \"kubernetes.io/projected/ae9e1b8b-5470-4765-a8d1-7e21fa0eb9b0-kube-api-access-jtqw7\") pod \"hello-world-app-55bf9c44b4-cpzkz\" (UID: \"ae9e1b8b-5470-4765-a8d1-7e21fa0eb9b0\") " pod="default/hello-world-app-55bf9c44b4-cpzkz"
	Sep 23 23:52:25 addons-823099 kubelet[1203]: I0923 23:52:25.304480    1203 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c98v\" (UniqueName: \"kubernetes.io/projected/1194cadb-80b1-4fad-b99a-0afbc0be0b40-kube-api-access-7c98v\") pod \"1194cadb-80b1-4fad-b99a-0afbc0be0b40\" (UID: \"1194cadb-80b1-4fad-b99a-0afbc0be0b40\") "
	Sep 23 23:52:25 addons-823099 kubelet[1203]: I0923 23:52:25.306465    1203 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1194cadb-80b1-4fad-b99a-0afbc0be0b40-kube-api-access-7c98v" (OuterVolumeSpecName: "kube-api-access-7c98v") pod "1194cadb-80b1-4fad-b99a-0afbc0be0b40" (UID: "1194cadb-80b1-4fad-b99a-0afbc0be0b40"). InnerVolumeSpecName "kube-api-access-7c98v". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 23:52:25 addons-823099 kubelet[1203]: I0923 23:52:25.405349    1203 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-7c98v\" (UniqueName: \"kubernetes.io/projected/1194cadb-80b1-4fad-b99a-0afbc0be0b40-kube-api-access-7c98v\") on node \"addons-823099\" DevicePath \"\""
	Sep 23 23:52:25 addons-823099 kubelet[1203]: I0923 23:52:25.916807    1203 scope.go:117] "RemoveContainer" containerID="9bcdf5e1463fbbd6c72365fb6f4f0b12b8d0aa1cb56e559f9e8d68252442f6a0"
	Sep 23 23:52:25 addons-823099 kubelet[1203]: I0923 23:52:25.959353    1203 scope.go:117] "RemoveContainer" containerID="9bcdf5e1463fbbd6c72365fb6f4f0b12b8d0aa1cb56e559f9e8d68252442f6a0"
	Sep 23 23:52:25 addons-823099 kubelet[1203]: E0923 23:52:25.959923    1203 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bcdf5e1463fbbd6c72365fb6f4f0b12b8d0aa1cb56e559f9e8d68252442f6a0\": container with ID starting with 9bcdf5e1463fbbd6c72365fb6f4f0b12b8d0aa1cb56e559f9e8d68252442f6a0 not found: ID does not exist" containerID="9bcdf5e1463fbbd6c72365fb6f4f0b12b8d0aa1cb56e559f9e8d68252442f6a0"
	Sep 23 23:52:25 addons-823099 kubelet[1203]: I0923 23:52:25.960035    1203 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bcdf5e1463fbbd6c72365fb6f4f0b12b8d0aa1cb56e559f9e8d68252442f6a0"} err="failed to get container status \"9bcdf5e1463fbbd6c72365fb6f4f0b12b8d0aa1cb56e559f9e8d68252442f6a0\": rpc error: code = NotFound desc = could not find container \"9bcdf5e1463fbbd6c72365fb6f4f0b12b8d0aa1cb56e559f9e8d68252442f6a0\": container with ID starting with 9bcdf5e1463fbbd6c72365fb6f4f0b12b8d0aa1cb56e559f9e8d68252442f6a0 not found: ID does not exist"
	Sep 23 23:52:26 addons-823099 kubelet[1203]: I0923 23:52:26.716045    1203 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1194cadb-80b1-4fad-b99a-0afbc0be0b40" path="/var/lib/kubelet/pods/1194cadb-80b1-4fad-b99a-0afbc0be0b40/volumes"
	Sep 23 23:52:26 addons-823099 kubelet[1203]: I0923 23:52:26.716588    1203 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45b6e4cc-f5cc-4955-9bdb-d0275d9f6354" path="/var/lib/kubelet/pods/45b6e4cc-f5cc-4955-9bdb-d0275d9f6354/volumes"
	Sep 23 23:52:26 addons-823099 kubelet[1203]: I0923 23:52:26.717110    1203 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64ca1527-9535-42ac-98cf-e6f4a1e27173" path="/var/lib/kubelet/pods/64ca1527-9535-42ac-98cf-e6f4a1e27173/volumes"
	Sep 23 23:52:29 addons-823099 kubelet[1203]: I0923 23:52:29.839276    1203 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrqw4\" (UniqueName: \"kubernetes.io/projected/28f50294-29c5-4d74-8c7e-4b7b748d87b1-kube-api-access-hrqw4\") pod \"28f50294-29c5-4d74-8c7e-4b7b748d87b1\" (UID: \"28f50294-29c5-4d74-8c7e-4b7b748d87b1\") "
	Sep 23 23:52:29 addons-823099 kubelet[1203]: I0923 23:52:29.839358    1203 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/28f50294-29c5-4d74-8c7e-4b7b748d87b1-webhook-cert\") pod \"28f50294-29c5-4d74-8c7e-4b7b748d87b1\" (UID: \"28f50294-29c5-4d74-8c7e-4b7b748d87b1\") "
	Sep 23 23:52:29 addons-823099 kubelet[1203]: I0923 23:52:29.841212    1203 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28f50294-29c5-4d74-8c7e-4b7b748d87b1-kube-api-access-hrqw4" (OuterVolumeSpecName: "kube-api-access-hrqw4") pod "28f50294-29c5-4d74-8c7e-4b7b748d87b1" (UID: "28f50294-29c5-4d74-8c7e-4b7b748d87b1"). InnerVolumeSpecName "kube-api-access-hrqw4". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 23:52:29 addons-823099 kubelet[1203]: I0923 23:52:29.842785    1203 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28f50294-29c5-4d74-8c7e-4b7b748d87b1-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "28f50294-29c5-4d74-8c7e-4b7b748d87b1" (UID: "28f50294-29c5-4d74-8c7e-4b7b748d87b1"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 23 23:52:29 addons-823099 kubelet[1203]: I0923 23:52:29.940051    1203 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-hrqw4\" (UniqueName: \"kubernetes.io/projected/28f50294-29c5-4d74-8c7e-4b7b748d87b1-kube-api-access-hrqw4\") on node \"addons-823099\" DevicePath \"\""
	Sep 23 23:52:29 addons-823099 kubelet[1203]: I0923 23:52:29.940076    1203 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/28f50294-29c5-4d74-8c7e-4b7b748d87b1-webhook-cert\") on node \"addons-823099\" DevicePath \"\""
	Sep 23 23:52:29 addons-823099 kubelet[1203]: I0923 23:52:29.943935    1203 scope.go:117] "RemoveContainer" containerID="f9c92c116a3db62103d76eeee96f945e98b377332915558963435b3c40a4249a"
	Sep 23 23:52:29 addons-823099 kubelet[1203]: I0923 23:52:29.967874    1203 scope.go:117] "RemoveContainer" containerID="f9c92c116a3db62103d76eeee96f945e98b377332915558963435b3c40a4249a"
	Sep 23 23:52:29 addons-823099 kubelet[1203]: E0923 23:52:29.968537    1203 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9c92c116a3db62103d76eeee96f945e98b377332915558963435b3c40a4249a\": container with ID starting with f9c92c116a3db62103d76eeee96f945e98b377332915558963435b3c40a4249a not found: ID does not exist" containerID="f9c92c116a3db62103d76eeee96f945e98b377332915558963435b3c40a4249a"
	Sep 23 23:52:29 addons-823099 kubelet[1203]: I0923 23:52:29.968608    1203 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9c92c116a3db62103d76eeee96f945e98b377332915558963435b3c40a4249a"} err="failed to get container status \"f9c92c116a3db62103d76eeee96f945e98b377332915558963435b3c40a4249a\": rpc error: code = NotFound desc = could not find container \"f9c92c116a3db62103d76eeee96f945e98b377332915558963435b3c40a4249a\": container with ID starting with f9c92c116a3db62103d76eeee96f945e98b377332915558963435b3c40a4249a not found: ID does not exist"
	Sep 23 23:52:30 addons-823099 kubelet[1203]: I0923 23:52:30.716464    1203 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28f50294-29c5-4d74-8c7e-4b7b748d87b1" path="/var/lib/kubelet/pods/28f50294-29c5-4d74-8c7e-4b7b748d87b1/volumes"
	Sep 23 23:52:31 addons-823099 kubelet[1203]: E0923 23:52:31.714327    1203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="764b3703-5e4f-45c1-941e-d137062ab058"
	Sep 23 23:52:33 addons-823099 kubelet[1203]: E0923 23:52:33.060024    1203 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727135553059578652,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 23:52:33 addons-823099 kubelet[1203]: E0923 23:52:33.060048    1203 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727135553059578652,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [9490eb926210d595e48349ae8ba44feb029a56e6c83d0e8f8cfad8e8c1d9196b] <==
	I0923 23:39:15.614353       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0923 23:39:15.653018       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0923 23:39:15.653076       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0923 23:39:15.688317       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0923 23:39:15.688955       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"68f268eb-f84e-4f3d-800b-baa6449c8a15", APIVersion:"v1", ResourceVersion:"700", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-823099_ffe49a31-62a1-4931-9d5c-b17e459b44c9 became leader
	I0923 23:39:15.689716       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-823099_ffe49a31-62a1-4931-9d5c-b17e459b44c9!
	I0923 23:39:15.790555       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-823099_ffe49a31-62a1-4931-9d5c-b17e459b44c9!
	

                                                
                                                
-- /stdout --
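The kube-scheduler entries at the top of this log (the repeated "is forbidden" list/watch failures for storageclasses, replicasets, namespaces, CSI objects, persistentvolumes, configmaps and replicationcontrollers) are typical start-up noise: the scheduler's informers begin listing before the cluster's RBAC bootstrap has been applied, and the block ends with "Caches are synced", so they resolve on their own. If one wanted to confirm the grant explicitly after start-up, a minimal client-go check could look like the sketch below; it is a hypothetical stand-alone helper, not part of the minikube test suite.

package main

import (
	"context"
	"fmt"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig (path choice is an assumption).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Ask the API server whether system:kube-scheduler may list StorageClasses,
	// i.e. whether the RBAC grant the warnings above complain about is in place.
	sar := &authv1.SubjectAccessReview{
		Spec: authv1.SubjectAccessReviewSpec{
			User: "system:kube-scheduler",
			ResourceAttributes: &authv1.ResourceAttributes{
				Group:    "storage.k8s.io",
				Resource: "storageclasses",
				Verb:     "list",
			},
		},
	}
	resp, err := cs.AuthorizationV1().SubjectAccessReviews().Create(context.Background(), sar, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
}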
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-823099 -n addons-823099
helpers_test.go:261: (dbg) Run:  kubectl --context addons-823099 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-823099 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-823099 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-823099/192.168.39.29
	Start Time:       Mon, 23 Sep 2024 23:40:43 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.22
	IPs:
	  IP:  10.244.0.22
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nvbxz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-nvbxz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  11m                 default-scheduler  Successfully assigned default/busybox to addons-823099
	  Normal   Pulling    10m (x4 over 11m)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     10m (x4 over 11m)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     10m (x4 over 11m)   kubelet            Error: ErrImagePull
	  Warning  Failed     10m (x6 over 11m)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    99s (x42 over 11m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (155.39s)
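For context on the describe output above: busybox is the only non-running pod because the pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc was rejected with an auth-token error, so the pod never left ImagePullBackOff. The "waiting ... for pods matching" steps that appear throughout these tests amount to a label-selector poll; a rough client-go sketch is shown below, with an illustrative function name and polling interval rather than minikube's actual helper.

// Sketch only: not minikube's helpers_test implementation.
package example

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForLabel polls until every pod matching selector in namespace ns is
// Running, or until timeout expires. cs is an already-configured clientset.
func waitForLabel(ctx context.Context, cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true, func(ctx context.Context) (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil || len(pods.Items) == 0 {
			return false, nil // transient API error, or nothing scheduled yet: keep polling
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				// A pod stuck in ImagePullBackOff (like busybox above) stays Pending,
				// so the poll runs until the timeout and the test reports a failure.
				return false, nil
			}
		}
		return true, nil
	})
}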

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (349.62s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 3.705735ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-gpzsm" [d5937c63-7f30-477a-a36e-e7e6cb8c64e5] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.016634647s
addons_test.go:413: (dbg) Run:  kubectl --context addons-823099 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-823099 top pods -n kube-system: exit status 1 (96.368084ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-h4m6q, age: 9m44.043548869s

                                                
                                                
** /stderr **
I0923 23:48:51.045234   14793 retry.go:31] will retry after 2.599799284s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-823099 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-823099 top pods -n kube-system: exit status 1 (71.043758ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-h4m6q, age: 9m46.715403666s

                                                
                                                
** /stderr **
I0923 23:48:53.717134   14793 retry.go:31] will retry after 3.09136107s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-823099 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-823099 top pods -n kube-system: exit status 1 (69.063149ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-h4m6q, age: 9m49.877262713s

                                                
                                                
** /stderr **
I0923 23:48:56.878894   14793 retry.go:31] will retry after 4.203167848s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-823099 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-823099 top pods -n kube-system: exit status 1 (69.932269ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-h4m6q, age: 9m54.150496777s

                                                
                                                
** /stderr **
I0923 23:49:01.152275   14793 retry.go:31] will retry after 12.565098759s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-823099 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-823099 top pods -n kube-system: exit status 1 (71.611602ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-h4m6q, age: 10m6.788278894s

                                                
                                                
** /stderr **
I0923 23:49:13.789935   14793 retry.go:31] will retry after 18.099087041s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-823099 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-823099 top pods -n kube-system: exit status 1 (67.958298ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-h4m6q, age: 10m24.955813695s

                                                
                                                
** /stderr **
I0923 23:49:31.957640   14793 retry.go:31] will retry after 33.635707639s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-823099 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-823099 top pods -n kube-system: exit status 1 (65.885221ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-h4m6q, age: 10m58.657777733s

                                                
                                                
** /stderr **
I0923 23:50:05.659693   14793 retry.go:31] will retry after 50.320290921s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-823099 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-823099 top pods -n kube-system: exit status 1 (62.896754ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-h4m6q, age: 11m49.043336548s

                                                
                                                
** /stderr **
I0923 23:50:56.045277   14793 retry.go:31] will retry after 39.075451438s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-823099 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-823099 top pods -n kube-system: exit status 1 (69.392198ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-h4m6q, age: 12m28.189326933s

                                                
                                                
** /stderr **
I0923 23:51:35.191067   14793 retry.go:31] will retry after 1m8.958063397s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-823099 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-823099 top pods -n kube-system: exit status 1 (70.004832ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-h4m6q, age: 13m37.217776485s

                                                
                                                
** /stderr **
I0923 23:52:44.219529   14793 retry.go:31] will retry after 34.02876768s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-823099 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-823099 top pods -n kube-system: exit status 1 (65.014312ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-h4m6q, age: 14m11.315730613s

                                                
                                                
** /stderr **
I0923 23:53:18.317810   14793 retry.go:31] will retry after 1m14.384303539s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-823099 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-823099 top pods -n kube-system: exit status 1 (68.241224ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-h4m6q, age: 15m25.772283732s

                                                
                                                
** /stderr **
addons_test.go:427: failed checking metric server: exit status 1
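The transcript above shows the shape of this check: kubectl top pods is re-run with growing (jittered) delays, from a few seconds up to more than a minute, until either the metrics API serves data or the overall budget runs out, at which point the test records exit status 1. A minimal stand-alone version of that loop, using hypothetical names and timeouts rather than minikube's actual retry helper, might look like this:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// topPodsWithRetry shells out to kubectl the way the transcript above does and
// retries with a doubling delay (capped at maxDelay) until metrics are served
// or ctx expires. All names and timeouts here are illustrative.
func topPodsWithRetry(ctx context.Context, kubeContext string, maxDelay time.Duration) error {
	delay := 2 * time.Second
	for {
		out, err := exec.CommandContext(ctx, "kubectl", "--context", kubeContext,
			"top", "pods", "-n", "kube-system").CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("metrics never became available: %v (last output: %s)", err, out)
		case <-time.After(delay):
		}
		if delay < maxDelay {
			delay *= 2
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 7*time.Minute)
	defer cancel()
	if err := topPodsWithRetry(ctx, "addons-823099", time.Minute); err != nil {
		fmt.Println(err)
	}
}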
addons_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p addons-823099 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-823099 -n addons-823099
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-823099 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-823099 logs -n 25: (1.326631445s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-446089                                                                     | download-only-446089 | jenkins | v1.34.0 | 23 Sep 24 23:38 UTC | 23 Sep 24 23:38 UTC |
	| delete  | -p download-only-098425                                                                     | download-only-098425 | jenkins | v1.34.0 | 23 Sep 24 23:38 UTC | 23 Sep 24 23:38 UTC |
	| delete  | -p download-only-446089                                                                     | download-only-446089 | jenkins | v1.34.0 | 23 Sep 24 23:38 UTC | 23 Sep 24 23:38 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-013301 | jenkins | v1.34.0 | 23 Sep 24 23:38 UTC |                     |
	|         | binary-mirror-013301                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:39559                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-013301                                                                     | binary-mirror-013301 | jenkins | v1.34.0 | 23 Sep 24 23:38 UTC | 23 Sep 24 23:38 UTC |
	| addons  | disable dashboard -p                                                                        | addons-823099        | jenkins | v1.34.0 | 23 Sep 24 23:38 UTC |                     |
	|         | addons-823099                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-823099        | jenkins | v1.34.0 | 23 Sep 24 23:38 UTC |                     |
	|         | addons-823099                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-823099 --wait=true                                                                | addons-823099        | jenkins | v1.34.0 | 23 Sep 24 23:38 UTC | 23 Sep 24 23:40 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-823099        | jenkins | v1.34.0 | 23 Sep 24 23:48 UTC | 23 Sep 24 23:48 UTC |
	|         | -p addons-823099                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-823099        | jenkins | v1.34.0 | 23 Sep 24 23:48 UTC | 23 Sep 24 23:48 UTC |
	|         | addons-823099                                                                               |                      |         |         |                     |                     |
	| addons  | addons-823099 addons disable                                                                | addons-823099        | jenkins | v1.34.0 | 23 Sep 24 23:48 UTC | 23 Sep 24 23:49 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-823099 addons disable                                                                | addons-823099        | jenkins | v1.34.0 | 23 Sep 24 23:49 UTC | 23 Sep 24 23:49 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-823099        | jenkins | v1.34.0 | 23 Sep 24 23:49 UTC | 23 Sep 24 23:49 UTC |
	|         | -p addons-823099                                                                            |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-823099        | jenkins | v1.34.0 | 23 Sep 24 23:49 UTC | 23 Sep 24 23:49 UTC |
	|         | addons-823099                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-823099 ssh cat                                                                       | addons-823099        | jenkins | v1.34.0 | 23 Sep 24 23:49 UTC | 23 Sep 24 23:49 UTC |
	|         | /opt/local-path-provisioner/pvc-eab7f679-3b16-4b54-94e5-e626a1dcbb7e_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-823099 addons disable                                                                | addons-823099        | jenkins | v1.34.0 | 23 Sep 24 23:49 UTC | 23 Sep 24 23:50 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-823099 ip                                                                            | addons-823099        | jenkins | v1.34.0 | 23 Sep 24 23:49 UTC | 23 Sep 24 23:49 UTC |
	| addons  | addons-823099 addons disable                                                                | addons-823099        | jenkins | v1.34.0 | 23 Sep 24 23:49 UTC | 23 Sep 24 23:49 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-823099 addons                                                                        | addons-823099        | jenkins | v1.34.0 | 23 Sep 24 23:49 UTC | 23 Sep 24 23:50 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-823099 addons                                                                        | addons-823099        | jenkins | v1.34.0 | 23 Sep 24 23:50 UTC | 23 Sep 24 23:50 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-823099 ssh curl -s                                                                   | addons-823099        | jenkins | v1.34.0 | 23 Sep 24 23:50 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-823099 ip                                                                            | addons-823099        | jenkins | v1.34.0 | 23 Sep 24 23:52 UTC | 23 Sep 24 23:52 UTC |
	| addons  | addons-823099 addons disable                                                                | addons-823099        | jenkins | v1.34.0 | 23 Sep 24 23:52 UTC | 23 Sep 24 23:52 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-823099 addons disable                                                                | addons-823099        | jenkins | v1.34.0 | 23 Sep 24 23:52 UTC | 23 Sep 24 23:52 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-823099 addons                                                                        | addons-823099        | jenkins | v1.34.0 | 23 Sep 24 23:54 UTC | 23 Sep 24 23:54 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 23:38:22
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 23:38:22.858727   15521 out.go:345] Setting OutFile to fd 1 ...
	I0923 23:38:22.858952   15521 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 23:38:22.858959   15521 out.go:358] Setting ErrFile to fd 2...
	I0923 23:38:22.858964   15521 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 23:38:22.859165   15521 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7623/.minikube/bin
	I0923 23:38:22.859782   15521 out.go:352] Setting JSON to false
	I0923 23:38:22.860641   15521 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1247,"bootTime":1727133456,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 23:38:22.860727   15521 start.go:139] virtualization: kvm guest
	I0923 23:38:22.862749   15521 out.go:177] * [addons-823099] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 23:38:22.863989   15521 out.go:177]   - MINIKUBE_LOCATION=19696
	I0923 23:38:22.863991   15521 notify.go:220] Checking for updates...
	I0923 23:38:22.865162   15521 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 23:38:22.866358   15521 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0923 23:38:22.867535   15521 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7623/.minikube
	I0923 23:38:22.868620   15521 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 23:38:22.869743   15521 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 23:38:22.870899   15521 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 23:38:22.903588   15521 out.go:177] * Using the kvm2 driver based on user configuration
	I0923 23:38:22.904660   15521 start.go:297] selected driver: kvm2
	I0923 23:38:22.904673   15521 start.go:901] validating driver "kvm2" against <nil>
	I0923 23:38:22.904687   15521 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 23:38:22.905400   15521 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 23:38:22.905500   15521 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19696-7623/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0923 23:38:22.920929   15521 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0923 23:38:22.920979   15521 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 23:38:22.921207   15521 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 23:38:22.921237   15521 cni.go:84] Creating CNI manager for ""
	I0923 23:38:22.921285   15521 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0923 23:38:22.921293   15521 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 23:38:22.921344   15521 start.go:340] cluster config:
	{Name:addons-823099 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-823099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 23:38:22.921436   15521 iso.go:125] acquiring lock: {Name:mk8a983e49e920eac32e0f79c34143c3d478115e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 23:38:22.923320   15521 out.go:177] * Starting "addons-823099" primary control-plane node in "addons-823099" cluster
	I0923 23:38:22.925095   15521 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 23:38:22.925153   15521 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0923 23:38:22.925164   15521 cache.go:56] Caching tarball of preloaded images
	I0923 23:38:22.925267   15521 preload.go:172] Found /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0923 23:38:22.925281   15521 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0923 23:38:22.925621   15521 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/config.json ...
	I0923 23:38:22.925656   15521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/config.json: {Name:mk1d938d4754f5dff88f0edaafe7f2a9698c52bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:22.925841   15521 start.go:360] acquireMachinesLock for addons-823099: {Name:mkdd0eb053efc5a47fd01c8410d7f603dccb8d0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 23:38:22.925907   15521 start.go:364] duration metric: took 50.085µs to acquireMachinesLock for "addons-823099"
	I0923 23:38:22.926043   15521 start.go:93] Provisioning new machine with config: &{Name:addons-823099 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-823099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 23:38:22.926135   15521 start.go:125] createHost starting for "" (driver="kvm2")
	I0923 23:38:22.928519   15521 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0923 23:38:22.928694   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:38:22.928738   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:38:22.943674   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41089
	I0923 23:38:22.944239   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:38:22.944884   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:38:22.944906   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:38:22.945372   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:38:22.945633   15521 main.go:141] libmachine: (addons-823099) Calling .GetMachineName
	I0923 23:38:22.945846   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:38:22.946076   15521 start.go:159] libmachine.API.Create for "addons-823099" (driver="kvm2")
	I0923 23:38:22.946111   15521 client.go:168] LocalClient.Create starting
	I0923 23:38:22.946149   15521 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem
	I0923 23:38:23.071878   15521 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem
	I0923 23:38:23.150247   15521 main.go:141] libmachine: Running pre-create checks...
	I0923 23:38:23.150273   15521 main.go:141] libmachine: (addons-823099) Calling .PreCreateCheck
	I0923 23:38:23.150796   15521 main.go:141] libmachine: (addons-823099) Calling .GetConfigRaw
	I0923 23:38:23.151207   15521 main.go:141] libmachine: Creating machine...
	I0923 23:38:23.151222   15521 main.go:141] libmachine: (addons-823099) Calling .Create
	I0923 23:38:23.151379   15521 main.go:141] libmachine: (addons-823099) Creating KVM machine...
	I0923 23:38:23.152659   15521 main.go:141] libmachine: (addons-823099) DBG | found existing default KVM network
	I0923 23:38:23.153379   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:23.153219   15543 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ba0}
	I0923 23:38:23.153400   15521 main.go:141] libmachine: (addons-823099) DBG | created network xml: 
	I0923 23:38:23.153412   15521 main.go:141] libmachine: (addons-823099) DBG | <network>
	I0923 23:38:23.153420   15521 main.go:141] libmachine: (addons-823099) DBG |   <name>mk-addons-823099</name>
	I0923 23:38:23.153428   15521 main.go:141] libmachine: (addons-823099) DBG |   <dns enable='no'/>
	I0923 23:38:23.153434   15521 main.go:141] libmachine: (addons-823099) DBG |   
	I0923 23:38:23.153445   15521 main.go:141] libmachine: (addons-823099) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0923 23:38:23.153455   15521 main.go:141] libmachine: (addons-823099) DBG |     <dhcp>
	I0923 23:38:23.153464   15521 main.go:141] libmachine: (addons-823099) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0923 23:38:23.153470   15521 main.go:141] libmachine: (addons-823099) DBG |     </dhcp>
	I0923 23:38:23.153485   15521 main.go:141] libmachine: (addons-823099) DBG |   </ip>
	I0923 23:38:23.153497   15521 main.go:141] libmachine: (addons-823099) DBG |   
	I0923 23:38:23.153527   15521 main.go:141] libmachine: (addons-823099) DBG | </network>
	I0923 23:38:23.153541   15521 main.go:141] libmachine: (addons-823099) DBG | 
	I0923 23:38:23.159364   15521 main.go:141] libmachine: (addons-823099) DBG | trying to create private KVM network mk-addons-823099 192.168.39.0/24...
	I0923 23:38:23.227848   15521 main.go:141] libmachine: (addons-823099) Setting up store path in /home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099 ...
	I0923 23:38:23.227898   15521 main.go:141] libmachine: (addons-823099) Building disk image from file:///home/jenkins/minikube-integration/19696-7623/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0923 23:38:23.227909   15521 main.go:141] libmachine: (addons-823099) DBG | private KVM network mk-addons-823099 192.168.39.0/24 created
	I0923 23:38:23.227930   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:23.227792   15543 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19696-7623/.minikube
	I0923 23:38:23.227962   15521 main.go:141] libmachine: (addons-823099) Downloading /home/jenkins/minikube-integration/19696-7623/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19696-7623/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0923 23:38:23.481605   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:23.481476   15543 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa...
	I0923 23:38:23.632238   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:23.632114   15543 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/addons-823099.rawdisk...
	I0923 23:38:23.632260   15521 main.go:141] libmachine: (addons-823099) DBG | Writing magic tar header
	I0923 23:38:23.632269   15521 main.go:141] libmachine: (addons-823099) DBG | Writing SSH key tar header
	I0923 23:38:23.632282   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:23.632226   15543 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099 ...
	I0923 23:38:23.632439   15521 main.go:141] libmachine: (addons-823099) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099
	I0923 23:38:23.632473   15521 main.go:141] libmachine: (addons-823099) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099 (perms=drwx------)
	I0923 23:38:23.632484   15521 main.go:141] libmachine: (addons-823099) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623/.minikube/machines
	I0923 23:38:23.632491   15521 main.go:141] libmachine: (addons-823099) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623/.minikube/machines (perms=drwxr-xr-x)
	I0923 23:38:23.632497   15521 main.go:141] libmachine: (addons-823099) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623/.minikube
	I0923 23:38:23.632507   15521 main.go:141] libmachine: (addons-823099) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623
	I0923 23:38:23.632513   15521 main.go:141] libmachine: (addons-823099) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0923 23:38:23.632518   15521 main.go:141] libmachine: (addons-823099) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623/.minikube (perms=drwxr-xr-x)
	I0923 23:38:23.632528   15521 main.go:141] libmachine: (addons-823099) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623 (perms=drwxrwxr-x)
	I0923 23:38:23.632536   15521 main.go:141] libmachine: (addons-823099) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0923 23:38:23.632546   15521 main.go:141] libmachine: (addons-823099) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0923 23:38:23.632550   15521 main.go:141] libmachine: (addons-823099) Creating domain...
	I0923 23:38:23.632558   15521 main.go:141] libmachine: (addons-823099) DBG | Checking permissions on dir: /home/jenkins
	I0923 23:38:23.632570   15521 main.go:141] libmachine: (addons-823099) DBG | Checking permissions on dir: /home
	I0923 23:38:23.632578   15521 main.go:141] libmachine: (addons-823099) DBG | Skipping /home - not owner
	I0923 23:38:23.633510   15521 main.go:141] libmachine: (addons-823099) define libvirt domain using xml: 
	I0923 23:38:23.633532   15521 main.go:141] libmachine: (addons-823099) <domain type='kvm'>
	I0923 23:38:23.633543   15521 main.go:141] libmachine: (addons-823099)   <name>addons-823099</name>
	I0923 23:38:23.633550   15521 main.go:141] libmachine: (addons-823099)   <memory unit='MiB'>4000</memory>
	I0923 23:38:23.633564   15521 main.go:141] libmachine: (addons-823099)   <vcpu>2</vcpu>
	I0923 23:38:23.633572   15521 main.go:141] libmachine: (addons-823099)   <features>
	I0923 23:38:23.633596   15521 main.go:141] libmachine: (addons-823099)     <acpi/>
	I0923 23:38:23.633612   15521 main.go:141] libmachine: (addons-823099)     <apic/>
	I0923 23:38:23.633621   15521 main.go:141] libmachine: (addons-823099)     <pae/>
	I0923 23:38:23.633628   15521 main.go:141] libmachine: (addons-823099)     
	I0923 23:38:23.633638   15521 main.go:141] libmachine: (addons-823099)   </features>
	I0923 23:38:23.633646   15521 main.go:141] libmachine: (addons-823099)   <cpu mode='host-passthrough'>
	I0923 23:38:23.633653   15521 main.go:141] libmachine: (addons-823099)   
	I0923 23:38:23.633673   15521 main.go:141] libmachine: (addons-823099)   </cpu>
	I0923 23:38:23.633707   15521 main.go:141] libmachine: (addons-823099)   <os>
	I0923 23:38:23.633725   15521 main.go:141] libmachine: (addons-823099)     <type>hvm</type>
	I0923 23:38:23.633734   15521 main.go:141] libmachine: (addons-823099)     <boot dev='cdrom'/>
	I0923 23:38:23.633739   15521 main.go:141] libmachine: (addons-823099)     <boot dev='hd'/>
	I0923 23:38:23.633745   15521 main.go:141] libmachine: (addons-823099)     <bootmenu enable='no'/>
	I0923 23:38:23.633750   15521 main.go:141] libmachine: (addons-823099)   </os>
	I0923 23:38:23.633764   15521 main.go:141] libmachine: (addons-823099)   <devices>
	I0923 23:38:23.633771   15521 main.go:141] libmachine: (addons-823099)     <disk type='file' device='cdrom'>
	I0923 23:38:23.633779   15521 main.go:141] libmachine: (addons-823099)       <source file='/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/boot2docker.iso'/>
	I0923 23:38:23.633784   15521 main.go:141] libmachine: (addons-823099)       <target dev='hdc' bus='scsi'/>
	I0923 23:38:23.633791   15521 main.go:141] libmachine: (addons-823099)       <readonly/>
	I0923 23:38:23.633799   15521 main.go:141] libmachine: (addons-823099)     </disk>
	I0923 23:38:23.633811   15521 main.go:141] libmachine: (addons-823099)     <disk type='file' device='disk'>
	I0923 23:38:23.633821   15521 main.go:141] libmachine: (addons-823099)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0923 23:38:23.633829   15521 main.go:141] libmachine: (addons-823099)       <source file='/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/addons-823099.rawdisk'/>
	I0923 23:38:23.633836   15521 main.go:141] libmachine: (addons-823099)       <target dev='hda' bus='virtio'/>
	I0923 23:38:23.633841   15521 main.go:141] libmachine: (addons-823099)     </disk>
	I0923 23:38:23.633848   15521 main.go:141] libmachine: (addons-823099)     <interface type='network'>
	I0923 23:38:23.633854   15521 main.go:141] libmachine: (addons-823099)       <source network='mk-addons-823099'/>
	I0923 23:38:23.633860   15521 main.go:141] libmachine: (addons-823099)       <model type='virtio'/>
	I0923 23:38:23.633865   15521 main.go:141] libmachine: (addons-823099)     </interface>
	I0923 23:38:23.633870   15521 main.go:141] libmachine: (addons-823099)     <interface type='network'>
	I0923 23:38:23.633885   15521 main.go:141] libmachine: (addons-823099)       <source network='default'/>
	I0923 23:38:23.633892   15521 main.go:141] libmachine: (addons-823099)       <model type='virtio'/>
	I0923 23:38:23.633904   15521 main.go:141] libmachine: (addons-823099)     </interface>
	I0923 23:38:23.633919   15521 main.go:141] libmachine: (addons-823099)     <serial type='pty'>
	I0923 23:38:23.633928   15521 main.go:141] libmachine: (addons-823099)       <target port='0'/>
	I0923 23:38:23.633938   15521 main.go:141] libmachine: (addons-823099)     </serial>
	I0923 23:38:23.633945   15521 main.go:141] libmachine: (addons-823099)     <console type='pty'>
	I0923 23:38:23.633957   15521 main.go:141] libmachine: (addons-823099)       <target type='serial' port='0'/>
	I0923 23:38:23.633964   15521 main.go:141] libmachine: (addons-823099)     </console>
	I0923 23:38:23.633975   15521 main.go:141] libmachine: (addons-823099)     <rng model='virtio'>
	I0923 23:38:23.633986   15521 main.go:141] libmachine: (addons-823099)       <backend model='random'>/dev/random</backend>
	I0923 23:38:23.633996   15521 main.go:141] libmachine: (addons-823099)     </rng>
	I0923 23:38:23.634010   15521 main.go:141] libmachine: (addons-823099)     
	I0923 23:38:23.634040   15521 main.go:141] libmachine: (addons-823099)     
	I0923 23:38:23.634058   15521 main.go:141] libmachine: (addons-823099)   </devices>
	I0923 23:38:23.634064   15521 main.go:141] libmachine: (addons-823099) </domain>
	I0923 23:38:23.634068   15521 main.go:141] libmachine: (addons-823099) 
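The block above is the raw libvirt domain XML that libmachine defines for the guest. As an illustration only (not minikube's actual code; the struct, the trimmed template, and the virsh call are assumptions made for this sketch), the same kind of definition can be rendered from a small Go template and registered with "virsh define":

package main

import (
	"os"
	"os/exec"
	"text/template"
)

// domainConfig holds the handful of values that vary per machine.
// Field names here are illustrative, not minikube's own types.
type domainConfig struct {
	Name      string
	MemoryMiB int
	VCPUs     int
	ISOPath   string
	DiskPath  string
	Network   string
}

const domainXML = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.VCPUs}}</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
  <devices>
    <disk type='file' device='cdrom'><source file='{{.ISOPath}}'/><target dev='hdc' bus='scsi'/><readonly/></disk>
    <disk type='file' device='disk'><driver name='qemu' type='raw'/><source file='{{.DiskPath}}'/><target dev='hda' bus='virtio'/></disk>
    <interface type='network'><source network='{{.Network}}'/><model type='virtio'/></interface>
  </devices>
</domain>`

func main() {
	cfg := domainConfig{
		Name:      "addons-823099",
		MemoryMiB: 4000,
		VCPUs:     2,
		ISOPath:   "/path/to/boot2docker.iso",
		DiskPath:  "/path/to/addons-823099.rawdisk",
		Network:   "mk-addons-823099",
	}
	f, err := os.CreateTemp("", "domain-*.xml")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	if err := template.Must(template.New("domain").Parse(domainXML)).Execute(f, cfg); err != nil {
		panic(err)
	}
	f.Close()
	// "virsh define" registers the domain with libvirt, mirroring the
	// "define libvirt domain using xml" step in the log above.
	out, err := exec.Command("virsh", "define", f.Name()).CombinedOutput()
	if err != nil {
		panic(string(out))
	}
}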
	I0923 23:38:23.640809   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:76:74:e7 in network default
	I0923 23:38:23.641513   15521 main.go:141] libmachine: (addons-823099) Ensuring networks are active...
	I0923 23:38:23.641533   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:23.642154   15521 main.go:141] libmachine: (addons-823099) Ensuring network default is active
	I0923 23:38:23.642583   15521 main.go:141] libmachine: (addons-823099) Ensuring network mk-addons-823099 is active
	I0923 23:38:23.643027   15521 main.go:141] libmachine: (addons-823099) Getting domain xml...
	I0923 23:38:23.643677   15521 main.go:141] libmachine: (addons-823099) Creating domain...
	I0923 23:38:25.091232   15521 main.go:141] libmachine: (addons-823099) Waiting to get IP...
	I0923 23:38:25.092030   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:25.092547   15521 main.go:141] libmachine: (addons-823099) DBG | unable to find current IP address of domain addons-823099 in network mk-addons-823099
	I0923 23:38:25.092567   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:25.092528   15543 retry.go:31] will retry after 241.454266ms: waiting for machine to come up
	I0923 23:38:25.337249   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:25.337719   15521 main.go:141] libmachine: (addons-823099) DBG | unable to find current IP address of domain addons-823099 in network mk-addons-823099
	I0923 23:38:25.337739   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:25.337668   15543 retry.go:31] will retry after 317.338732ms: waiting for machine to come up
	I0923 23:38:25.656076   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:25.656565   15521 main.go:141] libmachine: (addons-823099) DBG | unable to find current IP address of domain addons-823099 in network mk-addons-823099
	I0923 23:38:25.656591   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:25.656511   15543 retry.go:31] will retry after 326.274636ms: waiting for machine to come up
	I0923 23:38:25.984000   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:25.984436   15521 main.go:141] libmachine: (addons-823099) DBG | unable to find current IP address of domain addons-823099 in network mk-addons-823099
	I0923 23:38:25.984458   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:25.984397   15543 retry.go:31] will retry after 437.832088ms: waiting for machine to come up
	I0923 23:38:26.424106   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:26.424634   15521 main.go:141] libmachine: (addons-823099) DBG | unable to find current IP address of domain addons-823099 in network mk-addons-823099
	I0923 23:38:26.424656   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:26.424551   15543 retry.go:31] will retry after 668.976748ms: waiting for machine to come up
	I0923 23:38:27.095408   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:27.095943   15521 main.go:141] libmachine: (addons-823099) DBG | unable to find current IP address of domain addons-823099 in network mk-addons-823099
	I0923 23:38:27.095968   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:27.095910   15543 retry.go:31] will retry after 748.393255ms: waiting for machine to come up
	I0923 23:38:27.845915   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:27.846277   15521 main.go:141] libmachine: (addons-823099) DBG | unable to find current IP address of domain addons-823099 in network mk-addons-823099
	I0923 23:38:27.846348   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:27.846252   15543 retry.go:31] will retry after 761.156246ms: waiting for machine to come up
	I0923 23:38:28.608811   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:28.609268   15521 main.go:141] libmachine: (addons-823099) DBG | unable to find current IP address of domain addons-823099 in network mk-addons-823099
	I0923 23:38:28.609298   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:28.609221   15543 retry.go:31] will retry after 1.011775453s: waiting for machine to come up
	I0923 23:38:29.622384   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:29.622840   15521 main.go:141] libmachine: (addons-823099) DBG | unable to find current IP address of domain addons-823099 in network mk-addons-823099
	I0923 23:38:29.622873   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:29.622758   15543 retry.go:31] will retry after 1.842457552s: waiting for machine to come up
	I0923 23:38:31.467098   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:31.467569   15521 main.go:141] libmachine: (addons-823099) DBG | unable to find current IP address of domain addons-823099 in network mk-addons-823099
	I0923 23:38:31.467589   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:31.467500   15543 retry.go:31] will retry after 1.843110258s: waiting for machine to come up
	I0923 23:38:33.312780   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:33.313247   15521 main.go:141] libmachine: (addons-823099) DBG | unable to find current IP address of domain addons-823099 in network mk-addons-823099
	I0923 23:38:33.313274   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:33.313210   15543 retry.go:31] will retry after 1.888655031s: waiting for machine to come up
	I0923 23:38:35.204154   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:35.204555   15521 main.go:141] libmachine: (addons-823099) DBG | unable to find current IP address of domain addons-823099 in network mk-addons-823099
	I0923 23:38:35.204580   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:35.204514   15543 retry.go:31] will retry after 2.870740222s: waiting for machine to come up
	I0923 23:38:38.077027   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:38.077558   15521 main.go:141] libmachine: (addons-823099) DBG | unable to find current IP address of domain addons-823099 in network mk-addons-823099
	I0923 23:38:38.077587   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:38.077506   15543 retry.go:31] will retry after 3.119042526s: waiting for machine to come up
	I0923 23:38:41.200776   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:41.201175   15521 main.go:141] libmachine: (addons-823099) DBG | unable to find current IP address of domain addons-823099 in network mk-addons-823099
	I0923 23:38:41.201216   15521 main.go:141] libmachine: (addons-823099) DBG | I0923 23:38:41.201127   15543 retry.go:31] will retry after 3.936049816s: waiting for machine to come up
	I0923 23:38:45.138385   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:45.138867   15521 main.go:141] libmachine: (addons-823099) Found IP for machine: 192.168.39.29
	I0923 23:38:45.138888   15521 main.go:141] libmachine: (addons-823099) Reserving static IP address...
	I0923 23:38:45.138902   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has current primary IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:45.139282   15521 main.go:141] libmachine: (addons-823099) DBG | unable to find host DHCP lease matching {name: "addons-823099", mac: "52:54:00:15:a7:77", ip: "192.168.39.29"} in network mk-addons-823099
	I0923 23:38:45.213621   15521 main.go:141] libmachine: (addons-823099) Reserved static IP address: 192.168.39.29
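The "will retry after ..." lines above are a poll loop with growing, jittered delays until the guest picks up a DHCP lease. A minimal Go sketch of that pattern, assuming a placeholder lookupIP callback in place of the real lease lookup:

package main

import (
	"fmt"
	"log"
	"math/rand"
	"time"
)

// waitForIP polls lookupIP until it reports an address or the deadline
// passes, sleeping a jittered, growing interval between attempts.
func waitForIP(lookupIP func() (string, bool), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, ok := lookupIP(); ok {
			return ip, nil
		}
		// Add jitter so several waiters don't retry in lockstep.
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
		log.Printf("will retry after %v: waiting for machine to come up", sleep)
		time.Sleep(sleep)
		if backoff < 4*time.Second {
			backoff *= 2
		}
	}
	return "", fmt.Errorf("no IP after %v", timeout)
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, bool) {
		attempts++
		return "192.168.39.29", attempts > 3 // stand-in for a DHCP lease lookup
	}, time.Minute)
	fmt.Println(ip, err)
}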
	I0923 23:38:45.213668   15521 main.go:141] libmachine: (addons-823099) DBG | Getting to WaitForSSH function...
	I0923 23:38:45.213678   15521 main.go:141] libmachine: (addons-823099) Waiting for SSH to be available...
	I0923 23:38:45.215779   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:45.216179   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:minikube Clientid:01:52:54:00:15:a7:77}
	I0923 23:38:45.216202   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:45.216401   15521 main.go:141] libmachine: (addons-823099) DBG | Using SSH client type: external
	I0923 23:38:45.216423   15521 main.go:141] libmachine: (addons-823099) DBG | Using SSH private key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa (-rw-------)
	I0923 23:38:45.216459   15521 main.go:141] libmachine: (addons-823099) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.29 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0923 23:38:45.216477   15521 main.go:141] libmachine: (addons-823099) DBG | About to run SSH command:
	I0923 23:38:45.216493   15521 main.go:141] libmachine: (addons-823099) DBG | exit 0
	I0923 23:38:45.348718   15521 main.go:141] libmachine: (addons-823099) DBG | SSH cmd err, output: <nil>: 
	I0923 23:38:45.349048   15521 main.go:141] libmachine: (addons-823099) KVM machine creation complete!
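SSH readiness is probed by running a no-op command ("exit 0") through an external ssh client with non-interactive options, as logged above. A hedged Go sketch of the same probe; the address, key path, and retry budget are illustrative:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady runs "exit 0" over ssh with the same kind of non-interactive
// options seen in the log; key path and address are placeholders.
func sshReady(addr, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "PasswordAuthentication=no",
		"-i", keyPath,
		"docker@"+addr,
		"exit 0")
	return cmd.Run() == nil
}

func main() {
	for i := 0; i < 30; i++ {
		if sshReady("192.168.39.29", "/path/to/id_rsa") {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}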
	I0923 23:38:45.349355   15521 main.go:141] libmachine: (addons-823099) Calling .GetConfigRaw
	I0923 23:38:45.350006   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:38:45.350193   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:38:45.350362   15521 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0923 23:38:45.350380   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:38:45.351912   15521 main.go:141] libmachine: Detecting operating system of created instance...
	I0923 23:38:45.351931   15521 main.go:141] libmachine: Waiting for SSH to be available...
	I0923 23:38:45.351940   15521 main.go:141] libmachine: Getting to WaitForSSH function...
	I0923 23:38:45.351949   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:38:45.354650   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:45.355037   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:38:45.355057   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:45.355224   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:38:45.355434   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:38:45.355578   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:38:45.355729   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:38:45.355866   15521 main.go:141] libmachine: Using SSH client type: native
	I0923 23:38:45.356038   15521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0923 23:38:45.356049   15521 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0923 23:38:45.463579   15521 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 23:38:45.463613   15521 main.go:141] libmachine: Detecting the provisioner...
	I0923 23:38:45.463626   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:38:45.466205   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:45.466613   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:38:45.466660   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:45.466829   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:38:45.466991   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:38:45.467178   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:38:45.467465   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:38:45.467645   15521 main.go:141] libmachine: Using SSH client type: native
	I0923 23:38:45.467822   15521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0923 23:38:45.467833   15521 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0923 23:38:45.576852   15521 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0923 23:38:45.576941   15521 main.go:141] libmachine: found compatible host: buildroot
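The provisioner is detected from the key=value pairs returned by "cat /etc/os-release". A small Go sketch of that parsing step, using the Buildroot output above as sample input (the helper name is illustrative):

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns the key=value contents of /etc/os-release into a map,
// which is enough to recognise the Buildroot guest shown in the log.
func parseOSRelease(contents string) map[string]string {
	out := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(contents))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		out[k] = strings.Trim(v, `"`)
	}
	return out
}

func main() {
	sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	info := parseOSRelease(sample)
	fmt.Println("detected provisioner:", info["ID"], info["VERSION_ID"])
}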
	I0923 23:38:45.576956   15521 main.go:141] libmachine: Provisioning with buildroot...
	I0923 23:38:45.576964   15521 main.go:141] libmachine: (addons-823099) Calling .GetMachineName
	I0923 23:38:45.577226   15521 buildroot.go:166] provisioning hostname "addons-823099"
	I0923 23:38:45.577248   15521 main.go:141] libmachine: (addons-823099) Calling .GetMachineName
	I0923 23:38:45.577399   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:38:45.579859   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:45.580371   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:38:45.580404   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:45.580552   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:38:45.580721   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:38:45.580878   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:38:45.581030   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:38:45.581194   15521 main.go:141] libmachine: Using SSH client type: native
	I0923 23:38:45.581377   15521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0923 23:38:45.581388   15521 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-823099 && echo "addons-823099" | sudo tee /etc/hostname
	I0923 23:38:45.702788   15521 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-823099
	
	I0923 23:38:45.702814   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:38:45.706046   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:45.706466   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:38:45.706498   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:45.706674   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:38:45.706841   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:38:45.706992   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:38:45.707098   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:38:45.707259   15521 main.go:141] libmachine: Using SSH client type: native
	I0923 23:38:45.707426   15521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0923 23:38:45.707442   15521 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-823099' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-823099/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-823099' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 23:38:45.824404   15521 main.go:141] libmachine: SSH cmd err, output: <nil>: 
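The hostname step sets /etc/hostname and then makes sure /etc/hosts carries a matching 127.0.1.1 entry, as the shell snippet above shows. A Go sketch of the same /etc/hosts logic, operating on the file contents as a string (function name and sample input are illustrative):

package main

import (
	"fmt"
	"strings"
)

// ensureHostsEntry reproduces the shell logic above: if no line already maps
// the hostname, rewrite an existing 127.0.1.1 line or append a new one.
func ensureHostsEntry(hosts, hostname string) string {
	lines := strings.Split(hosts, "\n")
	for _, l := range lines {
		if strings.HasSuffix(strings.TrimSpace(l), " "+hostname) ||
			strings.HasSuffix(strings.TrimSpace(l), "\t"+hostname) {
			return hosts // already present
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(strings.TrimSpace(l), "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname
			return strings.Join(lines, "\n")
		}
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + hostname + "\n"
}

func main() {
	hosts := "127.0.0.1 localhost\n"
	fmt.Print(ensureHostsEntry(hosts, "addons-823099"))
}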
	I0923 23:38:45.824467   15521 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19696-7623/.minikube CaCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19696-7623/.minikube}
	I0923 23:38:45.824483   15521 buildroot.go:174] setting up certificates
	I0923 23:38:45.824492   15521 provision.go:84] configureAuth start
	I0923 23:38:45.824500   15521 main.go:141] libmachine: (addons-823099) Calling .GetMachineName
	I0923 23:38:45.824784   15521 main.go:141] libmachine: (addons-823099) Calling .GetIP
	I0923 23:38:45.827604   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:45.827981   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:38:45.828003   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:45.828166   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:38:45.830661   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:45.831054   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:38:45.831074   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:45.831227   15521 provision.go:143] copyHostCerts
	I0923 23:38:45.831320   15521 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem (1082 bytes)
	I0923 23:38:45.831457   15521 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem (1123 bytes)
	I0923 23:38:45.831538   15521 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem (1679 bytes)
	I0923 23:38:45.831629   15521 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem org=jenkins.addons-823099 san=[127.0.0.1 192.168.39.29 addons-823099 localhost minikube]
	I0923 23:38:45.920692   15521 provision.go:177] copyRemoteCerts
	I0923 23:38:45.920769   15521 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 23:38:45.920791   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:38:45.923583   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:45.923986   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:38:45.924002   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:45.924356   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:38:45.924566   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:38:45.924832   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:38:45.924985   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	I0923 23:38:46.010588   15521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0923 23:38:46.034096   15521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 23:38:46.056758   15521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0923 23:38:46.081040   15521 provision.go:87] duration metric: took 256.535012ms to configureAuth
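configureAuth copies the CA material and generates a server certificate whose SANs cover the machine IP, hostname, localhost and minikube. A simplified, self-signed Go sketch of issuing such a certificate with the standard library; minikube actually signs it with its own CA, so treat this only as an illustration of the SAN handling:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-823099"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the "generating server cert ... san=[...]" line above.
		DNSNames:    []string{"addons-823099", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.29")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}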
	I0923 23:38:46.081074   15521 buildroot.go:189] setting minikube options for container-runtime
	I0923 23:38:46.081315   15521 config.go:182] Loaded profile config "addons-823099": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 23:38:46.081416   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:38:46.084885   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:46.085669   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:38:46.085696   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:46.086110   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:38:46.086464   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:38:46.086680   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:38:46.086852   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:38:46.087064   15521 main.go:141] libmachine: Using SSH client type: native
	I0923 23:38:46.087258   15521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0923 23:38:46.087278   15521 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0923 23:38:46.317743   15521 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0923 23:38:46.317769   15521 main.go:141] libmachine: Checking connection to Docker...
	I0923 23:38:46.317777   15521 main.go:141] libmachine: (addons-823099) Calling .GetURL
	I0923 23:38:46.319030   15521 main.go:141] libmachine: (addons-823099) DBG | Using libvirt version 6000000
	I0923 23:38:46.321409   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:46.321779   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:38:46.321804   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:46.321996   15521 main.go:141] libmachine: Docker is up and running!
	I0923 23:38:46.322104   15521 main.go:141] libmachine: Reticulating splines...
	I0923 23:38:46.322116   15521 client.go:171] duration metric: took 23.37599828s to LocalClient.Create
	I0923 23:38:46.322150   15521 start.go:167] duration metric: took 23.376076398s to libmachine.API.Create "addons-823099"
	I0923 23:38:46.322166   15521 start.go:293] postStartSetup for "addons-823099" (driver="kvm2")
	I0923 23:38:46.322180   15521 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 23:38:46.322208   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:38:46.322508   15521 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 23:38:46.322578   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:38:46.324896   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:46.325318   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:38:46.325337   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:46.325528   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:38:46.325723   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:38:46.325872   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:38:46.326059   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	I0923 23:38:46.410536   15521 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 23:38:46.414783   15521 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 23:38:46.414821   15521 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/addons for local assets ...
	I0923 23:38:46.414912   15521 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/files for local assets ...
	I0923 23:38:46.414938   15521 start.go:296] duration metric: took 92.765547ms for postStartSetup
	I0923 23:38:46.414968   15521 main.go:141] libmachine: (addons-823099) Calling .GetConfigRaw
	I0923 23:38:46.415530   15521 main.go:141] libmachine: (addons-823099) Calling .GetIP
	I0923 23:38:46.418325   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:46.418685   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:38:46.418723   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:46.418908   15521 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/config.json ...
	I0923 23:38:46.419089   15521 start.go:128] duration metric: took 23.492942575s to createHost
	I0923 23:38:46.419111   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:38:46.421225   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:46.421516   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:38:46.421547   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:46.421645   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:38:46.421824   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:38:46.421967   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:38:46.422177   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:38:46.422321   15521 main.go:141] libmachine: Using SSH client type: native
	I0923 23:38:46.422531   15521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0923 23:38:46.422544   15521 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 23:38:46.533050   15521 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727134726.509696447
	
	I0923 23:38:46.533076   15521 fix.go:216] guest clock: 1727134726.509696447
	I0923 23:38:46.533086   15521 fix.go:229] Guest: 2024-09-23 23:38:46.509696447 +0000 UTC Remote: 2024-09-23 23:38:46.419100225 +0000 UTC m=+23.595027380 (delta=90.596222ms)
	I0923 23:38:46.533110   15521 fix.go:200] guest clock delta is within tolerance: 90.596222ms
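The guest clock check runs "date +%s.%N" on the machine and compares it with the host clock, accepting a small delta. A Go sketch of that comparison; the ssh invocation, address, and key path are illustrative, and sub-microsecond precision is not preserved:

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta reads the guest's clock over ssh and returns the absolute
// difference from the host clock, mirroring the tolerance check above.
func guestClockDelta(addr, keyPath string) (time.Duration, error) {
	out, err := exec.Command("ssh", "-i", keyPath,
		"-o", "StrictHostKeyChecking=no",
		"docker@"+addr, "date +%s.%N").Output()
	if err != nil {
		return 0, err
	}
	secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, nil
}

func main() {
	d, err := guestClockDelta("192.168.39.29", "/path/to/id_rsa")
	if err != nil {
		fmt.Println("clock check failed:", err)
		return
	}
	fmt.Printf("guest clock delta: %v (compare against a tolerance such as 2s)\n", d)
}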
	I0923 23:38:46.533117   15521 start.go:83] releasing machines lock for "addons-823099", held for 23.607112252s
	I0923 23:38:46.533143   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:38:46.533469   15521 main.go:141] libmachine: (addons-823099) Calling .GetIP
	I0923 23:38:46.535967   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:46.536214   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:38:46.536242   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:46.536438   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:38:46.536933   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:38:46.537122   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:38:46.537236   15521 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 23:38:46.537290   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:38:46.537326   15521 ssh_runner.go:195] Run: cat /version.json
	I0923 23:38:46.537344   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:38:46.540050   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:46.540313   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:46.540468   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:38:46.540495   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:46.540659   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:38:46.540748   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:38:46.540775   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:46.540846   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:38:46.540921   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:38:46.540970   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:38:46.541076   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:38:46.541111   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	I0923 23:38:46.541201   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:38:46.541342   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	I0923 23:38:46.662512   15521 ssh_runner.go:195] Run: systemctl --version
	I0923 23:38:46.668932   15521 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0923 23:38:46.827889   15521 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 23:38:46.833604   15521 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 23:38:46.833746   15521 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 23:38:46.850062   15521 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 23:38:46.850089   15521 start.go:495] detecting cgroup driver to use...
	I0923 23:38:46.850148   15521 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 23:38:46.867425   15521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 23:38:46.882361   15521 docker.go:217] disabling cri-docker service (if available) ...
	I0923 23:38:46.882419   15521 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 23:38:46.897323   15521 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 23:38:46.911805   15521 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 23:38:47.036999   15521 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 23:38:47.203688   15521 docker.go:233] disabling docker service ...
	I0923 23:38:47.203767   15521 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 23:38:47.219064   15521 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 23:38:47.231715   15521 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 23:38:47.365365   15521 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 23:38:47.495284   15521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 23:38:47.508723   15521 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 23:38:47.526801   15521 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0923 23:38:47.526867   15521 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 23:38:47.536943   15521 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0923 23:38:47.537001   15521 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 23:38:47.547198   15521 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 23:38:47.557182   15521 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 23:38:47.567529   15521 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 23:38:47.578959   15521 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 23:38:47.589877   15521 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 23:38:47.608254   15521 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 23:38:47.618495   15521 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 23:38:47.627787   15521 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 23:38:47.627862   15521 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 23:38:47.640795   15521 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 23:38:47.650160   15521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 23:38:47.773450   15521 ssh_runner.go:195] Run: sudo systemctl restart crio
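The CRI-O drop-in is adjusted with a series of sed edits (pause image, cgroup manager, conmon cgroup, default sysctls) before the service is restarted. A Go sketch of the same kind of key rewrite done in-process on the config text; it ignores TOML sections and is only an illustration, not the actual sed pipeline:

package main

import (
	"fmt"
	"strings"
)

// setCrioOption rewrites (or appends) a `key = "value"` line in a CRI-O
// drop-in, roughly what the sed edits against 02-crio.conf do above.
func setCrioOption(conf, key, value string) string {
	lines := strings.Split(conf, "\n")
	for i, line := range lines {
		trimmed := strings.TrimSpace(line)
		if strings.HasPrefix(trimmed, key+" ") || strings.HasPrefix(trimmed, key+"=") {
			lines[i] = fmt.Sprintf("%s = %q", key, value)
			return strings.Join(lines, "\n")
		}
	}
	// Simplification: a real edit would append inside the right TOML section.
	return conf + fmt.Sprintf("\n%s = %q", key, value)
}

func main() {
	conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n[crio.runtime]\ncgroup_manager = \"systemd\""
	conf = setCrioOption(conf, "pause_image", "registry.k8s.io/pause:3.10")
	conf = setCrioOption(conf, "cgroup_manager", "cgroupfs")
	fmt.Println(conf)
}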
	I0923 23:38:47.870212   15521 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0923 23:38:47.870328   15521 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0923 23:38:47.875329   15521 start.go:563] Will wait 60s for crictl version
	I0923 23:38:47.875422   15521 ssh_runner.go:195] Run: which crictl
	I0923 23:38:47.879286   15521 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 23:38:47.916386   15521 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0923 23:38:47.916536   15521 ssh_runner.go:195] Run: crio --version
	I0923 23:38:47.943232   15521 ssh_runner.go:195] Run: crio --version
	I0923 23:38:47.973111   15521 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0923 23:38:47.974418   15521 main.go:141] libmachine: (addons-823099) Calling .GetIP
	I0923 23:38:47.977389   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:47.977726   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:38:47.977771   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:38:47.977950   15521 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0923 23:38:47.982681   15521 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 23:38:47.995735   15521 kubeadm.go:883] updating cluster {Name:addons-823099 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-823099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.29 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 23:38:47.995872   15521 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 23:38:47.995937   15521 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 23:38:48.026187   15521 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0923 23:38:48.026255   15521 ssh_runner.go:195] Run: which lz4
	I0923 23:38:48.029934   15521 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0923 23:38:48.033681   15521 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0923 23:38:48.033709   15521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0923 23:38:49.244831   15521 crio.go:462] duration metric: took 1.21491674s to copy over tarball
	I0923 23:38:49.244910   15521 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0923 23:38:51.408420   15521 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.163482226s)
	I0923 23:38:51.408450   15521 crio.go:469] duration metric: took 2.163580195s to extract the tarball
	I0923 23:38:51.408457   15521 ssh_runner.go:146] rm: /preloaded.tar.lz4
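When the preloaded image tarball is available it is copied to the node and unpacked with "tar -I lz4" into /var, as above. A Go sketch of that extraction step run locally via os/exec; on the real node it runs over ssh with sudo, and the paths are illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload mirrors the preload step above: if the tarball is present,
// unpack it into the container-runtime store using tar with lz4 compression.
func extractPreload(tarball, dest string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("preload not found, images must be pulled instead: %w", err)
	}
	cmd := exec.Command("tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dest, "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Println(err)
	}
}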
	I0923 23:38:51.445104   15521 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 23:38:51.484376   15521 crio.go:514] all images are preloaded for cri-o runtime.
	I0923 23:38:51.484401   15521 cache_images.go:84] Images are preloaded, skipping loading
	I0923 23:38:51.484409   15521 kubeadm.go:934] updating node { 192.168.39.29 8443 v1.31.1 crio true true} ...
	I0923 23:38:51.484499   15521 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-823099 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.29
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-823099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 23:38:51.484557   15521 ssh_runner.go:195] Run: crio config
	I0923 23:38:51.538806   15521 cni.go:84] Creating CNI manager for ""
	I0923 23:38:51.538828   15521 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0923 23:38:51.538838   15521 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 23:38:51.538859   15521 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.29 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-823099 NodeName:addons-823099 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.29"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.29 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 23:38:51.538985   15521 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.29
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-823099"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.29
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.29"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
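	As an aside, a config of the shape dumped above (the file minikube later passes to kubeadm init, see the init invocation further down) can be sanity-checked by hand without touching cluster state, assuming a local copy saved as kubeadm.yaml (hypothetical filename) and a kubeadm binary on PATH:
	
	# Dry-run validates the generated configuration without creating any cluster state.
	sudo kubeadm init --dry-run --config kubeadm.yaml
	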
	I0923 23:38:51.539038   15521 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 23:38:51.548496   15521 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 23:38:51.548563   15521 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 23:38:51.557551   15521 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0923 23:38:51.574810   15521 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 23:38:51.590461   15521 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0923 23:38:51.605904   15521 ssh_runner.go:195] Run: grep 192.168.39.29	control-plane.minikube.internal$ /etc/hosts
	I0923 23:38:51.609379   15521 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.29	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 23:38:51.620067   15521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 23:38:51.746991   15521 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 23:38:51.764430   15521 certs.go:68] Setting up /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099 for IP: 192.168.39.29
	I0923 23:38:51.764452   15521 certs.go:194] generating shared ca certs ...
	I0923 23:38:51.764479   15521 certs.go:226] acquiring lock for ca certs: {Name:mk91de42c259e96c17f08dd58c70c00821bd0735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:51.764627   15521 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key
	I0923 23:38:51.827925   15521 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt ...
	I0923 23:38:51.827961   15521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt: {Name:mk7bce46408bad28fa4c4ad82afe9d6bd10e26b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:51.828169   15521 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key ...
	I0923 23:38:51.828185   15521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key: {Name:mkfd724d8b1e5c4e28f581332eb148d4cdbcd3bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:51.828303   15521 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key
	I0923 23:38:51.937978   15521 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt ...
	I0923 23:38:51.938011   15521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt: {Name:mka59daefa132c631d082c68c6d4bee6c31dbed0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:51.938201   15521 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key ...
	I0923 23:38:51.938214   15521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key: {Name:mk74fd28ca9ebe05bacfd634b928864a1a7ce292 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:51.938314   15521 certs.go:256] generating profile certs ...
	I0923 23:38:51.938367   15521 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.key
	I0923 23:38:51.938381   15521 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.crt with IP's: []
	I0923 23:38:52.195361   15521 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.crt ...
	I0923 23:38:52.195393   15521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.crt: {Name:mkf53b392cc89a16e12244564032d9b45154080d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:52.195578   15521 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.key ...
	I0923 23:38:52.195591   15521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.key: {Name:mk9b41db6a73a405e689e669580e343c2766a447 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:52.195711   15521 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/apiserver.key.7600cdb9
	I0923 23:38:52.195731   15521 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/apiserver.crt.7600cdb9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.29]
	I0923 23:38:52.295200   15521 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/apiserver.crt.7600cdb9 ...
	I0923 23:38:52.295231   15521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/apiserver.crt.7600cdb9: {Name:mkae17567f7ac3bcae8f339aebdd9969213784de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:52.295413   15521 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/apiserver.key.7600cdb9 ...
	I0923 23:38:52.295433   15521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/apiserver.key.7600cdb9: {Name:mk496cd6f593f9c72852d6a78b567d84d704b066 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:52.295528   15521 certs.go:381] copying /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/apiserver.crt.7600cdb9 -> /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/apiserver.crt
	I0923 23:38:52.295617   15521 certs.go:385] copying /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/apiserver.key.7600cdb9 -> /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/apiserver.key
	I0923 23:38:52.295677   15521 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/proxy-client.key
	I0923 23:38:52.295695   15521 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/proxy-client.crt with IP's: []
	I0923 23:38:52.353357   15521 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/proxy-client.crt ...
	I0923 23:38:52.353388   15521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/proxy-client.crt: {Name:mke38bbbfeef7cd2c66dad6779df3ba32d8b0e4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:52.353569   15521 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/proxy-client.key ...
	I0923 23:38:52.353582   15521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/proxy-client.key: {Name:mka62603d541b89ee9d7c4fc26d23c4522e47be4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:52.353765   15521 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem (1675 bytes)
	I0923 23:38:52.353806   15521 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem (1082 bytes)
	I0923 23:38:52.353833   15521 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem (1123 bytes)
	I0923 23:38:52.353855   15521 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem (1679 bytes)
	I0923 23:38:52.354427   15521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 23:38:52.379337   15521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 23:38:52.400882   15521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 23:38:52.424525   15521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0923 23:38:52.450323   15521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0923 23:38:52.477687   15521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 23:38:52.499751   15521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 23:38:52.521727   15521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 23:38:52.543557   15521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 23:38:52.565278   15521 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 23:38:52.581109   15521 ssh_runner.go:195] Run: openssl version
	I0923 23:38:52.586569   15521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 23:38:52.596572   15521 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 23:38:52.600599   15521 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0923 23:38:52.600654   15521 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 23:38:52.606001   15521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 23:38:52.615760   15521 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 23:38:52.619451   15521 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 23:38:52.619508   15521 kubeadm.go:392] StartCluster: {Name:addons-823099 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-823099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.29 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 23:38:52.619583   15521 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0923 23:38:52.620006   15521 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0923 23:38:52.654320   15521 cri.go:89] found id: ""
	I0923 23:38:52.654386   15521 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 23:38:52.663817   15521 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 23:38:52.673074   15521 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 23:38:52.681948   15521 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 23:38:52.681974   15521 kubeadm.go:157] found existing configuration files:
	
	I0923 23:38:52.682026   15521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 23:38:52.690360   15521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 23:38:52.690418   15521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 23:38:52.698969   15521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 23:38:52.707269   15521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 23:38:52.707357   15521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 23:38:52.716380   15521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 23:38:52.725235   15521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 23:38:52.725319   15521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 23:38:52.734575   15521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 23:38:52.743504   15521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 23:38:52.743572   15521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 23:38:52.752994   15521 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0923 23:38:52.803786   15521 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0923 23:38:52.803907   15521 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 23:38:52.902853   15521 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 23:38:52.903001   15521 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 23:38:52.903126   15521 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 23:38:52.909824   15521 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 23:38:52.911676   15521 out.go:235]   - Generating certificates and keys ...
	I0923 23:38:52.912753   15521 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 23:38:52.912873   15521 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 23:38:53.248886   15521 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0923 23:38:53.341826   15521 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0923 23:38:53.485454   15521 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0923 23:38:53.623967   15521 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0923 23:38:53.679532   15521 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0923 23:38:53.679721   15521 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-823099 localhost] and IPs [192.168.39.29 127.0.0.1 ::1]
	I0923 23:38:53.905840   15521 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0923 23:38:53.906024   15521 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-823099 localhost] and IPs [192.168.39.29 127.0.0.1 ::1]
	I0923 23:38:54.051813   15521 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0923 23:38:54.395310   15521 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0923 23:38:54.735052   15521 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0923 23:38:54.735299   15521 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 23:38:54.847419   15521 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 23:38:54.936586   15521 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0923 23:38:55.060632   15521 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 23:38:55.214060   15521 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 23:38:55.303678   15521 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 23:38:55.304286   15521 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 23:38:55.306790   15521 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 23:38:55.308801   15521 out.go:235]   - Booting up control plane ...
	I0923 23:38:55.308940   15521 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 23:38:55.309057   15521 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 23:38:55.309138   15521 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 23:38:55.324842   15521 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 23:38:55.330701   15521 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 23:38:55.330768   15521 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 23:38:55.470043   15521 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0923 23:38:55.470158   15521 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0923 23:38:56.470778   15521 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001582152s
	I0923 23:38:56.470872   15521 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0923 23:39:01.969265   15521 kubeadm.go:310] [api-check] The API server is healthy after 5.501475075s
	I0923 23:39:01.981867   15521 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 23:39:02.004452   15521 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 23:39:02.039983   15521 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 23:39:02.040235   15521 kubeadm.go:310] [mark-control-plane] Marking the node addons-823099 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 23:39:02.057479   15521 kubeadm.go:310] [bootstrap-token] Using token: fyz7kl.eyjwn42xmcr354pj
	I0923 23:39:02.059006   15521 out.go:235]   - Configuring RBAC rules ...
	I0923 23:39:02.059157   15521 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 23:39:02.076960   15521 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 23:39:02.086257   15521 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 23:39:02.092000   15521 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 23:39:02.096548   15521 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 23:39:02.102638   15521 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 23:39:02.377281   15521 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 23:39:02.807346   15521 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 23:39:03.376529   15521 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 23:39:03.377848   15521 kubeadm.go:310] 
	I0923 23:39:03.377926   15521 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 23:39:03.377937   15521 kubeadm.go:310] 
	I0923 23:39:03.378021   15521 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 23:39:03.378030   15521 kubeadm.go:310] 
	I0923 23:39:03.378058   15521 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 23:39:03.378126   15521 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 23:39:03.378208   15521 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 23:39:03.378228   15521 kubeadm.go:310] 
	I0923 23:39:03.378321   15521 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 23:39:03.378330   15521 kubeadm.go:310] 
	I0923 23:39:03.378390   15521 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 23:39:03.378400   15521 kubeadm.go:310] 
	I0923 23:39:03.378499   15521 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 23:39:03.378600   15521 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 23:39:03.378669   15521 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 23:39:03.378680   15521 kubeadm.go:310] 
	I0923 23:39:03.378788   15521 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 23:39:03.378897   15521 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 23:39:03.378907   15521 kubeadm.go:310] 
	I0923 23:39:03.378995   15521 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fyz7kl.eyjwn42xmcr354pj \
	I0923 23:39:03.379107   15521 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2d4dd9f37d73db6513005d0e16c5a35989d3b102eed8cac0bf67b360d3396aa8 \
	I0923 23:39:03.379129   15521 kubeadm.go:310] 	--control-plane 
	I0923 23:39:03.379133   15521 kubeadm.go:310] 
	I0923 23:39:03.379245   15521 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 23:39:03.379266   15521 kubeadm.go:310] 
	I0923 23:39:03.379389   15521 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fyz7kl.eyjwn42xmcr354pj \
	I0923 23:39:03.379523   15521 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2d4dd9f37d73db6513005d0e16c5a35989d3b102eed8cac0bf67b360d3396aa8 
	I0923 23:39:03.380043   15521 kubeadm.go:310] W0923 23:38:52.785015     814 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 23:39:03.380394   15521 kubeadm.go:310] W0923 23:38:52.785716     814 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 23:39:03.380489   15521 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0923 23:39:03.380508   15521 cni.go:84] Creating CNI manager for ""
	I0923 23:39:03.380560   15521 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0923 23:39:03.383452   15521 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0923 23:39:03.384682   15521 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0923 23:39:03.397094   15521 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0923 23:39:03.417722   15521 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 23:39:03.417811   15521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 23:39:03.417847   15521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-823099 minikube.k8s.io/updated_at=2024_09_23T23_39_03_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c minikube.k8s.io/name=addons-823099 minikube.k8s.io/primary=true
	I0923 23:39:03.459069   15521 ops.go:34] apiserver oom_adj: -16
	I0923 23:39:03.574741   15521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 23:39:04.075852   15521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 23:39:04.575549   15521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 23:39:05.075536   15521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 23:39:05.574791   15521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 23:39:06.075455   15521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 23:39:06.575226   15521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 23:39:07.075498   15521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 23:39:07.575490   15521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 23:39:07.682573   15521 kubeadm.go:1113] duration metric: took 4.264822927s to wait for elevateKubeSystemPrivileges
	I0923 23:39:07.682604   15521 kubeadm.go:394] duration metric: took 15.063102314s to StartCluster
	I0923 23:39:07.682621   15521 settings.go:142] acquiring lock: {Name:mk2498060bfcc1414033a486e76a223de88ed325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:39:07.682743   15521 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0923 23:39:07.683441   15521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/kubeconfig: {Name:mk3b29105ccc2e600682c419fcfd45ce5d22b5fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:39:07.683700   15521 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0923 23:39:07.683729   15521 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.29 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 23:39:07.683777   15521 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0923 23:39:07.683896   15521 addons.go:69] Setting yakd=true in profile "addons-823099"
	I0923 23:39:07.683906   15521 addons.go:69] Setting default-storageclass=true in profile "addons-823099"
	I0923 23:39:07.683910   15521 addons.go:69] Setting cloud-spanner=true in profile "addons-823099"
	I0923 23:39:07.683926   15521 addons.go:69] Setting registry=true in profile "addons-823099"
	I0923 23:39:07.683932   15521 addons.go:234] Setting addon cloud-spanner=true in "addons-823099"
	I0923 23:39:07.683939   15521 addons.go:234] Setting addon registry=true in "addons-823099"
	I0923 23:39:07.683937   15521 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-823099"
	I0923 23:39:07.683936   15521 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-823099"
	I0923 23:39:07.683953   15521 config.go:182] Loaded profile config "addons-823099": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 23:39:07.683968   15521 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-823099"
	I0923 23:39:07.683981   15521 addons.go:69] Setting storage-provisioner=true in profile "addons-823099"
	I0923 23:39:07.683982   15521 addons.go:69] Setting ingress=true in profile "addons-823099"
	I0923 23:39:07.683983   15521 addons.go:69] Setting gcp-auth=true in profile "addons-823099"
	I0923 23:39:07.683992   15521 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-823099"
	I0923 23:39:07.684000   15521 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-823099"
	I0923 23:39:07.684003   15521 addons.go:69] Setting inspektor-gadget=true in profile "addons-823099"
	I0923 23:39:07.684006   15521 addons.go:69] Setting volcano=true in profile "addons-823099"
	I0923 23:39:07.684009   15521 mustload.go:65] Loading cluster: addons-823099
	I0923 23:39:07.684014   15521 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-823099"
	I0923 23:39:07.683928   15521 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-823099"
	I0923 23:39:07.683992   15521 addons.go:234] Setting addon storage-provisioner=true in "addons-823099"
	I0923 23:39:07.684124   15521 host.go:66] Checking if "addons-823099" exists ...
	I0923 23:39:07.684015   15521 addons.go:234] Setting addon inspektor-gadget=true in "addons-823099"
	I0923 23:39:07.684199   15521 host.go:66] Checking if "addons-823099" exists ...
	I0923 23:39:07.684214   15521 config.go:182] Loaded profile config "addons-823099": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 23:39:07.683970   15521 host.go:66] Checking if "addons-823099" exists ...
	I0923 23:39:07.684535   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.684572   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.684595   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.684622   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.684005   15521 host.go:66] Checking if "addons-823099" exists ...
	I0923 23:39:07.683959   15521 addons.go:69] Setting ingress-dns=true in profile "addons-823099"
	I0923 23:39:07.684654   15521 addons.go:234] Setting addon ingress-dns=true in "addons-823099"
	I0923 23:39:07.684657   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.684690   15521 host.go:66] Checking if "addons-823099" exists ...
	I0923 23:39:07.684716   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.684747   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.683970   15521 host.go:66] Checking if "addons-823099" exists ...
	I0923 23:39:07.683918   15521 addons.go:234] Setting addon yakd=true in "addons-823099"
	I0923 23:39:07.684807   15521 host.go:66] Checking if "addons-823099" exists ...
	I0923 23:39:07.685044   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.685062   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.685073   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.685093   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.685134   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.683995   15521 addons.go:234] Setting addon ingress=true in "addons-823099"
	I0923 23:39:07.685161   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.685181   15521 host.go:66] Checking if "addons-823099" exists ...
	I0923 23:39:07.684602   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.684017   15521 addons.go:69] Setting metrics-server=true in profile "addons-823099"
	I0923 23:39:07.685232   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.685242   15521 addons.go:234] Setting addon metrics-server=true in "addons-823099"
	I0923 23:39:07.685264   15521 host.go:66] Checking if "addons-823099" exists ...
	I0923 23:39:07.684024   15521 host.go:66] Checking if "addons-823099" exists ...
	I0923 23:39:07.685615   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.685642   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.685796   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.685854   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.685977   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.686016   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.684027   15521 addons.go:69] Setting volumesnapshots=true in profile "addons-823099"
	I0923 23:39:07.686371   15521 addons.go:234] Setting addon volumesnapshots=true in "addons-823099"
	I0923 23:39:07.686398   15521 host.go:66] Checking if "addons-823099" exists ...
	I0923 23:39:07.684018   15521 addons.go:234] Setting addon volcano=true in "addons-823099"
	I0923 23:39:07.686673   15521 host.go:66] Checking if "addons-823099" exists ...
	I0923 23:39:07.686498   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.686778   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.684634   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.687024   15521 out.go:177] * Verifying Kubernetes components...
	I0923 23:39:07.688506   15521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 23:39:07.703440   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37923
	I0923 23:39:07.705810   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35535
	I0923 23:39:07.708733   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.708779   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.709090   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41741
	I0923 23:39:07.709229   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.709266   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.709595   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.709629   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.713224   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.713355   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.713390   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.713862   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.713881   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.714302   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.714377   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.714392   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.714451   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.714464   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.715015   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.715037   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.715432   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.715475   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.715787   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:39:07.716507   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:39:07.719153   15521 host.go:66] Checking if "addons-823099" exists ...
	I0923 23:39:07.719542   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.719578   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.720949   15521 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-823099"
	I0923 23:39:07.720998   15521 host.go:66] Checking if "addons-823099" exists ...
	I0923 23:39:07.721386   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.721432   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.735627   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38553
	I0923 23:39:07.736277   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.736638   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46739
	I0923 23:39:07.737105   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.737122   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.737510   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.738081   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.738098   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.738156   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44185
	I0923 23:39:07.739268   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.739318   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.739918   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.739959   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.740211   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.740321   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45651
	I0923 23:39:07.740861   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.740881   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.740953   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.740993   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.741352   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.741901   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.741947   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.742154   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.742613   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.742628   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.743023   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.743085   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37759
	I0923 23:39:07.743569   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.743610   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.746643   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.747874   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.747903   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.748324   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.748466   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33761
	I0923 23:39:07.748965   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.749004   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.749096   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.749726   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.749746   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.750196   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.750719   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.750754   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.758701   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46265
	I0923 23:39:07.759243   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.759784   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.759805   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.760206   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.760261   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38907
	I0923 23:39:07.761129   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.761175   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.761441   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33217
	I0923 23:39:07.761985   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.762828   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.762847   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.763324   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.763665   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:39:07.765500   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46691
	I0923 23:39:07.765573   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.766125   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.766145   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.766801   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.766864   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44061
	I0923 23:39:07.767084   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.767500   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.768285   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.768301   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.768446   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:39:07.768843   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.768866   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.768932   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.769275   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.769821   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.769867   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.771258   15521 addons.go:234] Setting addon default-storageclass=true in "addons-823099"
	I0923 23:39:07.771300   15521 host.go:66] Checking if "addons-823099" exists ...
	I0923 23:39:07.771655   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.771687   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.771922   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43353
	I0923 23:39:07.772228   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.772255   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.774448   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46391
	I0923 23:39:07.780977   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.781565   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.781590   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.781920   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.782058   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:39:07.783913   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:39:07.785056   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34087
	I0923 23:39:07.785575   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41059
	I0923 23:39:07.785629   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.786110   15521 out.go:177]   - Using image docker.io/registry:2.8.3
	I0923 23:39:07.786146   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.786320   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.786334   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.786772   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.787011   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:39:07.788584   15521 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0923 23:39:07.789007   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:39:07.789550   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.789568   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.789756   15521 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0923 23:39:07.789773   15521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0923 23:39:07.789788   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:39:07.790146   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.790662   15521 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 23:39:07.791941   15521 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 23:39:07.793680   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.793727   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.793998   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39727
	I0923 23:39:07.794008   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.794031   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33695
	I0923 23:39:07.794471   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:39:07.794493   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.794701   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:39:07.794875   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:39:07.794877   15521 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0923 23:39:07.794982   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:39:07.795069   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	I0923 23:39:07.796623   15521 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 23:39:07.796643   15521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0923 23:39:07.796662   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:39:07.798956   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38317
	I0923 23:39:07.799731   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.800110   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:39:07.800142   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.800477   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:39:07.800553   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36347
	I0923 23:39:07.801546   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.801641   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.801654   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.801712   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:39:07.801839   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:39:07.801899   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.802076   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.802095   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.802220   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.802235   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.802360   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.802376   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.802425   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.802551   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:39:07.802641   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38501
	I0923 23:39:07.802785   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.802803   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	I0923 23:39:07.802788   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.802965   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:39:07.803026   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:39:07.803767   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.803787   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.803854   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.804090   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.804743   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.804784   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.805129   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.805147   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.805173   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:39:07.805248   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.805498   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:39:07.805769   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.806098   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.806118   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.806136   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:39:07.806514   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.807108   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.806545   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.806630   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45039
	I0923 23:39:07.807369   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:39:07.807510   15521 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0923 23:39:07.808434   15521 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0923 23:39:07.808504   15521 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 23:39:07.809038   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:39:07.809300   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:07.809332   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:07.809348   15521 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 23:39:07.809359   15521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0923 23:39:07.809376   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:39:07.809986   15521 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 23:39:07.810003   15521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 23:39:07.810016   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:39:07.810062   15521 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0923 23:39:07.810069   15521 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0923 23:39:07.810078   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:39:07.811006   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:07.811042   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:07.811050   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:07.811145   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:07.811156   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:07.811952   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.812979   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.812997   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.813331   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:07.813347   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	W0923 23:39:07.813447   15521 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0923 23:39:07.813946   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.814117   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35041
	I0923 23:39:07.814661   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.814885   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.815227   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.815248   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.815430   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:39:07.815545   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.815727   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:39:07.816076   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:39:07.816315   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:39:07.816316   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:39:07.817096   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	I0923 23:39:07.817135   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.817285   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.817306   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:39:07.817432   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:39:07.817458   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.817467   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:39:07.817640   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:39:07.817797   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	I0923 23:39:07.818443   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:07.818475   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:07.818854   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:39:07.818916   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.818935   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:39:07.819103   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:39:07.819327   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:39:07.819449   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.819556   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:39:07.819704   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	I0923 23:39:07.820144   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44505
	I0923 23:39:07.821232   15521 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0923 23:39:07.822387   15521 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0923 23:39:07.822407   15521 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0923 23:39:07.822426   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:39:07.823519   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.824593   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.824617   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.825182   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.825425   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:39:07.826173   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.826824   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:39:07.826852   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.827033   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:39:07.827202   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:39:07.827342   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:39:07.827473   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	I0923 23:39:07.833317   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:39:07.834549   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40345
	I0923 23:39:07.834702   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46107
	I0923 23:39:07.835036   15521 out.go:177]   - Using image docker.io/busybox:stable
	I0923 23:39:07.835302   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.835304   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.835362   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43451
	I0923 23:39:07.835997   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.836020   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.836421   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.836527   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.836860   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:39:07.837187   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.837204   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.837294   15521 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0923 23:39:07.837615   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46247
	I0923 23:39:07.837726   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.838168   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.838186   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.838238   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:39:07.838278   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.838430   15521 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 23:39:07.838454   15521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0923 23:39:07.838486   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:39:07.838837   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:39:07.838942   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.838956   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.839318   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.839611   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:39:07.840065   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.840126   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:39:07.840224   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:39:07.840573   15521 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0923 23:39:07.841432   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:39:07.841867   15521 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0923 23:39:07.841976   15521 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0923 23:39:07.841989   15521 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0923 23:39:07.842007   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:39:07.843249   15521 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0923 23:39:07.843258   15521 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0923 23:39:07.843274   15521 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0923 23:39:07.843293   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:39:07.843539   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.844019   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:39:07.844044   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.844276   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:39:07.844626   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:39:07.844835   15521 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 23:39:07.844851   15521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0923 23:39:07.844867   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:39:07.844970   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:39:07.845115   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	I0923 23:39:07.845546   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:39:07.847193   15521 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0923 23:39:07.847689   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.848226   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.848362   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:39:07.848385   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.848554   15521 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0923 23:39:07.848568   15521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0923 23:39:07.848584   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:39:07.849188   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:39:07.849248   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.849270   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:39:07.849286   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.849314   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:39:07.849370   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:39:07.849384   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.849407   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:39:07.849452   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:39:07.849490   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:39:07.849597   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:39:07.849640   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:39:07.849646   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	I0923 23:39:07.849718   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:39:07.849850   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	I0923 23:39:07.850150   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:39:07.850314   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	I0923 23:39:07.851886   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.852209   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:39:07.852227   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.852511   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:39:07.852685   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:39:07.852836   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:39:07.852856   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45881
	I0923 23:39:07.853005   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	I0923 23:39:07.853307   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.853831   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.853845   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.854157   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.854337   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:39:07.854969   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45961
	I0923 23:39:07.855341   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:07.855829   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:07.855846   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:07.855913   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:39:07.856203   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:07.856410   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:39:07.857704   15521 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0923 23:39:07.857995   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:39:07.858210   15521 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 23:39:07.858230   15521 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 23:39:07.858247   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:39:07.859971   15521 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0923 23:39:07.860879   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.861260   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:39:07.861284   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.861453   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:39:07.861596   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:39:07.861697   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:39:07.861858   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	I0923 23:39:07.862305   15521 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0923 23:39:07.863581   15521 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0923 23:39:07.864972   15521 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0923 23:39:07.866055   15521 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0923 23:39:07.867358   15521 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0923 23:39:07.868993   15521 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0923 23:39:07.870321   15521 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0923 23:39:07.870349   15521 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0923 23:39:07.870377   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:39:07.873724   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.874117   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:39:07.874148   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:07.874293   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:39:07.874468   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:39:07.874636   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:39:07.874743   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	W0923 23:39:07.876787   15521 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:52054->192.168.39.29:22: read: connection reset by peer
	I0923 23:39:07.876819   15521 retry.go:31] will retry after 325.765673ms: ssh: handshake failed: read tcp 192.168.39.1:52054->192.168.39.29:22: read: connection reset by peer
	I0923 23:39:08.112607   15521 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0923 23:39:08.112629   15521 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0923 23:39:08.174341   15521 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 23:39:08.174422   15521 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0923 23:39:08.189231   15521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 23:39:08.223406   15521 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0923 23:39:08.223436   15521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0923 23:39:08.238226   15521 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0923 23:39:08.238253   15521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0923 23:39:08.286222   15521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 23:39:08.286427   15521 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0923 23:39:08.286456   15521 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0923 23:39:08.293938   15521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 23:39:08.304026   15521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 23:39:08.304633   15521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 23:39:08.357785   15521 node_ready.go:35] waiting up to 6m0s for node "addons-823099" to be "Ready" ...
	I0923 23:39:08.361610   15521 node_ready.go:49] node "addons-823099" has status "Ready":"True"
	I0923 23:39:08.361634   15521 node_ready.go:38] duration metric: took 3.816238ms for node "addons-823099" to be "Ready" ...
	I0923 23:39:08.361643   15521 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 23:39:08.370384   15521 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fmtkt" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:08.389666   15521 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0923 23:39:08.389694   15521 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0923 23:39:08.393171   15521 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0923 23:39:08.393188   15521 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0923 23:39:08.414092   15521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0923 23:39:08.415846   15521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 23:39:08.424751   15521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0923 23:39:08.462715   15521 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0923 23:39:08.462737   15521 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0923 23:39:08.507754   15521 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0923 23:39:08.507783   15521 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0923 23:39:08.593622   15521 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 23:39:08.593654   15521 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0923 23:39:08.629405   15521 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0923 23:39:08.629437   15521 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0923 23:39:08.632087   15521 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0923 23:39:08.632113   15521 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0923 23:39:08.661201   15521 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0923 23:39:08.661224   15521 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0923 23:39:08.691801   15521 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0923 23:39:08.691827   15521 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0923 23:39:08.714253   15521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 23:39:08.819060   15521 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0923 23:39:08.819096   15521 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0923 23:39:08.831081   15521 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0923 23:39:08.831110   15521 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0923 23:39:08.886522   15521 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0923 23:39:08.886559   15521 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0923 23:39:09.009250   15521 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0923 23:39:09.009293   15521 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0923 23:39:09.046881   15521 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0923 23:39:09.046906   15521 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0923 23:39:09.157084   15521 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0923 23:39:09.157109   15521 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0923 23:39:09.166062   15521 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0923 23:39:09.166097   15521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0923 23:39:09.267085   15521 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 23:39:09.267116   15521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0923 23:39:09.292567   15521 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0923 23:39:09.292607   15521 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0923 23:39:09.429637   15521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0923 23:39:09.445286   15521 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0923 23:39:09.445326   15521 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0923 23:39:09.492474   15521 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0923 23:39:09.492516   15521 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0923 23:39:09.565613   15521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 23:39:09.721455   15521 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 23:39:09.721493   15521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0923 23:39:09.840988   15521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 23:39:09.948899   15521 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0923 23:39:09.948926   15521 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0923 23:39:10.140834   15521 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.966375459s)
	I0923 23:39:10.140875   15521 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0923 23:39:10.141396   15521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.952129655s)
	I0923 23:39:10.141443   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:10.142827   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:10.143945   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:10.143972   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:10.143992   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:10.144008   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:10.144020   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:10.144388   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:10.144424   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:10.144431   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:10.281273   15521 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0923 23:39:10.281305   15521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0923 23:39:10.378453   15521 pod_ready.go:103] pod "coredns-7c65d6cfc9-fmtkt" in "kube-system" namespace has status "Ready":"False"
	I0923 23:39:10.646247   15521 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-823099" context rescaled to 1 replicas
	I0923 23:39:10.659756   15521 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0923 23:39:10.659783   15521 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0923 23:39:10.917202   15521 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0923 23:39:10.917226   15521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0923 23:39:11.091159   15521 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0923 23:39:11.091181   15521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0923 23:39:11.170283   15521 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 23:39:11.170310   15521 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0923 23:39:11.230097   15521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 23:39:12.257220   15521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.970955837s)
	I0923 23:39:12.257279   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:12.257296   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:12.257605   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:12.257667   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:12.257688   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:12.257702   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:12.257712   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:12.257950   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:12.257978   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:12.257992   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:12.474315   15521 pod_ready.go:103] pod "coredns-7c65d6cfc9-fmtkt" in "kube-system" namespace has status "Ready":"False"
	I0923 23:39:12.579345   15521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.28537252s)
	I0923 23:39:12.579401   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:12.579415   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:12.579416   15521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.275358792s)
	I0923 23:39:12.579452   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:12.579468   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:12.579812   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:12.579813   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:12.579872   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:12.579881   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:12.579827   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:12.579909   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:12.579927   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:12.579941   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:12.579841   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:12.579889   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:12.580178   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:12.580190   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:12.580247   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:12.580256   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:12.580271   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:12.695053   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:12.695077   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:12.695384   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:12.695434   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:12.695455   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:14.801414   15521 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0923 23:39:14.801451   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:39:14.804720   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:14.805099   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:39:14.805139   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:14.805316   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:39:14.805553   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:39:14.805707   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:39:14.805897   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	I0923 23:39:14.982300   15521 pod_ready.go:103] pod "coredns-7c65d6cfc9-fmtkt" in "kube-system" namespace has status "Ready":"False"
	I0923 23:39:15.080173   15521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.775510965s)
	I0923 23:39:15.080238   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:15.080251   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:15.080246   15521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.666114058s)
	I0923 23:39:15.080267   15521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.664395261s)
	I0923 23:39:15.080284   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:15.080302   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:15.080304   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:15.080351   15521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.655576289s)
	I0923 23:39:15.080364   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:15.080367   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:15.080450   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:15.080463   15521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.366181047s)
	I0923 23:39:15.080486   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:15.080496   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:15.080565   15521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.650891016s)
	I0923 23:39:15.080647   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:15.080661   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:15.082553   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:15.082564   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:15.082580   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:15.082585   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:15.082594   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:15.082604   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:15.082584   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:15.082626   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:15.082636   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:15.082647   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:15.082655   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:15.082668   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:15.082674   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:15.082611   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:15.082691   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:15.082687   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:15.082680   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:15.082659   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:15.082718   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:15.082722   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:15.082726   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:15.082731   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:15.082708   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:15.082741   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:15.082749   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:15.082757   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:15.082764   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:15.082766   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:15.082735   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:15.082783   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:15.083277   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:15.083295   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:15.083312   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:15.083337   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:15.083354   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:15.083406   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:15.083372   15521 addons.go:475] Verifying addon ingress=true in "addons-823099"
	I0923 23:39:15.083518   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:15.083528   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:15.083746   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:15.083771   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:15.083777   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:15.084317   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:15.084354   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:15.084376   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:15.084382   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:15.084390   15521 addons.go:475] Verifying addon metrics-server=true in "addons-823099"
	I0923 23:39:15.084467   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:15.084473   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:15.084479   15521 addons.go:475] Verifying addon registry=true in "addons-823099"
	I0923 23:39:15.085995   15521 out.go:177] * Verifying registry addon...
	I0923 23:39:15.086007   15521 out.go:177] * Verifying ingress addon...
	I0923 23:39:15.085999   15521 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-823099 service yakd-dashboard -n yakd-dashboard
	
	I0923 23:39:15.088530   15521 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0923 23:39:15.088530   15521 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
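The two kapi.go:75 waits above poll the cluster until pods matching the given label selectors report Ready. A roughly equivalent manual check against this cluster would be the following kubectl commands (a sketch for illustration only; these were not run by the test):

	# hypothetical equivalent of the kapi.go label-selector waits above
	kubectl --context addons-823099 -n kube-system wait pod -l kubernetes.io/minikube-addons=registry --for=condition=Ready --timeout=6m
	kubectl --context addons-823099 -n ingress-nginx wait pod -l app.kubernetes.io/name=ingress-nginx --for=condition=Ready --timeout=6m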
	I0923 23:39:15.123892   15521 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0923 23:39:15.148951   15521 addons.go:234] Setting addon gcp-auth=true in "addons-823099"
	I0923 23:39:15.149022   15521 host.go:66] Checking if "addons-823099" exists ...
	I0923 23:39:15.149444   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:15.149498   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:15.156748   15521 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0923 23:39:15.156776   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:15.156871   15521 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0923 23:39:15.156894   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:15.165454   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38557
	I0923 23:39:15.166065   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:15.166623   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:15.166651   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:15.167013   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:15.167737   15521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:39:15.167785   15521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:39:15.183598   15521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41479
	I0923 23:39:15.184008   15521 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:39:15.184531   15521 main.go:141] libmachine: Using API Version  1
	I0923 23:39:15.184550   15521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:39:15.184913   15521 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:39:15.185133   15521 main.go:141] libmachine: (addons-823099) Calling .GetState
	I0923 23:39:15.186845   15521 main.go:141] libmachine: (addons-823099) Calling .DriverName
	I0923 23:39:15.187076   15521 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0923 23:39:15.187097   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHHostname
	I0923 23:39:15.190490   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:15.190909   15521 main.go:141] libmachine: (addons-823099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:77", ip: ""} in network mk-addons-823099: {Iface:virbr1 ExpiryTime:2024-09-24 00:38:37 +0000 UTC Type:0 Mac:52:54:00:15:a7:77 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:addons-823099 Clientid:01:52:54:00:15:a7:77}
	I0923 23:39:15.190948   15521 main.go:141] libmachine: (addons-823099) DBG | domain addons-823099 has defined IP address 192.168.39.29 and MAC address 52:54:00:15:a7:77 in network mk-addons-823099
	I0923 23:39:15.191144   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHPort
	I0923 23:39:15.191345   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHKeyPath
	I0923 23:39:15.191625   15521 main.go:141] libmachine: (addons-823099) Calling .GetSSHUsername
	I0923 23:39:15.191841   15521 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/addons-823099/id_rsa Username:docker}
	I0923 23:39:15.290771   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:15.290792   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:15.291156   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:15.291204   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:15.291213   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:15.608008   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:15.608181   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:15.663866   15521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.098204567s)
	W0923 23:39:15.663915   15521 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 23:39:15.663942   15521 retry.go:31] will retry after 155.263016ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 23:39:15.663943   15521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.822915237s)
	I0923 23:39:15.663986   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:15.663996   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:15.664271   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:15.664295   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:15.664306   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:15.664280   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:15.664315   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:15.664608   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:15.664630   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:15.820233   15521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
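The first pass failed because the VolumeSnapshotClass object is applied in the same batch as the CRDs that define it, and the CRD is not yet established when the object is validated ("ensure CRDs are installed first"); minikube handles this by retrying, here with --force. One way to sidestep the race, shown only as a sketch and not what the addon manager actually does, is to wait for the CRD to become Established before applying the class:

	# sketch only: apply the CRD, wait for establishment, then apply the CR
	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=Established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml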
	I0923 23:39:16.092842   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:16.094282   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:16.598768   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:16.599105   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:17.384250   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:17.386825   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:17.406922   15521 pod_ready.go:103] pod "coredns-7c65d6cfc9-fmtkt" in "kube-system" namespace has status "Ready":"False"
	I0923 23:39:17.409629   15521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.179468555s)
	I0923 23:39:17.409649   15521 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.222551947s)
	I0923 23:39:17.409675   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:17.409696   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:17.410005   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:17.410058   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:17.410074   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:17.410089   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:17.410101   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:17.410329   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:17.410346   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:17.410355   15521 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-823099"
	I0923 23:39:17.410358   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:17.411136   15521 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 23:39:17.412024   15521 out.go:177] * Verifying csi-hostpath-driver addon...
	I0923 23:39:17.413560   15521 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0923 23:39:17.414261   15521 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0923 23:39:17.414746   15521 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0923 23:39:17.414766   15521 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0923 23:39:17.482533   15521 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0923 23:39:17.482556   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:17.512131   15521 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0923 23:39:17.512159   15521 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0923 23:39:17.604150   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:17.604278   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:17.608747   15521 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 23:39:17.608767   15521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0923 23:39:17.684509   15521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 23:39:17.918552   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:18.093404   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:18.096529   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:18.238589   15521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.418299415s)
	I0923 23:39:18.238642   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:18.238659   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:18.238975   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:18.238997   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:18.239004   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:18.239015   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:18.239024   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:18.239271   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:18.239324   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:18.239340   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:18.418978   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:18.601947   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:18.602098   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:18.821107   15521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.136556988s)
	I0923 23:39:18.821156   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:18.821172   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:18.821448   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:18.821469   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:18.821483   15521 main.go:141] libmachine: Making call to close driver server
	I0923 23:39:18.821490   15521 main.go:141] libmachine: (addons-823099) Calling .Close
	I0923 23:39:18.821766   15521 main.go:141] libmachine: Successfully made call to close driver server
	I0923 23:39:18.821781   15521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 23:39:18.821801   15521 main.go:141] libmachine: (addons-823099) DBG | Closing plugin on server side
	I0923 23:39:18.823766   15521 addons.go:475] Verifying addon gcp-auth=true in "addons-823099"
	I0923 23:39:18.825653   15521 out.go:177] * Verifying gcp-auth addon...
	I0923 23:39:18.828295   15521 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0923 23:39:18.850143   15521 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0923 23:39:18.850163   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:18.920541   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:19.100926   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:19.107040   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:19.336759   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:19.421467   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:19.593866   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:19.594253   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:19.832242   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:19.878336   15521 pod_ready.go:98] pod "coredns-7c65d6cfc9-fmtkt" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 23:39:19 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 23:39:08 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 23:39:08 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 23:39:08 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 23:39:07 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.29 HostIPs:[{IP:192.168.39.29}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-23 23:39:08 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-23 23:39:12 +0000 UTC,FinishedAt:2024-09-23 23:39:18 +0000 UTC,ContainerID:cri-o://45a5b46a879fb0262594f44df0a2aaaf67ad594be72dad54881d4d2452524327,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://45a5b46a879fb0262594f44df0a2aaaf67ad594be72dad54881d4d2452524327 Started:0xc00232d080 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001f318d0} {Name:kube-api-access-ph5fc MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001f318e0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0923 23:39:19.878379   15521 pod_ready.go:82] duration metric: took 11.507967304s for pod "coredns-7c65d6cfc9-fmtkt" in "kube-system" namespace to be "Ready" ...
	E0923 23:39:19.878394   15521 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-fmtkt" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 23:39:19 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 23:39:08 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 23:39:08 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 23:39:08 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 23:39:07 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.29 HostIPs:[{IP:192.168.39.29}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-23 23:39:08 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-23 23:39:12 +0000 UTC,FinishedAt:2024-09-23 23:39:18 +0000 UTC,ContainerID:cri-o://45a5b46a879fb0262594f44df0a2aaaf67ad594be72dad54881d4d2452524327,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://45a5b46a879fb0262594f44df0a2aaaf67ad594be72dad54881d4d2452524327 Started:0xc00232d080 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001f318d0} {Name:kube-api-access-ph5fc MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001f318e0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0923 23:39:19.878408   15521 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-h4m6q" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:19.884151   15521 pod_ready.go:93] pod "coredns-7c65d6cfc9-h4m6q" in "kube-system" namespace has status "Ready":"True"
	I0923 23:39:19.884174   15521 pod_ready.go:82] duration metric: took 5.758861ms for pod "coredns-7c65d6cfc9-h4m6q" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:19.884183   15521 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-823099" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:19.891508   15521 pod_ready.go:93] pod "etcd-addons-823099" in "kube-system" namespace has status "Ready":"True"
	I0923 23:39:19.891551   15521 pod_ready.go:82] duration metric: took 7.346453ms for pod "etcd-addons-823099" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:19.891564   15521 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-823099" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:19.896566   15521 pod_ready.go:93] pod "kube-apiserver-addons-823099" in "kube-system" namespace has status "Ready":"True"
	I0923 23:39:19.896593   15521 pod_ready.go:82] duration metric: took 5.020816ms for pod "kube-apiserver-addons-823099" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:19.896609   15521 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-823099" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:19.912376   15521 pod_ready.go:93] pod "kube-controller-manager-addons-823099" in "kube-system" namespace has status "Ready":"True"
	I0923 23:39:19.912404   15521 pod_ready.go:82] duration metric: took 15.786797ms for pod "kube-controller-manager-addons-823099" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:19.912416   15521 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pgclm" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:19.923485   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:20.095418   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:20.098684   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:20.275218   15521 pod_ready.go:93] pod "kube-proxy-pgclm" in "kube-system" namespace has status "Ready":"True"
	I0923 23:39:20.275250   15521 pod_ready.go:82] duration metric: took 362.825273ms for pod "kube-proxy-pgclm" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:20.275263   15521 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-823099" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:20.332146   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:20.419880   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:20.593710   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:20.593992   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:20.675652   15521 pod_ready.go:93] pod "kube-scheduler-addons-823099" in "kube-system" namespace has status "Ready":"True"
	I0923 23:39:20.675690   15521 pod_ready.go:82] duration metric: took 400.417501ms for pod "kube-scheduler-addons-823099" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:20.675704   15521 pod_ready.go:39] duration metric: took 12.314050106s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 23:39:20.675723   15521 api_server.go:52] waiting for apiserver process to appear ...
	I0923 23:39:20.675791   15521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 23:39:20.719710   15521 api_server.go:72] duration metric: took 13.035944288s to wait for apiserver process to appear ...
	I0923 23:39:20.719738   15521 api_server.go:88] waiting for apiserver healthz status ...
	I0923 23:39:20.719761   15521 api_server.go:253] Checking apiserver healthz at https://192.168.39.29:8443/healthz ...
	I0923 23:39:20.724996   15521 api_server.go:279] https://192.168.39.29:8443/healthz returned 200:
	ok
	I0923 23:39:20.726609   15521 api_server.go:141] control plane version: v1.31.1
	I0923 23:39:20.726632   15521 api_server.go:131] duration metric: took 6.887893ms to wait for apiserver health ...
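The healthz check above is a plain HTTPS GET against the apiserver. Assuming default RBAC (the system:public-info-viewer binding exposes /healthz, /livez and /readyz to anonymous clients), the same probe can be reproduced by hand; this is a sketch, not part of the test run:

	# hypothetical manual probe matching the 200/ok response logged above
	curl -k https://192.168.39.29:8443/healthz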
	I0923 23:39:20.726640   15521 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 23:39:20.832687   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:20.879847   15521 system_pods.go:59] 17 kube-system pods found
	I0923 23:39:20.879881   15521 system_pods.go:61] "coredns-7c65d6cfc9-h4m6q" [e5a66fda-ace2-434e-82fb-3d9d66fac49f] Running
	I0923 23:39:20.879892   15521 system_pods.go:61] "csi-hostpath-attacher-0" [ad0efe3a-8c72-46db-9ed8-35a46fba41f1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 23:39:20.879897   15521 system_pods.go:61] "csi-hostpath-resizer-0" [e357dfe7-127b-4f18-90e3-beb7846c05cd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0923 23:39:20.879906   15521 system_pods.go:61] "csi-hostpathplugin-l4gsf" [de45bd42-06e1-4387-ba3f-4d6a477b4823] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 23:39:20.879911   15521 system_pods.go:61] "etcd-addons-823099" [c9add526-f518-4303-b016-3f95bd8c222a] Running
	I0923 23:39:20.879914   15521 system_pods.go:61] "kube-apiserver-addons-823099" [8788c6f4-114f-4c6c-928b-8ca58300c969] Running
	I0923 23:39:20.879918   15521 system_pods.go:61] "kube-controller-manager-addons-823099" [726e0154-67e9-4c92-9bac-b577104b0d12] Running
	I0923 23:39:20.879923   15521 system_pods.go:61] "kube-ingress-dns-minikube" [1194cadb-80b1-4fad-b99a-0afbc0be0b40] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0923 23:39:20.879926   15521 system_pods.go:61] "kube-proxy-pgclm" [3d47a25a-ab05-4197-975a-88bb7e1f9834] Running
	I0923 23:39:20.879929   15521 system_pods.go:61] "kube-scheduler-addons-823099" [193d28ff-87b2-4578-903c-e74dcea5c006] Running
	I0923 23:39:20.879939   15521 system_pods.go:61] "metrics-server-84c5f94fbc-gpzsm" [d5937c63-7f30-477a-a36e-e7e6cb8c64e5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 23:39:20.879951   15521 system_pods.go:61] "nvidia-device-plugin-daemonset-2dqft" [c5e363a8-697b-4396-acf2-c41232b01445] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0923 23:39:20.879957   15521 system_pods.go:61] "registry-66c9cd494c-h5ntb" [67fc5fdd-03ae-44c9-8e43-0042bd142349] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0923 23:39:20.879964   15521 system_pods.go:61] "registry-proxy-dc579" [76bec57d-6868-4098-a291-8c38dda98afc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 23:39:20.879969   15521 system_pods.go:61] "snapshot-controller-56fcc65765-2lpn2" [6ea26c65-7a9a-4d74-af4b-8f23ecc36bab] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 23:39:20.879974   15521 system_pods.go:61] "snapshot-controller-56fcc65765-9mcdf" [bc592ae3-b020-465c-b0e9-c739e2321360] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 23:39:20.879980   15521 system_pods.go:61] "storage-provisioner" [25d0944a-e6b3-429b-bb81-22672fb100bd] Running
	I0923 23:39:20.879986   15521 system_pods.go:74] duration metric: took 153.340922ms to wait for pod list to return data ...
	I0923 23:39:20.879996   15521 default_sa.go:34] waiting for default service account to be created ...
	I0923 23:39:20.918654   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:21.075277   15521 default_sa.go:45] found service account: "default"
	I0923 23:39:21.075308   15521 default_sa.go:55] duration metric: took 195.307316ms for default service account to be created ...
	I0923 23:39:21.075318   15521 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 23:39:21.093994   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:21.094405   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:21.281184   15521 system_pods.go:86] 17 kube-system pods found
	I0923 23:39:21.281221   15521 system_pods.go:89] "coredns-7c65d6cfc9-h4m6q" [e5a66fda-ace2-434e-82fb-3d9d66fac49f] Running
	I0923 23:39:21.281233   15521 system_pods.go:89] "csi-hostpath-attacher-0" [ad0efe3a-8c72-46db-9ed8-35a46fba41f1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 23:39:21.281242   15521 system_pods.go:89] "csi-hostpath-resizer-0" [e357dfe7-127b-4f18-90e3-beb7846c05cd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0923 23:39:21.281258   15521 system_pods.go:89] "csi-hostpathplugin-l4gsf" [de45bd42-06e1-4387-ba3f-4d6a477b4823] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 23:39:21.281268   15521 system_pods.go:89] "etcd-addons-823099" [c9add526-f518-4303-b016-3f95bd8c222a] Running
	I0923 23:39:21.281274   15521 system_pods.go:89] "kube-apiserver-addons-823099" [8788c6f4-114f-4c6c-928b-8ca58300c969] Running
	I0923 23:39:21.281279   15521 system_pods.go:89] "kube-controller-manager-addons-823099" [726e0154-67e9-4c92-9bac-b577104b0d12] Running
	I0923 23:39:21.281288   15521 system_pods.go:89] "kube-ingress-dns-minikube" [1194cadb-80b1-4fad-b99a-0afbc0be0b40] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0923 23:39:21.281293   15521 system_pods.go:89] "kube-proxy-pgclm" [3d47a25a-ab05-4197-975a-88bb7e1f9834] Running
	I0923 23:39:21.281299   15521 system_pods.go:89] "kube-scheduler-addons-823099" [193d28ff-87b2-4578-903c-e74dcea5c006] Running
	I0923 23:39:21.281306   15521 system_pods.go:89] "metrics-server-84c5f94fbc-gpzsm" [d5937c63-7f30-477a-a36e-e7e6cb8c64e5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 23:39:21.281316   15521 system_pods.go:89] "nvidia-device-plugin-daemonset-2dqft" [c5e363a8-697b-4396-acf2-c41232b01445] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0923 23:39:21.281333   15521 system_pods.go:89] "registry-66c9cd494c-h5ntb" [67fc5fdd-03ae-44c9-8e43-0042bd142349] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0923 23:39:21.281341   15521 system_pods.go:89] "registry-proxy-dc579" [76bec57d-6868-4098-a291-8c38dda98afc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 23:39:21.281349   15521 system_pods.go:89] "snapshot-controller-56fcc65765-2lpn2" [6ea26c65-7a9a-4d74-af4b-8f23ecc36bab] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 23:39:21.281358   15521 system_pods.go:89] "snapshot-controller-56fcc65765-9mcdf" [bc592ae3-b020-465c-b0e9-c739e2321360] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 23:39:21.281363   15521 system_pods.go:89] "storage-provisioner" [25d0944a-e6b3-429b-bb81-22672fb100bd] Running
	I0923 23:39:21.281373   15521 system_pods.go:126] duration metric: took 206.049564ms to wait for k8s-apps to be running ...
	I0923 23:39:21.281382   15521 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 23:39:21.281439   15521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 23:39:21.331801   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:21.336577   15521 system_svc.go:56] duration metric: took 55.186723ms WaitForService to wait for kubelet
	I0923 23:39:21.336605   15521 kubeadm.go:582] duration metric: took 13.652846646s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 23:39:21.336621   15521 node_conditions.go:102] verifying NodePressure condition ...
	I0923 23:39:21.419377   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:21.475488   15521 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 23:39:21.475526   15521 node_conditions.go:123] node cpu capacity is 2
	I0923 23:39:21.475539   15521 node_conditions.go:105] duration metric: took 138.911431ms to run NodePressure ...
	I0923 23:39:21.475552   15521 start.go:241] waiting for startup goroutines ...
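The NodePressure step reads the node's reported capacity (the ephemeral storage and CPU figures above) from its status. A quick manual way to see the same fields, assuming the single node carries the profile name addons-823099, would be:

	# sketch: inspect the node capacity that the NodePressure check reads
	kubectl --context addons-823099 get node addons-823099 -o jsonpath='{.status.capacity}'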
	I0923 23:39:21.596433   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:21.596900   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:21.832085   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:21.919995   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:22.094469   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:22.094632   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:22.332058   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:22.418713   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:22.593037   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:22.593680   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:22.906061   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:23.007978   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:23.094529   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:23.097114   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:23.332565   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:23.419583   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:23.593672   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:23.593683   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:23.838655   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:23.940369   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:24.094234   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:24.094445   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:24.332440   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:24.419984   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:24.594437   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:24.594618   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:24.832486   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:24.919747   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:25.093182   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:25.093674   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:25.333709   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:25.418934   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:25.593328   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:25.593509   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:25.833795   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:25.919508   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:26.095779   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:26.096176   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:26.332478   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:26.420244   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:26.592803   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:26.592852   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:26.832139   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:26.919522   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:27.093698   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:27.094342   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:27.332730   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:27.419502   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:27.593345   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:27.593632   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:27.831834   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:27.921584   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:28.096645   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:28.097094   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:28.332417   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:28.420270   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:28.593381   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:28.594222   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:28.832460   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:28.920981   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:29.094116   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:29.095338   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:29.332575   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:29.418135   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:29.592957   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:29.593378   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:29.832141   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:29.919193   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:30.094376   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:30.094610   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:30.331854   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:30.418982   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:30.631569   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:30.632124   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:30.831219   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:30.920259   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:31.093449   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:31.093941   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:31.331877   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:31.420541   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:31.593048   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:31.593342   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:31.832378   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:31.920762   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:32.098506   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:32.099810   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:32.332194   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:32.420510   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:32.593182   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:32.594918   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:32.832529   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:32.918771   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:33.093326   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:33.094439   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:33.333534   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:33.419199   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:33.592859   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:33.593822   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:33.832270   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:33.919972   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:34.093090   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:34.093582   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:34.332317   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:34.419955   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:34.593634   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:34.593974   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:34.831974   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:34.919981   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:35.095441   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:35.095574   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:35.332597   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:35.419105   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:35.597103   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:35.598610   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:35.832611   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:35.918515   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:36.096274   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:36.096962   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:36.332610   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:36.418275   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:36.593642   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:36.593746   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:36.831957   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:36.918919   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:37.092996   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:37.094759   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:37.332016   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:37.419671   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:37.593331   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:37.595578   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:37.834102   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:37.920878   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:38.094370   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:38.095095   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:38.331397   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:38.419908   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:38.593717   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:38.594107   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:38.832074   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:38.919327   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:39.100170   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:39.105269   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:39.332638   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:39.420123   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:39.593249   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:39.593947   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:39.832313   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:39.934720   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:40.101376   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:40.101425   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:40.333365   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:40.420009   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:40.594942   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:40.595025   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:40.833104   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:40.934806   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:41.096251   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:41.096260   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:41.332277   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:41.419410   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:41.592946   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:41.593974   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:41.832170   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:41.919227   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:42.097743   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:42.098213   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:42.332232   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:42.419177   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:42.593758   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:42.593875   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:42.832085   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:42.919621   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:43.094464   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:43.095025   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:43.333021   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:43.419417   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:43.593281   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:43.594091   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:43.833444   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:43.920229   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:44.094691   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:44.096056   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:44.333071   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:44.418650   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:44.593421   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:44.594195   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:44.831531   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:44.920239   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:45.093437   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:45.095439   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:45.332168   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:45.419471   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:45.593901   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:45.594317   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:45.831984   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:45.919515   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:46.094625   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:46.094773   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:46.331386   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:46.419464   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:46.592656   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:46.592778   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:47.151142   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:47.153387   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:47.154491   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:47.154846   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:47.332656   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:47.418895   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:47.592742   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:47.593598   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:47.832577   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:47.918632   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:48.094668   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:48.094918   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:48.332151   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:48.419591   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:48.592271   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:48.593354   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:48.832266   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:48.918810   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:49.094750   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:49.094891   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:49.331944   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:49.419208   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:49.592843   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:49.593229   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:49.832432   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:49.920038   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:50.102686   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:50.104285   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:50.332178   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:50.420344   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:50.593984   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:50.594056   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:50.831923   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:50.918641   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:51.095025   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:51.096939   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:51.332546   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:51.419516   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:51.592980   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:51.594380   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:51.832001   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:51.921419   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:52.101749   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:52.102309   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:52.332228   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:52.419595   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:52.593016   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:52.593128   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:52.832003   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:52.919630   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:53.094969   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:53.095135   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:53.331766   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:53.418814   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:53.593958   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:53.594088   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:53.832408   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:53.919175   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:54.098190   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:54.098600   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:54.332298   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:54.420609   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:54.592767   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:54.593349   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:54.832382   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:54.920230   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:55.094591   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:55.094839   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:55.332431   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:55.433787   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:55.593168   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:55.593371   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:55.832283   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:55.919461   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:56.093372   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:56.093870   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:56.331722   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:56.418785   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:56.594030   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:56.594601   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:56.833680   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:56.918880   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:57.096144   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:57.096359   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:57.332149   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:57.418862   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:57.593466   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:57.593899   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:57.832901   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:57.919069   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:58.097832   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:58.098492   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:58.331809   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:58.419172   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:58.594374   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:58.594557   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:58.832190   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:58.919483   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:59.095468   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:59.095749   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:59.332135   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:59.419091   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:59.593927   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:59.594515   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:59.831815   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:59.919106   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:00.512087   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:40:00.512527   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:00.512554   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:00.513598   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:00.593901   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:00.595207   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:40:00.834143   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:00.941222   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:01.095958   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:40:01.097955   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:01.332030   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:01.420181   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:01.593185   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:40:01.593891   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:01.832201   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:01.919404   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:02.094442   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:40:02.094695   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:02.332203   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:02.419407   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:02.592715   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:40:02.592806   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:02.831864   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:02.919302   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:03.093356   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:40:03.095261   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:03.331951   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:03.419462   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:03.593257   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:40:03.594217   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:04.004211   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:04.007581   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:04.094485   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:04.096445   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:40:04.332624   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:04.418492   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:04.601985   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:04.615874   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:40:04.833660   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:04.918788   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:05.092856   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:40:05.092889   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:05.331911   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:05.419042   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:05.592983   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:40:05.593592   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:05.832164   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:05.930850   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:06.095313   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:06.095850   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:40:06.332770   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:06.419623   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:06.595241   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:40:06.598108   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:06.831586   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:06.923862   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:07.094981   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:40:07.095013   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:07.332001   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:07.419422   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:07.592356   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:40:07.592854   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:07.832579   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:07.921160   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:08.093155   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:40:08.093461   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:08.332206   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:08.420123   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:08.594084   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:40:08.594501   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:08.832833   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:08.918969   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:09.095290   15521 kapi.go:107] duration metric: took 54.006756194s to wait for kubernetes.io/minikube-addons=registry ...
	I0923 23:40:09.096731   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:09.331593   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:09.419268   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:09.593290   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:09.832184   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:09.919379   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:10.206829   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:10.332592   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:10.418826   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:10.597305   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:10.833495   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:10.936556   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:11.093468   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:11.331762   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:11.419043   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:11.593818   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:11.831965   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:11.919356   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:12.095949   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:12.332439   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:12.419717   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:12.593847   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:12.833772   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:12.936727   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:13.095359   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:13.332979   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:13.434589   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:13.593982   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:13.833463   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:13.921413   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:14.107863   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:14.331881   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:14.418472   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:14.592625   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:14.832074   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:14.919102   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:15.151319   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:15.331731   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:15.418730   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:15.592769   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:15.832559   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:15.919783   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:16.094071   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:16.332982   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:16.420635   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:16.596117   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:16.832581   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:16.918622   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:17.094831   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:17.331470   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:17.419656   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:17.594098   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:17.832476   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:17.918799   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:18.289234   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:18.332999   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:18.419337   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:18.593958   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:18.831972   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:18.918707   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:19.093792   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:19.332292   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:19.420611   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:19.593588   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:19.831910   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:19.918861   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:20.093950   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:20.332717   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:20.436822   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:20.595463   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:20.832311   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:20.935013   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:21.096203   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:21.331541   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:21.422657   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:21.598324   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:21.831455   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:21.919629   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:22.096231   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:22.331596   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:22.418599   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:22.609832   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:22.833773   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:22.935924   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:23.096601   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:23.340106   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:23.427732   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:23.594048   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:23.832622   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:23.919229   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:24.093122   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:24.331790   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:24.418786   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:24.593043   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:24.833183   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:24.918861   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:25.094139   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:25.334542   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:25.576086   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:25.593252   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:25.832880   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:25.918530   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:26.092931   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:26.332596   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:26.419989   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:26.594948   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:26.932785   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:26.935292   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:27.093377   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:27.332423   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:27.421072   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:27.593187   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:27.832254   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:27.919838   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:28.093230   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:28.392143   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:28.687547   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:28.689317   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:28.832925   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:28.918921   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:29.100236   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:29.332915   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:29.420261   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:29.600887   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:29.833156   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:29.920177   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:30.093272   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:30.331488   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:30.418456   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:30.592224   15521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:30.832145   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:30.943704   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:31.134913   15521 kapi.go:107] duration metric: took 1m16.046381203s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0923 23:40:31.332777   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:31.418878   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:31.831745   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:31.933578   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:32.332878   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:32.418865   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:32.831981   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:32.919636   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:33.331958   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:33.433535   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:33.834818   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:34.031559   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:34.332506   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:34.419243   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:34.832458   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:34.919551   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:35.332538   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:35.419333   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:35.831854   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:35.919140   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:36.332139   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:36.419385   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:36.831428   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:36.933407   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:37.332127   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:37.419248   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:37.834890   15521 kapi.go:107] duration metric: took 1m19.006594431s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0923 23:40:37.837227   15521 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-823099 cluster.
	I0923 23:40:37.838804   15521 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0923 23:40:37.840390   15521 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0923 23:40:37.936294   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:38.419888   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:38.918688   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:39.419929   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:39.918705   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:40.419944   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:40.919268   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:41.418798   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:41.920203   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:42.418923   15521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:42.920850   15521 kapi.go:107] duration metric: took 1m25.506584753s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0923 23:40:42.922731   15521 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner, ingress-dns, storage-provisioner-rancher, cloud-spanner, metrics-server, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0923 23:40:42.924695   15521 addons.go:510] duration metric: took 1m35.240916092s for enable addons: enabled=[nvidia-device-plugin storage-provisioner ingress-dns storage-provisioner-rancher cloud-spanner metrics-server yakd default-storageclass inspektor-gadget volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0923 23:40:42.924745   15521 start.go:246] waiting for cluster config update ...
	I0923 23:40:42.924763   15521 start.go:255] writing updated cluster config ...
	I0923 23:40:42.925016   15521 ssh_runner.go:195] Run: rm -f paused
	I0923 23:40:42.977325   15521 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0923 23:40:42.979331   15521 out.go:177] * Done! kubectl is now configured to use "addons-823099" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 23 23:54:34 addons-823099 crio[662]: time="2024-09-23 23:54:34.173496837Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727135674173471665,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=236361cc-8d3c-455b-b4ad-5995a9120954 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 23:54:34 addons-823099 crio[662]: time="2024-09-23 23:54:34.174128382Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cf1fa02e-1366-46dc-ab93-e86cf9e5d595 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 23:54:34 addons-823099 crio[662]: time="2024-09-23 23:54:34.174194412Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cf1fa02e-1366-46dc-ab93-e86cf9e5d595 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 23:54:34 addons-823099 crio[662]: time="2024-09-23 23:54:34.174427683Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b4df081e2b365aec49d7e1931e92668e6875c967edba7adbd10a2137cb5bc085,PodSandboxId:cba63186abb30b45d3845c0acb4d0f223862ab132664b6cd5e08a285c8e52407,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1727135546997701995,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-cpzkz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ae9e1b8b-5470-4765-a8d1-7e21fa0eb9b0,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a477f91b06dc39c8aac1f0ceaf25be2f3cdd1467c593f76989667dd176147158,PodSandboxId:000b770e9460eb1f6cbc53493e042e7be24fb373949ac32f0a6e1497455d4304,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727135405458555145,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4679ef89-e297-4f54-bf30-b685a88ec238,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74b1f1c0ea595ad9a254db104eeae56801bee662d3a36f586d4eadc290bd61ab,PodSandboxId:ef699a0a58d26bbb175080a9d5d1552d3ca4ad0ef72d3b7f2f3f042548a8de86,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727134836524114147,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-5p9gw,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: a4541728-f355-433e-92a7-e435eb2600c2,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6c45d33f3679488d83d327a8f47c1bfb699c4b85d227cedef6b502629f4c13,PodSandboxId:c24a2665a62ab69af77896e4f6cdfa80944931f16aa279c745fac778bf371209,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727134779803969605,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-gpzsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5937c63-7f30-477a-a36e-e7e6cb8c64e5,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9490eb926210d595e48349ae8ba44feb029a56e6c83d0e8f8cfad8e8c1d9196b,PodSandboxId:8d3fbd5782869ef1bd266d8984a9cbedcd8fab60b6229f2ab72750e7e22e081e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNN
ING,CreatedAt:1727134755012677707,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25d0944a-e6b3-429b-bb81-22672fb100bd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fec583ae4c3fad78cd32df65311f48a1cd55dcc8d1d6b99f649cd4ca93893de,PodSandboxId:743b6ef05346bc2b74363f050e3be9e406acedab4e81d88a5b62118373703ea4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727134751
054708895,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-h4m6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5a66fda-ace2-434e-82fb-3d9d66fac49f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a92c92c6afdd44516af6d2f0c2ba0c60c100397592f176560d683b0e5c58bbd,PodSandboxId:e2cf37b2ed9608a016a28531c7475e72b8a57c4abd9862b68e3c5c2777ad76ed,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf0
6a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727134749291914697,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pgclm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d47a25a-ab05-4197-975a-88bb7e1f9834,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:474072cb31ae52ea361c41a97e7a53faf47c3b8ab138749903f3d96750c6fbe2,PodSandboxId:4c45c732428c2d481624384e0b5a0d5cc14eeb3539e67aa0282e15d808a2d141,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[str
ing]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727134736753032764,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-823099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0522b4889e5d09bd02bded87708cffa,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f68819f7bf59d41865dee2cade7e270c9133c2249756217428544bee43d41ba6,PodSandboxId:9992b2a049a9e5db7c453409b74739e9d45cb2ddc1916561d617bc92ca4abc8d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727134736756693447,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-823099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd9c00eb951fdfb5b859f5c493b5daeb,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f9a68d35a007d5e1022596b9270e5f5f9735806aa3bfb8c01b9c7eca1ee01d7,PodSandboxId:6600118fb556ee2332595b87d6131714a2992ff33108a3d8ff1ede5fa6031a1a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},User
SpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727134736749151930,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-823099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ed7b82912b8c176021821ce705d70e9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61a194a33123eba4aa22b6f557d4ea66df750535623ed92cd3efa6db3df98960,PodSandboxId:858af16c1b9748a0a50df5d32921302b8034b3b19aa9b08a44e91402f5f24332,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727134736741313468,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-823099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 448360a30c028a8b320f55cec49cc907,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cf1fa02e-1366-46dc-ab93-e86cf9e5d595 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 23:54:34 addons-823099 crio[662]: time="2024-09-23 23:54:34.212301784Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4d52b946-3aee-423d-9a92-950a4156f172 name=/runtime.v1.RuntimeService/Version
	Sep 23 23:54:34 addons-823099 crio[662]: time="2024-09-23 23:54:34.212388362Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4d52b946-3aee-423d-9a92-950a4156f172 name=/runtime.v1.RuntimeService/Version
	Sep 23 23:54:34 addons-823099 crio[662]: time="2024-09-23 23:54:34.213688902Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=985d76e5-1413-45e1-abd8-92aebbcbcd62 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 23:54:34 addons-823099 crio[662]: time="2024-09-23 23:54:34.214978145Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727135674214947555,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=985d76e5-1413-45e1-abd8-92aebbcbcd62 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 23:54:34 addons-823099 crio[662]: time="2024-09-23 23:54:34.215537253Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=320bdc3f-d767-4470-a73e-07ffd56347ec name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 23:54:34 addons-823099 crio[662]: time="2024-09-23 23:54:34.215592102Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=320bdc3f-d767-4470-a73e-07ffd56347ec name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 23:54:34 addons-823099 crio[662]: time="2024-09-23 23:54:34.215880327Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b4df081e2b365aec49d7e1931e92668e6875c967edba7adbd10a2137cb5bc085,PodSandboxId:cba63186abb30b45d3845c0acb4d0f223862ab132664b6cd5e08a285c8e52407,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1727135546997701995,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-cpzkz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ae9e1b8b-5470-4765-a8d1-7e21fa0eb9b0,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a477f91b06dc39c8aac1f0ceaf25be2f3cdd1467c593f76989667dd176147158,PodSandboxId:000b770e9460eb1f6cbc53493e042e7be24fb373949ac32f0a6e1497455d4304,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727135405458555145,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4679ef89-e297-4f54-bf30-b685a88ec238,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74b1f1c0ea595ad9a254db104eeae56801bee662d3a36f586d4eadc290bd61ab,PodSandboxId:ef699a0a58d26bbb175080a9d5d1552d3ca4ad0ef72d3b7f2f3f042548a8de86,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727134836524114147,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-5p9gw,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: a4541728-f355-433e-92a7-e435eb2600c2,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6c45d33f3679488d83d327a8f47c1bfb699c4b85d227cedef6b502629f4c13,PodSandboxId:c24a2665a62ab69af77896e4f6cdfa80944931f16aa279c745fac778bf371209,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727134779803969605,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-gpzsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5937c63-7f30-477a-a36e-e7e6cb8c64e5,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9490eb926210d595e48349ae8ba44feb029a56e6c83d0e8f8cfad8e8c1d9196b,PodSandboxId:8d3fbd5782869ef1bd266d8984a9cbedcd8fab60b6229f2ab72750e7e22e081e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNN
ING,CreatedAt:1727134755012677707,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25d0944a-e6b3-429b-bb81-22672fb100bd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fec583ae4c3fad78cd32df65311f48a1cd55dcc8d1d6b99f649cd4ca93893de,PodSandboxId:743b6ef05346bc2b74363f050e3be9e406acedab4e81d88a5b62118373703ea4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727134751
054708895,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-h4m6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5a66fda-ace2-434e-82fb-3d9d66fac49f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a92c92c6afdd44516af6d2f0c2ba0c60c100397592f176560d683b0e5c58bbd,PodSandboxId:e2cf37b2ed9608a016a28531c7475e72b8a57c4abd9862b68e3c5c2777ad76ed,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf0
6a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727134749291914697,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pgclm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d47a25a-ab05-4197-975a-88bb7e1f9834,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:474072cb31ae52ea361c41a97e7a53faf47c3b8ab138749903f3d96750c6fbe2,PodSandboxId:4c45c732428c2d481624384e0b5a0d5cc14eeb3539e67aa0282e15d808a2d141,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[str
ing]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727134736753032764,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-823099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0522b4889e5d09bd02bded87708cffa,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f68819f7bf59d41865dee2cade7e270c9133c2249756217428544bee43d41ba6,PodSandboxId:9992b2a049a9e5db7c453409b74739e9d45cb2ddc1916561d617bc92ca4abc8d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727134736756693447,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-823099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd9c00eb951fdfb5b859f5c493b5daeb,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f9a68d35a007d5e1022596b9270e5f5f9735806aa3bfb8c01b9c7eca1ee01d7,PodSandboxId:6600118fb556ee2332595b87d6131714a2992ff33108a3d8ff1ede5fa6031a1a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},User
SpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727134736749151930,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-823099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ed7b82912b8c176021821ce705d70e9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61a194a33123eba4aa22b6f557d4ea66df750535623ed92cd3efa6db3df98960,PodSandboxId:858af16c1b9748a0a50df5d32921302b8034b3b19aa9b08a44e91402f5f24332,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727134736741313468,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-823099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 448360a30c028a8b320f55cec49cc907,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=320bdc3f-d767-4470-a73e-07ffd56347ec name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 23:54:34 addons-823099 crio[662]: time="2024-09-23 23:54:34.250526607Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e106545d-5b7d-4526-8b68-b6b40244dc7c name=/runtime.v1.RuntimeService/Version
	Sep 23 23:54:34 addons-823099 crio[662]: time="2024-09-23 23:54:34.250622568Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e106545d-5b7d-4526-8b68-b6b40244dc7c name=/runtime.v1.RuntimeService/Version
	Sep 23 23:54:34 addons-823099 crio[662]: time="2024-09-23 23:54:34.251581837Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2028afad-1e2d-43ca-8685-8ce904d932c8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 23:54:34 addons-823099 crio[662]: time="2024-09-23 23:54:34.252908945Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727135674252875530,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2028afad-1e2d-43ca-8685-8ce904d932c8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 23:54:34 addons-823099 crio[662]: time="2024-09-23 23:54:34.253469096Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ce8da404-f60b-431f-ba3b-cce85e5e883a name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 23:54:34 addons-823099 crio[662]: time="2024-09-23 23:54:34.253540507Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ce8da404-f60b-431f-ba3b-cce85e5e883a name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 23:54:34 addons-823099 crio[662]: time="2024-09-23 23:54:34.253814255Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b4df081e2b365aec49d7e1931e92668e6875c967edba7adbd10a2137cb5bc085,PodSandboxId:cba63186abb30b45d3845c0acb4d0f223862ab132664b6cd5e08a285c8e52407,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1727135546997701995,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-cpzkz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ae9e1b8b-5470-4765-a8d1-7e21fa0eb9b0,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a477f91b06dc39c8aac1f0ceaf25be2f3cdd1467c593f76989667dd176147158,PodSandboxId:000b770e9460eb1f6cbc53493e042e7be24fb373949ac32f0a6e1497455d4304,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727135405458555145,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4679ef89-e297-4f54-bf30-b685a88ec238,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74b1f1c0ea595ad9a254db104eeae56801bee662d3a36f586d4eadc290bd61ab,PodSandboxId:ef699a0a58d26bbb175080a9d5d1552d3ca4ad0ef72d3b7f2f3f042548a8de86,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727134836524114147,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-5p9gw,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: a4541728-f355-433e-92a7-e435eb2600c2,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6c45d33f3679488d83d327a8f47c1bfb699c4b85d227cedef6b502629f4c13,PodSandboxId:c24a2665a62ab69af77896e4f6cdfa80944931f16aa279c745fac778bf371209,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727134779803969605,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-gpzsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5937c63-7f30-477a-a36e-e7e6cb8c64e5,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9490eb926210d595e48349ae8ba44feb029a56e6c83d0e8f8cfad8e8c1d9196b,PodSandboxId:8d3fbd5782869ef1bd266d8984a9cbedcd8fab60b6229f2ab72750e7e22e081e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNN
ING,CreatedAt:1727134755012677707,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25d0944a-e6b3-429b-bb81-22672fb100bd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fec583ae4c3fad78cd32df65311f48a1cd55dcc8d1d6b99f649cd4ca93893de,PodSandboxId:743b6ef05346bc2b74363f050e3be9e406acedab4e81d88a5b62118373703ea4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727134751
054708895,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-h4m6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5a66fda-ace2-434e-82fb-3d9d66fac49f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a92c92c6afdd44516af6d2f0c2ba0c60c100397592f176560d683b0e5c58bbd,PodSandboxId:e2cf37b2ed9608a016a28531c7475e72b8a57c4abd9862b68e3c5c2777ad76ed,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf0
6a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727134749291914697,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pgclm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d47a25a-ab05-4197-975a-88bb7e1f9834,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:474072cb31ae52ea361c41a97e7a53faf47c3b8ab138749903f3d96750c6fbe2,PodSandboxId:4c45c732428c2d481624384e0b5a0d5cc14eeb3539e67aa0282e15d808a2d141,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[str
ing]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727134736753032764,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-823099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0522b4889e5d09bd02bded87708cffa,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f68819f7bf59d41865dee2cade7e270c9133c2249756217428544bee43d41ba6,PodSandboxId:9992b2a049a9e5db7c453409b74739e9d45cb2ddc1916561d617bc92ca4abc8d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727134736756693447,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-823099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd9c00eb951fdfb5b859f5c493b5daeb,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f9a68d35a007d5e1022596b9270e5f5f9735806aa3bfb8c01b9c7eca1ee01d7,PodSandboxId:6600118fb556ee2332595b87d6131714a2992ff33108a3d8ff1ede5fa6031a1a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},User
SpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727134736749151930,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-823099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ed7b82912b8c176021821ce705d70e9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61a194a33123eba4aa22b6f557d4ea66df750535623ed92cd3efa6db3df98960,PodSandboxId:858af16c1b9748a0a50df5d32921302b8034b3b19aa9b08a44e91402f5f24332,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727134736741313468,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-823099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 448360a30c028a8b320f55cec49cc907,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ce8da404-f60b-431f-ba3b-cce85e5e883a name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 23:54:34 addons-823099 crio[662]: time="2024-09-23 23:54:34.289552876Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=95b8e2f6-94c3-4954-acb5-11a0ad22087e name=/runtime.v1.RuntimeService/Version
	Sep 23 23:54:34 addons-823099 crio[662]: time="2024-09-23 23:54:34.289638400Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=95b8e2f6-94c3-4954-acb5-11a0ad22087e name=/runtime.v1.RuntimeService/Version
	Sep 23 23:54:34 addons-823099 crio[662]: time="2024-09-23 23:54:34.290692473Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eedde30e-a354-4cea-940c-f509fe950c52 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 23:54:34 addons-823099 crio[662]: time="2024-09-23 23:54:34.291915468Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727135674291889094,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eedde30e-a354-4cea-940c-f509fe950c52 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 23 23:54:34 addons-823099 crio[662]: time="2024-09-23 23:54:34.292691090Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=83ebcc13-467d-4fef-aeca-011e2eb738b0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 23:54:34 addons-823099 crio[662]: time="2024-09-23 23:54:34.292803672Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=83ebcc13-467d-4fef-aeca-011e2eb738b0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 23 23:54:34 addons-823099 crio[662]: time="2024-09-23 23:54:34.293301261Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b4df081e2b365aec49d7e1931e92668e6875c967edba7adbd10a2137cb5bc085,PodSandboxId:cba63186abb30b45d3845c0acb4d0f223862ab132664b6cd5e08a285c8e52407,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1727135546997701995,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-cpzkz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ae9e1b8b-5470-4765-a8d1-7e21fa0eb9b0,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a477f91b06dc39c8aac1f0ceaf25be2f3cdd1467c593f76989667dd176147158,PodSandboxId:000b770e9460eb1f6cbc53493e042e7be24fb373949ac32f0a6e1497455d4304,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727135405458555145,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4679ef89-e297-4f54-bf30-b685a88ec238,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74b1f1c0ea595ad9a254db104eeae56801bee662d3a36f586d4eadc290bd61ab,PodSandboxId:ef699a0a58d26bbb175080a9d5d1552d3ca4ad0ef72d3b7f2f3f042548a8de86,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1727134836524114147,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-5p9gw,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: a4541728-f355-433e-92a7-e435eb2600c2,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6c45d33f3679488d83d327a8f47c1bfb699c4b85d227cedef6b502629f4c13,PodSandboxId:c24a2665a62ab69af77896e4f6cdfa80944931f16aa279c745fac778bf371209,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727134779803969605,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-gpzsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5937c63-7f30-477a-a36e-e7e6cb8c64e5,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9490eb926210d595e48349ae8ba44feb029a56e6c83d0e8f8cfad8e8c1d9196b,PodSandboxId:8d3fbd5782869ef1bd266d8984a9cbedcd8fab60b6229f2ab72750e7e22e081e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNN
ING,CreatedAt:1727134755012677707,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25d0944a-e6b3-429b-bb81-22672fb100bd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fec583ae4c3fad78cd32df65311f48a1cd55dcc8d1d6b99f649cd4ca93893de,PodSandboxId:743b6ef05346bc2b74363f050e3be9e406acedab4e81d88a5b62118373703ea4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727134751
054708895,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-h4m6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5a66fda-ace2-434e-82fb-3d9d66fac49f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a92c92c6afdd44516af6d2f0c2ba0c60c100397592f176560d683b0e5c58bbd,PodSandboxId:e2cf37b2ed9608a016a28531c7475e72b8a57c4abd9862b68e3c5c2777ad76ed,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf0
6a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727134749291914697,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pgclm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d47a25a-ab05-4197-975a-88bb7e1f9834,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:474072cb31ae52ea361c41a97e7a53faf47c3b8ab138749903f3d96750c6fbe2,PodSandboxId:4c45c732428c2d481624384e0b5a0d5cc14eeb3539e67aa0282e15d808a2d141,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[str
ing]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727134736753032764,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-823099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0522b4889e5d09bd02bded87708cffa,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f68819f7bf59d41865dee2cade7e270c9133c2249756217428544bee43d41ba6,PodSandboxId:9992b2a049a9e5db7c453409b74739e9d45cb2ddc1916561d617bc92ca4abc8d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727134736756693447,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-823099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd9c00eb951fdfb5b859f5c493b5daeb,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f9a68d35a007d5e1022596b9270e5f5f9735806aa3bfb8c01b9c7eca1ee01d7,PodSandboxId:6600118fb556ee2332595b87d6131714a2992ff33108a3d8ff1ede5fa6031a1a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},User
SpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727134736749151930,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-823099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ed7b82912b8c176021821ce705d70e9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61a194a33123eba4aa22b6f557d4ea66df750535623ed92cd3efa6db3df98960,PodSandboxId:858af16c1b9748a0a50df5d32921302b8034b3b19aa9b08a44e91402f5f24332,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727134736741313468,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-823099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 448360a30c028a8b320f55cec49cc907,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=83ebcc13-467d-4fef-aeca-011e2eb738b0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b4df081e2b365       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   cba63186abb30       hello-world-app-55bf9c44b4-cpzkz
	a477f91b06dc3       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                         4 minutes ago       Running             nginx                     0                   000b770e9460e       nginx
	74b1f1c0ea595       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            13 minutes ago      Running             gcp-auth                  0                   ef699a0a58d26       gcp-auth-89d5ffd79-5p9gw
	ad6c45d33f367       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   14 minutes ago      Running             metrics-server            0                   c24a2665a62ab       metrics-server-84c5f94fbc-gpzsm
	9490eb926210d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        15 minutes ago      Running             storage-provisioner       0                   8d3fbd5782869       storage-provisioner
	4fec583ae4c3f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        15 minutes ago      Running             coredns                   0                   743b6ef05346b       coredns-7c65d6cfc9-h4m6q
	8a92c92c6afdd       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                        15 minutes ago      Running             kube-proxy                0                   e2cf37b2ed960       kube-proxy-pgclm
	f68819f7bf59d       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                        15 minutes ago      Running             kube-controller-manager   0                   9992b2a049a9e       kube-controller-manager-addons-823099
	474072cb31ae5       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                        15 minutes ago      Running             kube-apiserver            0                   4c45c732428c2       kube-apiserver-addons-823099
	9f9a68d35a007       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        15 minutes ago      Running             etcd                      0                   6600118fb556e       etcd-addons-823099
	61a194a33123e       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                        15 minutes ago      Running             kube-scheduler            0                   858af16c1b974       kube-scheduler-addons-823099
	
	
	==> coredns [4fec583ae4c3fad78cd32df65311f48a1cd55dcc8d1d6b99f649cd4ca93893de] <==
	[INFO] 10.244.0.5:51161 - 56746 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000030131s
	[INFO] 10.244.0.5:59845 - 51818 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000171416s
	[INFO] 10.244.0.5:59845 - 28005 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00029495s
	[INFO] 10.244.0.5:48681 - 19317 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000123377s
	[INFO] 10.244.0.5:48681 - 63336 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000044081s
	[INFO] 10.244.0.5:58061 - 30895 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00008088s
	[INFO] 10.244.0.5:58061 - 32689 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000035396s
	[INFO] 10.244.0.5:38087 - 48114 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000035784s
	[INFO] 10.244.0.5:38087 - 54000 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000095969s
	[INFO] 10.244.0.5:49683 - 11480 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000140959s
	[INFO] 10.244.0.5:49683 - 23003 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000101135s
	[INFO] 10.244.0.5:43005 - 38126 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000081593s
	[INFO] 10.244.0.5:43005 - 47596 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000124387s
	[INFO] 10.244.0.5:55804 - 41138 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000171789s
	[INFO] 10.244.0.5:55804 - 44976 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000182833s
	[INFO] 10.244.0.5:43069 - 16307 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000089434s
	[INFO] 10.244.0.5:43069 - 51633 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000032833s
	[INFO] 10.244.0.21:46303 - 62968 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000606757s
	[INFO] 10.244.0.21:36097 - 35733 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000696905s
	[INFO] 10.244.0.21:56566 - 45315 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000136557s
	[INFO] 10.244.0.21:57939 - 56430 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000207858s
	[INFO] 10.244.0.21:51280 - 40828 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00010631s
	[INFO] 10.244.0.21:50116 - 49864 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000098666s
	[INFO] 10.244.0.21:45441 - 35920 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001078608s
	[INFO] 10.244.0.21:48980 - 17136 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.00159345s
	
	
	==> describe nodes <==
	Name:               addons-823099
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-823099
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c
	                    minikube.k8s.io/name=addons-823099
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T23_39_03_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-823099
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 23:39:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-823099
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 23:54:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 23:52:39 +0000   Mon, 23 Sep 2024 23:38:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 23:52:39 +0000   Mon, 23 Sep 2024 23:38:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 23:52:39 +0000   Mon, 23 Sep 2024 23:38:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 23:52:39 +0000   Mon, 23 Sep 2024 23:39:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.29
	  Hostname:    addons-823099
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 8a6fccd6b081441ba6dbe75955b7b20d
	  System UUID:                8a6fccd6-b081-441b-a6db-e75955b7b20d
	  Boot ID:                    cf9ab547-5350-4131-950e-b30d60dc335d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  default                     hello-world-app-55bf9c44b4-cpzkz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  gcp-auth                    gcp-auth-89d5ffd79-5p9gw                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 coredns-7c65d6cfc9-h4m6q                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     15m
	  kube-system                 etcd-addons-823099                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         15m
	  kube-system                 kube-apiserver-addons-823099             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-addons-823099    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-pgclm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-addons-823099             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m (x6 over 15m)  kubelet          Node addons-823099 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x6 over 15m)  kubelet          Node addons-823099 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x5 over 15m)  kubelet          Node addons-823099 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node addons-823099 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node addons-823099 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node addons-823099 status is now: NodeHasSufficientPID
	  Normal  NodeReady                15m                kubelet          Node addons-823099 status is now: NodeReady
	  Normal  RegisteredNode           15m                node-controller  Node addons-823099 event: Registered Node addons-823099 in Controller
	
	
	==> dmesg <==
	[  +5.264434] kauditd_printk_skb: 126 callbacks suppressed
	[  +5.568231] kauditd_printk_skb: 64 callbacks suppressed
	[Sep23 23:40] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.362606] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.102278] kauditd_printk_skb: 33 callbacks suppressed
	[  +8.160095] kauditd_printk_skb: 56 callbacks suppressed
	[  +7.116043] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.170005] kauditd_printk_skb: 39 callbacks suppressed
	[  +7.039714] kauditd_printk_skb: 15 callbacks suppressed
	[Sep23 23:41] kauditd_printk_skb: 28 callbacks suppressed
	[Sep23 23:43] kauditd_printk_skb: 28 callbacks suppressed
	[Sep23 23:46] kauditd_printk_skb: 28 callbacks suppressed
	[Sep23 23:48] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.347854] kauditd_printk_skb: 6 callbacks suppressed
	[Sep23 23:49] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.843786] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.530829] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.922969] kauditd_printk_skb: 29 callbacks suppressed
	[  +9.060717] kauditd_printk_skb: 15 callbacks suppressed
	[  +6.416967] kauditd_printk_skb: 2 callbacks suppressed
	[ +21.356928] kauditd_printk_skb: 15 callbacks suppressed
	[Sep23 23:50] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.047412] kauditd_printk_skb: 10 callbacks suppressed
	[Sep23 23:52] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.376350] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [9f9a68d35a007d5e1022596b9270e5f5f9735806aa3bfb8c01b9c7eca1ee01d7] <==
	{"level":"info","ts":"2024-09-23T23:40:28.672067Z","caller":"traceutil/trace.go:171","msg":"trace[1280673185] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1079; }","duration":"266.654436ms","start":"2024-09-23T23:40:28.405402Z","end":"2024-09-23T23:40:28.672056Z","steps":["trace[1280673185] 'range keys from in-memory index tree'  (duration: 266.517094ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T23:40:34.016260Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.157586ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T23:40:34.016381Z","caller":"traceutil/trace.go:171","msg":"trace[611313623] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1111; }","duration":"112.292285ms","start":"2024-09-23T23:40:33.904078Z","end":"2024-09-23T23:40:34.016370Z","steps":["trace[611313623] 'range keys from in-memory index tree'  (duration: 112.015896ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T23:48:51.932772Z","caller":"traceutil/trace.go:171","msg":"trace[938951164] linearizableReadLoop","detail":"{readStateIndex:2080; appliedIndex:2079; }","duration":"354.39626ms","start":"2024-09-23T23:48:51.578308Z","end":"2024-09-23T23:48:51.932704Z","steps":["trace[938951164] 'read index received'  (duration: 354.29951ms)","trace[938951164] 'applied index is now lower than readState.Index'  (duration: 95.406µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-23T23:48:51.932778Z","caller":"traceutil/trace.go:171","msg":"trace[488687598] transaction","detail":"{read_only:false; response_revision:1941; number_of_response:1; }","duration":"381.82676ms","start":"2024-09-23T23:48:51.550869Z","end":"2024-09-23T23:48:51.932696Z","steps":["trace[488687598] 'process raft request'  (duration: 381.708173ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T23:48:51.933377Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T23:48:51.550851Z","time spent":"382.387698ms","remote":"127.0.0.1:42030","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1940 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-09-23T23:48:51.933874Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"355.560182ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/gadget\" ","response":"range_response_count:1 size:573"}
	{"level":"info","ts":"2024-09-23T23:48:51.934178Z","caller":"traceutil/trace.go:171","msg":"trace[470184168] range","detail":"{range_begin:/registry/namespaces/gadget; range_end:; response_count:1; response_revision:1941; }","duration":"355.861174ms","start":"2024-09-23T23:48:51.578304Z","end":"2024-09-23T23:48:51.934165Z","steps":["trace[470184168] 'agreement among raft nodes before linearized reading'  (duration: 355.490488ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T23:48:51.934287Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T23:48:51.578272Z","time spent":"356.004044ms","remote":"127.0.0.1:41984","response type":"/etcdserverpb.KV/Range","request count":0,"request size":29,"response count":1,"response size":597,"request content":"key:\"/registry/namespaces/gadget\" "}
	{"level":"warn","ts":"2024-09-23T23:48:51.934489Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"218.892301ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges/\" range_end:\"/registry/limitranges0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T23:48:51.937084Z","caller":"traceutil/trace.go:171","msg":"trace[537662719] range","detail":"{range_begin:/registry/limitranges/; range_end:/registry/limitranges0; response_count:0; response_revision:1941; }","duration":"222.90761ms","start":"2024-09-23T23:48:51.714161Z","end":"2024-09-23T23:48:51.937069Z","steps":["trace[537662719] 'agreement among raft nodes before linearized reading'  (duration: 218.829971ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T23:48:51.934806Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.030994ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T23:48:51.937364Z","caller":"traceutil/trace.go:171","msg":"trace[1456298499] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1941; }","duration":"140.590398ms","start":"2024-09-23T23:48:51.796765Z","end":"2024-09-23T23:48:51.937356Z","steps":["trace[1456298499] 'agreement among raft nodes before linearized reading'  (duration: 138.021442ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T23:48:58.904119Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1508}
	{"level":"info","ts":"2024-09-23T23:48:58.947160Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1508,"took":"42.440235ms","hash":2968136522,"current-db-size-bytes":6422528,"current-db-size":"6.4 MB","current-db-size-in-use-bytes":3551232,"current-db-size-in-use":"3.6 MB"}
	{"level":"info","ts":"2024-09-23T23:48:58.947271Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2968136522,"revision":1508,"compact-revision":-1}
	{"level":"info","ts":"2024-09-23T23:50:01.039784Z","caller":"traceutil/trace.go:171","msg":"trace[1259820916] linearizableReadLoop","detail":"{readStateIndex:2578; appliedIndex:2577; }","duration":"221.284347ms","start":"2024-09-23T23:50:00.818434Z","end":"2024-09-23T23:50:01.039718Z","steps":["trace[1259820916] 'read index received'  (duration: 221.162419ms)","trace[1259820916] 'applied index is now lower than readState.Index'  (duration: 121.469µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-23T23:50:01.040067Z","caller":"traceutil/trace.go:171","msg":"trace[1983157388] transaction","detail":"{read_only:false; response_revision:2414; number_of_response:1; }","duration":"254.058116ms","start":"2024-09-23T23:50:00.785999Z","end":"2024-09-23T23:50:01.040058Z","steps":["trace[1983157388] 'process raft request'  (duration: 253.637047ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T23:50:01.040336Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"221.886806ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/rolebindings/kube-system/csi-hostpathplugin-health-monitor-role\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T23:50:01.040368Z","caller":"traceutil/trace.go:171","msg":"trace[731576186] range","detail":"{range_begin:/registry/rolebindings/kube-system/csi-hostpathplugin-health-monitor-role; range_end:; response_count:0; response_revision:2414; }","duration":"221.941895ms","start":"2024-09-23T23:50:00.818418Z","end":"2024-09-23T23:50:01.040359Z","steps":["trace[731576186] 'agreement among raft nodes before linearized reading'  (duration: 221.835534ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T23:50:01.040472Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"194.507458ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T23:50:01.040485Z","caller":"traceutil/trace.go:171","msg":"trace[587039362] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2414; }","duration":"194.522207ms","start":"2024-09-23T23:50:00.845959Z","end":"2024-09-23T23:50:01.040481Z","steps":["trace[587039362] 'agreement among raft nodes before linearized reading'  (duration: 194.501032ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T23:53:58.911036Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2005}
	{"level":"info","ts":"2024-09-23T23:53:58.933462Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":2005,"took":"21.705166ms","hash":3018657637,"current-db-size-bytes":6422528,"current-db-size":"6.4 MB","current-db-size-in-use-bytes":4706304,"current-db-size-in-use":"4.7 MB"}
	{"level":"info","ts":"2024-09-23T23:53:58.933565Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3018657637,"revision":2005,"compact-revision":1508}
	
	
	==> gcp-auth [74b1f1c0ea595ad9a254db104eeae56801bee662d3a36f586d4eadc290bd61ab] <==
	2024/09/23 23:40:43 Ready to write response ...
	2024/09/23 23:40:43 Ready to marshal response ...
	2024/09/23 23:40:43 Ready to write response ...
	2024/09/23 23:48:46 Ready to marshal response ...
	2024/09/23 23:48:46 Ready to write response ...
	2024/09/23 23:48:46 Ready to marshal response ...
	2024/09/23 23:48:46 Ready to write response ...
	2024/09/23 23:48:46 Ready to marshal response ...
	2024/09/23 23:48:46 Ready to write response ...
	2024/09/23 23:48:57 Ready to marshal response ...
	2024/09/23 23:48:57 Ready to write response ...
	2024/09/23 23:49:08 Ready to marshal response ...
	2024/09/23 23:49:08 Ready to write response ...
	2024/09/23 23:49:08 Ready to marshal response ...
	2024/09/23 23:49:08 Ready to write response ...
	2024/09/23 23:49:19 Ready to marshal response ...
	2024/09/23 23:49:19 Ready to write response ...
	2024/09/23 23:49:28 Ready to marshal response ...
	2024/09/23 23:49:28 Ready to write response ...
	2024/09/23 23:49:49 Ready to marshal response ...
	2024/09/23 23:49:49 Ready to write response ...
	2024/09/23 23:50:01 Ready to marshal response ...
	2024/09/23 23:50:01 Ready to write response ...
	2024/09/23 23:52:24 Ready to marshal response ...
	2024/09/23 23:52:24 Ready to write response ...
	
	
	==> kernel <==
	 23:54:34 up 16 min,  0 users,  load average: 0.15, 0.28, 0.33
	Linux addons-823099 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [474072cb31ae52ea361c41a97e7a53faf47c3b8ab138749903f3d96750c6fbe2] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0923 23:40:43.868287       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.117.164:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.117.164:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.117.164:443: connect: connection refused" logger="UnhandledError"
	I0923 23:40:43.895370       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0923 23:48:46.697802       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.105.162.121"}
	I0923 23:48:52.010546       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0923 23:48:53.182523       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0923 23:49:35.935062       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0923 23:49:42.049169       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0923 23:50:00.784922       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0923 23:50:01.287315       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.155.191"}
	I0923 23:50:06.825652       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 23:50:06.825802       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 23:50:06.916381       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 23:50:06.916612       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 23:50:06.930144       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 23:50:06.930205       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 23:50:06.977265       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 23:50:06.977309       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 23:50:07.004023       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 23:50:07.004068       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0923 23:50:07.979088       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0923 23:50:08.002119       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0923 23:50:08.006494       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0923 23:52:24.308097       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.245.50"}
	
	
	==> kube-controller-manager [f68819f7bf59d41865dee2cade7e270c9133c2249756217428544bee43d41ba6] <==
	W0923 23:52:30.936834       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 23:52:30.936887       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 23:52:36.750862       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	I0923 23:52:39.409146       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-823099"
	W0923 23:52:52.943504       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 23:52:52.943630       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 23:52:55.000389       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 23:52:55.000541       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 23:53:02.301406       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 23:53:02.301582       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 23:53:21.053385       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 23:53:21.053462       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 23:53:27.974700       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 23:53:27.974805       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 23:53:41.915709       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 23:53:41.915951       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 23:53:49.335852       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 23:53:49.336037       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 23:54:03.772399       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 23:54:03.772557       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 23:54:12.776897       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 23:54:12.776958       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 23:54:25.940292       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 23:54:25.940504       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 23:54:33.231536       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="13.347µs"
	
	
	==> kube-proxy [8a92c92c6afdd44516af6d2f0c2ba0c60c100397592f176560d683b0e5c58bbd] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0923 23:39:10.263461       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0923 23:39:10.290295       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.29"]
	E0923 23:39:10.290387       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 23:39:10.374009       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0923 23:39:10.374057       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0923 23:39:10.374082       1 server_linux.go:169] "Using iptables Proxier"
	I0923 23:39:10.378689       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 23:39:10.379053       1 server.go:483] "Version info" version="v1.31.1"
	I0923 23:39:10.379077       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 23:39:10.380385       1 config.go:199] "Starting service config controller"
	I0923 23:39:10.380428       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 23:39:10.380516       1 config.go:105] "Starting endpoint slice config controller"
	I0923 23:39:10.380522       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 23:39:10.381090       1 config.go:328] "Starting node config controller"
	I0923 23:39:10.381097       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 23:39:10.480784       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 23:39:10.480823       1 shared_informer.go:320] Caches are synced for service config
	I0923 23:39:10.481148       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [61a194a33123eba4aa22b6f557d4ea66df750535623ed92cd3efa6db3df98960] <==
	W0923 23:39:00.991081       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 23:39:00.991286       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 23:39:00.992946       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0923 23:39:00.993078       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 23:39:01.018368       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0923 23:39:01.018501       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 23:39:01.040390       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0923 23:39:01.040489       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 23:39:01.048983       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0923 23:39:01.049065       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 23:39:01.052890       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0923 23:39:01.053031       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 23:39:01.108077       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0923 23:39:01.108124       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 23:39:01.219095       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0923 23:39:01.219241       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 23:39:01.237429       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0923 23:39:01.237504       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 23:39:01.286444       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0923 23:39:01.286579       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 23:39:01.476657       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 23:39:01.476716       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0923 23:39:01.491112       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 23:39:01.491224       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0923 23:39:03.306204       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 23 23:53:53 addons-823099 kubelet[1203]: E0923 23:53:53.089862    1203 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727135633088278745,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 23:53:53 addons-823099 kubelet[1203]: E0923 23:53:53.089953    1203 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727135633088278745,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 23:54:02 addons-823099 kubelet[1203]: E0923 23:54:02.727822    1203 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 23 23:54:02 addons-823099 kubelet[1203]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 23 23:54:02 addons-823099 kubelet[1203]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 23 23:54:02 addons-823099 kubelet[1203]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 23 23:54:02 addons-823099 kubelet[1203]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 23 23:54:03 addons-823099 kubelet[1203]: E0923 23:54:03.092279    1203 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727135643091815911,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 23:54:03 addons-823099 kubelet[1203]: E0923 23:54:03.092304    1203 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727135643091815911,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 23:54:04 addons-823099 kubelet[1203]: E0923 23:54:04.714145    1203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="764b3703-5e4f-45c1-941e-d137062ab058"
	Sep 23 23:54:13 addons-823099 kubelet[1203]: E0923 23:54:13.094786    1203 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727135653094217892,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 23:54:13 addons-823099 kubelet[1203]: E0923 23:54:13.094839    1203 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727135653094217892,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 23:54:18 addons-823099 kubelet[1203]: E0923 23:54:18.714630    1203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="764b3703-5e4f-45c1-941e-d137062ab058"
	Sep 23 23:54:23 addons-823099 kubelet[1203]: E0923 23:54:23.098081    1203 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727135663097451276,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 23:54:23 addons-823099 kubelet[1203]: E0923 23:54:23.098117    1203 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727135663097451276,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 23:54:32 addons-823099 kubelet[1203]: E0923 23:54:32.714255    1203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="764b3703-5e4f-45c1-941e-d137062ab058"
	Sep 23 23:54:33 addons-823099 kubelet[1203]: E0923 23:54:33.102233    1203 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727135673101536284,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 23:54:33 addons-823099 kubelet[1203]: E0923 23:54:33.102286    1203 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727135673101536284,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559238,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 23:54:33 addons-823099 kubelet[1203]: I0923 23:54:33.256821    1203 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-cpzkz" podStartSLOduration=126.962335542 podStartE2EDuration="2m9.256788061s" podCreationTimestamp="2024-09-23 23:52:24 +0000 UTC" firstStartedPulling="2024-09-23 23:52:24.689411896 +0000 UTC m=+802.094249213" lastFinishedPulling="2024-09-23 23:52:26.983864405 +0000 UTC m=+804.388701732" observedRunningTime="2024-09-23 23:52:27.942721242 +0000 UTC m=+805.347558580" watchObservedRunningTime="2024-09-23 23:54:33.256788061 +0000 UTC m=+930.661625396"
	Sep 23 23:54:34 addons-823099 kubelet[1203]: I0923 23:54:34.606137    1203 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d5937c63-7f30-477a-a36e-e7e6cb8c64e5-tmp-dir\") pod \"d5937c63-7f30-477a-a36e-e7e6cb8c64e5\" (UID: \"d5937c63-7f30-477a-a36e-e7e6cb8c64e5\") "
	Sep 23 23:54:34 addons-823099 kubelet[1203]: I0923 23:54:34.606184    1203 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfs6j\" (UniqueName: \"kubernetes.io/projected/d5937c63-7f30-477a-a36e-e7e6cb8c64e5-kube-api-access-zfs6j\") pod \"d5937c63-7f30-477a-a36e-e7e6cb8c64e5\" (UID: \"d5937c63-7f30-477a-a36e-e7e6cb8c64e5\") "
	Sep 23 23:54:34 addons-823099 kubelet[1203]: I0923 23:54:34.606579    1203 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5937c63-7f30-477a-a36e-e7e6cb8c64e5-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "d5937c63-7f30-477a-a36e-e7e6cb8c64e5" (UID: "d5937c63-7f30-477a-a36e-e7e6cb8c64e5"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Sep 23 23:54:34 addons-823099 kubelet[1203]: I0923 23:54:34.612549    1203 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5937c63-7f30-477a-a36e-e7e6cb8c64e5-kube-api-access-zfs6j" (OuterVolumeSpecName: "kube-api-access-zfs6j") pod "d5937c63-7f30-477a-a36e-e7e6cb8c64e5" (UID: "d5937c63-7f30-477a-a36e-e7e6cb8c64e5"). InnerVolumeSpecName "kube-api-access-zfs6j". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 23:54:34 addons-823099 kubelet[1203]: I0923 23:54:34.706838    1203 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-zfs6j\" (UniqueName: \"kubernetes.io/projected/d5937c63-7f30-477a-a36e-e7e6cb8c64e5-kube-api-access-zfs6j\") on node \"addons-823099\" DevicePath \"\""
	Sep 23 23:54:34 addons-823099 kubelet[1203]: I0923 23:54:34.706871    1203 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d5937c63-7f30-477a-a36e-e7e6cb8c64e5-tmp-dir\") on node \"addons-823099\" DevicePath \"\""
	
	
	==> storage-provisioner [9490eb926210d595e48349ae8ba44feb029a56e6c83d0e8f8cfad8e8c1d9196b] <==
	I0923 23:39:15.614353       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0923 23:39:15.653018       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0923 23:39:15.653076       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0923 23:39:15.688317       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0923 23:39:15.688955       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"68f268eb-f84e-4f3d-800b-baa6449c8a15", APIVersion:"v1", ResourceVersion:"700", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-823099_ffe49a31-62a1-4931-9d5c-b17e459b44c9 became leader
	I0923 23:39:15.689716       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-823099_ffe49a31-62a1-4931-9d5c-b17e459b44c9!
	I0923 23:39:15.790555       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-823099_ffe49a31-62a1-4931-9d5c-b17e459b44c9!
	

-- /stdout --
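The storage-provisioner log above shows the standard Kubernetes leader-election handshake: the process acquires the kube-system/k8s.io-minikube-hostpath lock and only then starts its provisioner controller. Below is a minimal client-go sketch of that pattern, not minikube's actual code; it uses a Lease-based lock even though the event in the log references an Endpoints object, and the kubeconfig path and timing values are illustrative assumptions.

package main

import (
	"context"
	"log"
	"os"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	// Kubeconfig-based client for the sketch; the real pod would use in-cluster config.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	id, _ := os.Hostname()
	// Same namespace/name as the lease in the log; Lease lock type and timings are
	// illustrative choices, not minikube's.
	lock, err := resourcelock.New(resourcelock.LeasesResourceLock,
		"kube-system", "k8s.io-minikube-hostpath",
		cs.CoreV1(), cs.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: id})
	if err != nil {
		log.Fatal(err)
	}
	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// Only the current leader runs the controller, which is why the log
				// prints "became leader" before "Starting provisioner controller".
				log.Println("acquired lease, starting provisioner controller")
			},
			OnStoppedLeading: func() {
				log.Println("lost lease, stopping")
			},
		},
	})
}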
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-823099 -n addons-823099
helpers_test.go:261: (dbg) Run:  kubectl --context addons-823099 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox metrics-server-84c5f94fbc-gpzsm
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-823099 describe pod busybox metrics-server-84c5f94fbc-gpzsm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-823099 describe pod busybox metrics-server-84c5f94fbc-gpzsm: exit status 1 (67.213941ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-823099/192.168.39.29
	Start Time:       Mon, 23 Sep 2024 23:40:43 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.22
	IPs:
	  IP:  10.244.0.22
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nvbxz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-nvbxz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  13m                   default-scheduler  Successfully assigned default/busybox to addons-823099
	  Normal   Pulling    12m (x4 over 13m)     kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     12m (x4 over 13m)     kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     12m (x4 over 13m)     kubelet            Error: ErrImagePull
	  Warning  Failed     12m (x6 over 13m)     kubelet            Error: ImagePullBackOff
	  Normal   BackOff    3m39s (x42 over 13m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "metrics-server-84c5f94fbc-gpzsm" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-823099 describe pod busybox metrics-server-84c5f94fbc-gpzsm: exit status 1
--- FAIL: TestAddons/parallel/MetricsServer (349.62s)
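The post-mortem above locates the stuck pods with a kubectl field selector (status.phase!=Running) and then describes them; the describe output points at the underlying problem, an auth failure pulling gcr.io/k8s-minikube/busybox:1.28.4-glibc. For reference, the same non-running-pods query can be issued from Go with client-go. This is a small sketch, not part of the test suite, and the kubeconfig path it loads is an assumption standing in for whatever --context addons-823099 resolves to.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the default kubeconfig (assumed path).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same query the post-mortem helper issues through kubectl:
	// every pod, in every namespace, whose phase is not Running.
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s\t%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}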

x
+
TestMultiControlPlane/serial/StopSecondaryNode (141.52s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 node stop m02 -v=7 --alsologtostderr
E0924 00:04:19.338925   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/functional-666615/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:05:00.300639   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/functional-666615/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:05:43.332941   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-959539 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.49433196s)

-- stdout --
	* Stopping node "ha-959539-m02"  ...

-- /stdout --
** stderr ** 
	I0924 00:04:13.158347   30312 out.go:345] Setting OutFile to fd 1 ...
	I0924 00:04:13.158675   30312 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:04:13.158692   30312 out.go:358] Setting ErrFile to fd 2...
	I0924 00:04:13.158714   30312 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:04:13.159119   30312 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7623/.minikube/bin
	I0924 00:04:13.159380   30312 mustload.go:65] Loading cluster: ha-959539
	I0924 00:04:13.159765   30312 config.go:182] Loaded profile config "ha-959539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 00:04:13.159785   30312 stop.go:39] StopHost: ha-959539-m02
	I0924 00:04:13.160186   30312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:04:13.160229   30312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:04:13.176709   30312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34071
	I0924 00:04:13.177396   30312 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:04:13.177975   30312 main.go:141] libmachine: Using API Version  1
	I0924 00:04:13.177999   30312 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:04:13.178426   30312 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:04:13.180583   30312 out.go:177] * Stopping node "ha-959539-m02"  ...
	I0924 00:04:13.181692   30312 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0924 00:04:13.181732   30312 main.go:141] libmachine: (ha-959539-m02) Calling .DriverName
	I0924 00:04:13.181966   30312 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0924 00:04:13.182000   30312 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHHostname
	I0924 00:04:13.184948   30312 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:04:13.185375   30312 main.go:141] libmachine: (ha-959539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:17:08", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:00:29 +0000 UTC Type:0 Mac:52:54:00:7e:17:08 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ha-959539-m02 Clientid:01:52:54:00:7e:17:08}
	I0924 00:04:13.185403   30312 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:04:13.185619   30312 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHPort
	I0924 00:04:13.185836   30312 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHKeyPath
	I0924 00:04:13.185999   30312 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHUsername
	I0924 00:04:13.186133   30312 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m02/id_rsa Username:docker}
	I0924 00:04:13.278935   30312 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0924 00:04:13.333290   30312 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0924 00:04:13.387233   30312 main.go:141] libmachine: Stopping "ha-959539-m02"...
	I0924 00:04:13.387260   30312 main.go:141] libmachine: (ha-959539-m02) Calling .GetState
	I0924 00:04:13.389087   30312 main.go:141] libmachine: (ha-959539-m02) Calling .Stop
	I0924 00:04:13.393112   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 0/120
	I0924 00:04:14.394824   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 1/120
	I0924 00:04:15.396314   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 2/120
	I0924 00:04:16.397498   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 3/120
	I0924 00:04:17.399001   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 4/120
	I0924 00:04:18.401556   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 5/120
	I0924 00:04:19.403078   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 6/120
	I0924 00:04:20.404617   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 7/120
	I0924 00:04:21.406119   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 8/120
	I0924 00:04:22.407921   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 9/120
	I0924 00:04:23.409497   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 10/120
	I0924 00:04:24.411470   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 11/120
	I0924 00:04:25.413687   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 12/120
	I0924 00:04:26.415221   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 13/120
	I0924 00:04:27.416655   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 14/120
	I0924 00:04:28.418970   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 15/120
	I0924 00:04:29.420850   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 16/120
	I0924 00:04:30.422793   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 17/120
	I0924 00:04:31.425131   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 18/120
	I0924 00:04:32.427142   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 19/120
	I0924 00:04:33.429413   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 20/120
	I0924 00:04:34.430855   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 21/120
	I0924 00:04:35.432314   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 22/120
	I0924 00:04:36.434003   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 23/120
	I0924 00:04:37.435821   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 24/120
	I0924 00:04:38.437951   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 25/120
	I0924 00:04:39.439306   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 26/120
	I0924 00:04:40.441873   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 27/120
	I0924 00:04:41.443377   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 28/120
	I0924 00:04:42.444882   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 29/120
	I0924 00:04:43.447378   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 30/120
	I0924 00:04:44.448658   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 31/120
	I0924 00:04:45.450099   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 32/120
	I0924 00:04:46.452580   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 33/120
	I0924 00:04:47.454177   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 34/120
	I0924 00:04:48.456154   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 35/120
	I0924 00:04:49.457692   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 36/120
	I0924 00:04:50.459163   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 37/120
	I0924 00:04:51.460645   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 38/120
	I0924 00:04:52.462675   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 39/120
	I0924 00:04:53.464945   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 40/120
	I0924 00:04:54.467219   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 41/120
	I0924 00:04:55.468641   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 42/120
	I0924 00:04:56.470958   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 43/120
	I0924 00:04:57.472843   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 44/120
	I0924 00:04:58.475019   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 45/120
	I0924 00:04:59.476522   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 46/120
	I0924 00:05:00.478823   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 47/120
	I0924 00:05:01.480182   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 48/120
	I0924 00:05:02.481644   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 49/120
	I0924 00:05:03.483934   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 50/120
	I0924 00:05:04.486275   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 51/120
	I0924 00:05:05.487678   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 52/120
	I0924 00:05:06.489146   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 53/120
	I0924 00:05:07.490690   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 54/120
	I0924 00:05:08.492684   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 55/120
	I0924 00:05:09.495062   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 56/120
	I0924 00:05:10.496434   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 57/120
	I0924 00:05:11.497977   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 58/120
	I0924 00:05:12.500221   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 59/120
	I0924 00:05:13.502556   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 60/120
	I0924 00:05:14.504041   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 61/120
	I0924 00:05:15.505789   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 62/120
	I0924 00:05:16.507340   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 63/120
	I0924 00:05:17.508604   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 64/120
	I0924 00:05:18.510841   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 65/120
	I0924 00:05:19.512823   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 66/120
	I0924 00:05:20.515050   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 67/120
	I0924 00:05:21.516894   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 68/120
	I0924 00:05:22.518381   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 69/120
	I0924 00:05:23.520662   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 70/120
	I0924 00:05:24.521998   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 71/120
	I0924 00:05:25.524153   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 72/120
	I0924 00:05:26.525597   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 73/120
	I0924 00:05:27.527162   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 74/120
	I0924 00:05:28.529817   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 75/120
	I0924 00:05:29.531320   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 76/120
	I0924 00:05:30.532825   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 77/120
	I0924 00:05:31.534967   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 78/120
	I0924 00:05:32.536364   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 79/120
	I0924 00:05:33.538688   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 80/120
	I0924 00:05:34.540137   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 81/120
	I0924 00:05:35.541506   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 82/120
	I0924 00:05:36.543040   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 83/120
	I0924 00:05:37.544507   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 84/120
	I0924 00:05:38.546538   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 85/120
	I0924 00:05:39.547900   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 86/120
	I0924 00:05:40.549293   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 87/120
	I0924 00:05:41.550899   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 88/120
	I0924 00:05:42.552782   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 89/120
	I0924 00:05:43.555061   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 90/120
	I0924 00:05:44.556309   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 91/120
	I0924 00:05:45.557842   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 92/120
	I0924 00:05:46.559429   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 93/120
	I0924 00:05:47.560999   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 94/120
	I0924 00:05:48.562913   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 95/120
	I0924 00:05:49.564587   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 96/120
	I0924 00:05:50.567111   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 97/120
	I0924 00:05:51.568783   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 98/120
	I0924 00:05:52.571325   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 99/120
	I0924 00:05:53.573000   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 100/120
	I0924 00:05:54.574906   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 101/120
	I0924 00:05:55.576286   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 102/120
	I0924 00:05:56.577691   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 103/120
	I0924 00:05:57.579351   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 104/120
	I0924 00:05:58.581537   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 105/120
	I0924 00:05:59.583226   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 106/120
	I0924 00:06:00.584649   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 107/120
	I0924 00:06:01.587381   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 108/120
	I0924 00:06:02.588760   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 109/120
	I0924 00:06:03.591112   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 110/120
	I0924 00:06:04.592833   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 111/120
	I0924 00:06:05.594782   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 112/120
	I0924 00:06:06.596388   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 113/120
	I0924 00:06:07.598867   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 114/120
	I0924 00:06:08.600897   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 115/120
	I0924 00:06:09.603020   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 116/120
	I0924 00:06:10.604588   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 117/120
	I0924 00:06:11.605992   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 118/120
	I0924 00:06:12.607413   30312 main.go:141] libmachine: (ha-959539-m02) Waiting for machine to stop 119/120
	I0924 00:06:13.608529   30312 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0924 00:06:13.608652   30312 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-959539 node stop m02 -v=7 --alsologtostderr": exit status 30
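The exit status 30 above comes from the wait loop visible in the stderr log: libmachine polls the VM state once per second, gives up after 120 attempts, and reports the machine still "Running". Below is a minimal sketch of that polling pattern; waitForStop and getState are hypothetical stand-ins for the kvm2 driver's Stop/GetState calls, and the demo shrinks the interval so it returns quickly instead of taking two minutes.

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForStop polls getState until the machine leaves the "Running" state or
// the attempt budget is exhausted, which is the shape of the loop logged above
// ("Waiting for machine to stop N/120").
func waitForStop(getState func() (string, error), attempts int, interval time.Duration) error {
	for i := 0; i < attempts; i++ {
		state, err := getState()
		if err != nil {
			return err
		}
		if state != "Running" {
			return nil // reached Stopped or another terminal state
		}
		time.Sleep(interval)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// A state source that never stops reproduces the failure above; the real
	// loop uses 120 one-second attempts, shortened here for the demo.
	alwaysRunning := func() (string, error) { return "Running", nil }
	if err := waitForStop(alwaysRunning, 120, 10*time.Millisecond); err != nil {
		fmt.Println("stop err:", err)
	}
}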
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 status -v=7 --alsologtostderr
E0924 00:06:22.224471   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/functional-666615/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Done: out/minikube-linux-amd64 -p ha-959539 status -v=7 --alsologtostderr: (18.72823581s)
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-959539 status -v=7 --alsologtostderr": 
ha_test.go:378: status says not three hosts are running: args "out/minikube-linux-amd64 -p ha-959539 status -v=7 --alsologtostderr": 
ha_test.go:381: status says not three kubelets are running: args "out/minikube-linux-amd64 -p ha-959539 status -v=7 --alsologtostderr": 
ha_test.go:384: status says not two apiservers are running: args "out/minikube-linux-amd64 -p ha-959539 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-959539 -n ha-959539
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-959539 logs -n 25: (1.424772556s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-959539 cp ha-959539-m03:/home/docker/cp-test.txt                              | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4152452105/001/cp-test_ha-959539-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n                                                                 | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-959539 cp ha-959539-m03:/home/docker/cp-test.txt                              | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539:/home/docker/cp-test_ha-959539-m03_ha-959539.txt                       |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n                                                                 | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n ha-959539 sudo cat                                              | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | /home/docker/cp-test_ha-959539-m03_ha-959539.txt                                 |           |         |         |                     |                     |
	| cp      | ha-959539 cp ha-959539-m03:/home/docker/cp-test.txt                              | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m02:/home/docker/cp-test_ha-959539-m03_ha-959539-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n                                                                 | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n ha-959539-m02 sudo cat                                          | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | /home/docker/cp-test_ha-959539-m03_ha-959539-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-959539 cp ha-959539-m03:/home/docker/cp-test.txt                              | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m04:/home/docker/cp-test_ha-959539-m03_ha-959539-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n                                                                 | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n ha-959539-m04 sudo cat                                          | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | /home/docker/cp-test_ha-959539-m03_ha-959539-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-959539 cp testdata/cp-test.txt                                                | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n                                                                 | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-959539 cp ha-959539-m04:/home/docker/cp-test.txt                              | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4152452105/001/cp-test_ha-959539-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n                                                                 | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-959539 cp ha-959539-m04:/home/docker/cp-test.txt                              | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539:/home/docker/cp-test_ha-959539-m04_ha-959539.txt                       |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n                                                                 | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n ha-959539 sudo cat                                              | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | /home/docker/cp-test_ha-959539-m04_ha-959539.txt                                 |           |         |         |                     |                     |
	| cp      | ha-959539 cp ha-959539-m04:/home/docker/cp-test.txt                              | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m02:/home/docker/cp-test_ha-959539-m04_ha-959539-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n                                                                 | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n ha-959539-m02 sudo cat                                          | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | /home/docker/cp-test_ha-959539-m04_ha-959539-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-959539 cp ha-959539-m04:/home/docker/cp-test.txt                              | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m03:/home/docker/cp-test_ha-959539-m04_ha-959539-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n                                                                 | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n ha-959539-m03 sudo cat                                          | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | /home/docker/cp-test_ha-959539-m04_ha-959539-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-959539 node stop m02 -v=7                                                     | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 23:59:26
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 23:59:26.807239   26218 out.go:345] Setting OutFile to fd 1 ...
	I0923 23:59:26.807515   26218 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 23:59:26.807525   26218 out.go:358] Setting ErrFile to fd 2...
	I0923 23:59:26.807529   26218 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 23:59:26.807708   26218 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7623/.minikube/bin
	I0923 23:59:26.808255   26218 out.go:352] Setting JSON to false
	I0923 23:59:26.809081   26218 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2511,"bootTime":1727133456,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 23:59:26.809190   26218 start.go:139] virtualization: kvm guest
	I0923 23:59:26.811490   26218 out.go:177] * [ha-959539] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 23:59:26.813253   26218 notify.go:220] Checking for updates...
	I0923 23:59:26.813308   26218 out.go:177]   - MINIKUBE_LOCATION=19696
	I0923 23:59:26.814742   26218 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 23:59:26.816098   26218 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0923 23:59:26.817558   26218 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7623/.minikube
	I0923 23:59:26.818772   26218 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 23:59:26.819994   26218 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 23:59:26.821406   26218 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 23:59:26.856627   26218 out.go:177] * Using the kvm2 driver based on user configuration
	I0923 23:59:26.857800   26218 start.go:297] selected driver: kvm2
	I0923 23:59:26.857813   26218 start.go:901] validating driver "kvm2" against <nil>
	I0923 23:59:26.857824   26218 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 23:59:26.858493   26218 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 23:59:26.858582   26218 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19696-7623/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0923 23:59:26.873962   26218 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0923 23:59:26.874005   26218 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 23:59:26.874238   26218 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 23:59:26.874272   26218 cni.go:84] Creating CNI manager for ""
	I0923 23:59:26.874317   26218 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0923 23:59:26.874326   26218 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0923 23:59:26.874369   26218 start.go:340] cluster config:
	{Name:ha-959539 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-959539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 23:59:26.874490   26218 iso.go:125] acquiring lock: {Name:mk8a983e49e920eac32e0f79c34143c3d478115e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 23:59:26.876392   26218 out.go:177] * Starting "ha-959539" primary control-plane node in "ha-959539" cluster
	I0923 23:59:26.877566   26218 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 23:59:26.877605   26218 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0923 23:59:26.877627   26218 cache.go:56] Caching tarball of preloaded images
	I0923 23:59:26.877724   26218 preload.go:172] Found /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0923 23:59:26.877737   26218 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0923 23:59:26.878058   26218 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/config.json ...
	I0923 23:59:26.878079   26218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/config.json: {Name:mkb5e645fc53383c85997a2cb75a196eaec42645 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:59:26.878228   26218 start.go:360] acquireMachinesLock for ha-959539: {Name:mkdd0eb053efc5a47fd01c8410d7f603dccb8d0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 23:59:26.878263   26218 start.go:364] duration metric: took 19.539µs to acquireMachinesLock for "ha-959539"
	I0923 23:59:26.878286   26218 start.go:93] Provisioning new machine with config: &{Name:ha-959539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-959539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 23:59:26.878346   26218 start.go:125] createHost starting for "" (driver="kvm2")
	I0923 23:59:26.879811   26218 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 23:59:26.879957   26218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:59:26.879996   26218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:59:26.894584   26218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39565
	I0923 23:59:26.895047   26218 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:59:26.895660   26218 main.go:141] libmachine: Using API Version  1
	I0923 23:59:26.895681   26218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:59:26.896020   26218 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:59:26.896226   26218 main.go:141] libmachine: (ha-959539) Calling .GetMachineName
	I0923 23:59:26.896388   26218 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0923 23:59:26.896534   26218 start.go:159] libmachine.API.Create for "ha-959539" (driver="kvm2")
	I0923 23:59:26.896578   26218 client.go:168] LocalClient.Create starting
	I0923 23:59:26.896605   26218 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem
	I0923 23:59:26.896637   26218 main.go:141] libmachine: Decoding PEM data...
	I0923 23:59:26.896658   26218 main.go:141] libmachine: Parsing certificate...
	I0923 23:59:26.896703   26218 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem
	I0923 23:59:26.896727   26218 main.go:141] libmachine: Decoding PEM data...
	I0923 23:59:26.896739   26218 main.go:141] libmachine: Parsing certificate...
	I0923 23:59:26.896757   26218 main.go:141] libmachine: Running pre-create checks...
	I0923 23:59:26.896765   26218 main.go:141] libmachine: (ha-959539) Calling .PreCreateCheck
	I0923 23:59:26.897146   26218 main.go:141] libmachine: (ha-959539) Calling .GetConfigRaw
	I0923 23:59:26.897553   26218 main.go:141] libmachine: Creating machine...
	I0923 23:59:26.897565   26218 main.go:141] libmachine: (ha-959539) Calling .Create
	I0923 23:59:26.897712   26218 main.go:141] libmachine: (ha-959539) Creating KVM machine...
	I0923 23:59:26.899261   26218 main.go:141] libmachine: (ha-959539) DBG | found existing default KVM network
	I0923 23:59:26.899973   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:26.899836   26241 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002111e0}
	I0923 23:59:26.900022   26218 main.go:141] libmachine: (ha-959539) DBG | created network xml: 
	I0923 23:59:26.900042   26218 main.go:141] libmachine: (ha-959539) DBG | <network>
	I0923 23:59:26.900051   26218 main.go:141] libmachine: (ha-959539) DBG |   <name>mk-ha-959539</name>
	I0923 23:59:26.900066   26218 main.go:141] libmachine: (ha-959539) DBG |   <dns enable='no'/>
	I0923 23:59:26.900077   26218 main.go:141] libmachine: (ha-959539) DBG |   
	I0923 23:59:26.900085   26218 main.go:141] libmachine: (ha-959539) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0923 23:59:26.900097   26218 main.go:141] libmachine: (ha-959539) DBG |     <dhcp>
	I0923 23:59:26.900105   26218 main.go:141] libmachine: (ha-959539) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0923 23:59:26.900116   26218 main.go:141] libmachine: (ha-959539) DBG |     </dhcp>
	I0923 23:59:26.900122   26218 main.go:141] libmachine: (ha-959539) DBG |   </ip>
	I0923 23:59:26.900132   26218 main.go:141] libmachine: (ha-959539) DBG |   
	I0923 23:59:26.900140   26218 main.go:141] libmachine: (ha-959539) DBG | </network>
	I0923 23:59:26.900211   26218 main.go:141] libmachine: (ha-959539) DBG | 
	I0923 23:59:26.905213   26218 main.go:141] libmachine: (ha-959539) DBG | trying to create private KVM network mk-ha-959539 192.168.39.0/24...
	I0923 23:59:26.977916   26218 main.go:141] libmachine: (ha-959539) Setting up store path in /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539 ...
	I0923 23:59:26.977955   26218 main.go:141] libmachine: (ha-959539) Building disk image from file:///home/jenkins/minikube-integration/19696-7623/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0923 23:59:26.977972   26218 main.go:141] libmachine: (ha-959539) DBG | private KVM network mk-ha-959539 192.168.39.0/24 created
	I0923 23:59:26.977988   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:26.977847   26241 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19696-7623/.minikube
	I0923 23:59:26.978009   26218 main.go:141] libmachine: (ha-959539) Downloading /home/jenkins/minikube-integration/19696-7623/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19696-7623/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0923 23:59:27.232339   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:27.232194   26241 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/id_rsa...
	I0923 23:59:27.673404   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:27.673251   26241 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/ha-959539.rawdisk...
	I0923 23:59:27.673433   26218 main.go:141] libmachine: (ha-959539) DBG | Writing magic tar header
	I0923 23:59:27.673445   26218 main.go:141] libmachine: (ha-959539) DBG | Writing SSH key tar header
	I0923 23:59:27.673465   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:27.673358   26241 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539 ...
	I0923 23:59:27.673485   26218 main.go:141] libmachine: (ha-959539) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539 (perms=drwx------)
	I0923 23:59:27.673503   26218 main.go:141] libmachine: (ha-959539) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539
	I0923 23:59:27.673514   26218 main.go:141] libmachine: (ha-959539) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623/.minikube/machines (perms=drwxr-xr-x)
	I0923 23:59:27.673524   26218 main.go:141] libmachine: (ha-959539) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623/.minikube (perms=drwxr-xr-x)
	I0923 23:59:27.673532   26218 main.go:141] libmachine: (ha-959539) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623 (perms=drwxrwxr-x)
	I0923 23:59:27.673541   26218 main.go:141] libmachine: (ha-959539) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0923 23:59:27.673551   26218 main.go:141] libmachine: (ha-959539) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623/.minikube/machines
	I0923 23:59:27.673563   26218 main.go:141] libmachine: (ha-959539) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623/.minikube
	I0923 23:59:27.673577   26218 main.go:141] libmachine: (ha-959539) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623
	I0923 23:59:27.673589   26218 main.go:141] libmachine: (ha-959539) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0923 23:59:27.673598   26218 main.go:141] libmachine: (ha-959539) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0923 23:59:27.673607   26218 main.go:141] libmachine: (ha-959539) Creating domain...
	I0923 23:59:27.673616   26218 main.go:141] libmachine: (ha-959539) DBG | Checking permissions on dir: /home/jenkins
	I0923 23:59:27.673623   26218 main.go:141] libmachine: (ha-959539) DBG | Checking permissions on dir: /home
	I0923 23:59:27.673640   26218 main.go:141] libmachine: (ha-959539) DBG | Skipping /home - not owner
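	The "Fixing permissions" / "Setting executable bit" lines walk every parent of the machine store path and make sure each directory the user owns is traversable, skipping directories it does not own (such as /home). A minimal Go sketch of that walk, with a hypothetical `ensureExecutable` helper and a made-up demo path:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// ensureExecutable walks from dir up toward the filesystem root and, for every
// directory it can modify, makes sure the owner-executable bit is set so the
// path stays traversable. Directories we do not own are skipped.
func ensureExecutable(dir string) error {
	for {
		info, err := os.Stat(dir)
		if err != nil {
			return err
		}
		mode := info.Mode()
		if mode&0o100 == 0 {
			if err := os.Chmod(dir, mode|0o100); err != nil {
				fmt.Printf("Skipping %s - not owner (%v)\n", dir, err)
			} else {
				fmt.Printf("Setting executable bit on %s\n", dir)
			}
		}
		parent := filepath.Dir(dir)
		if parent == dir {
			return nil // reached the root
		}
		dir = parent
	}
}

func main() {
	dir := "/tmp/minikube-demo/machines/demo" // hypothetical store path
	if err := os.MkdirAll(dir, 0o700); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if err := ensureExecutable(dir); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```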
	I0923 23:59:27.674680   26218 main.go:141] libmachine: (ha-959539) define libvirt domain using xml: 
	I0923 23:59:27.674695   26218 main.go:141] libmachine: (ha-959539) <domain type='kvm'>
	I0923 23:59:27.674701   26218 main.go:141] libmachine: (ha-959539)   <name>ha-959539</name>
	I0923 23:59:27.674705   26218 main.go:141] libmachine: (ha-959539)   <memory unit='MiB'>2200</memory>
	I0923 23:59:27.674740   26218 main.go:141] libmachine: (ha-959539)   <vcpu>2</vcpu>
	I0923 23:59:27.674764   26218 main.go:141] libmachine: (ha-959539)   <features>
	I0923 23:59:27.674777   26218 main.go:141] libmachine: (ha-959539)     <acpi/>
	I0923 23:59:27.674788   26218 main.go:141] libmachine: (ha-959539)     <apic/>
	I0923 23:59:27.674801   26218 main.go:141] libmachine: (ha-959539)     <pae/>
	I0923 23:59:27.674828   26218 main.go:141] libmachine: (ha-959539)     
	I0923 23:59:27.674851   26218 main.go:141] libmachine: (ha-959539)   </features>
	I0923 23:59:27.674870   26218 main.go:141] libmachine: (ha-959539)   <cpu mode='host-passthrough'>
	I0923 23:59:27.674879   26218 main.go:141] libmachine: (ha-959539)   
	I0923 23:59:27.674889   26218 main.go:141] libmachine: (ha-959539)   </cpu>
	I0923 23:59:27.674905   26218 main.go:141] libmachine: (ha-959539)   <os>
	I0923 23:59:27.674917   26218 main.go:141] libmachine: (ha-959539)     <type>hvm</type>
	I0923 23:59:27.674943   26218 main.go:141] libmachine: (ha-959539)     <boot dev='cdrom'/>
	I0923 23:59:27.674960   26218 main.go:141] libmachine: (ha-959539)     <boot dev='hd'/>
	I0923 23:59:27.674974   26218 main.go:141] libmachine: (ha-959539)     <bootmenu enable='no'/>
	I0923 23:59:27.674985   26218 main.go:141] libmachine: (ha-959539)   </os>
	I0923 23:59:27.674997   26218 main.go:141] libmachine: (ha-959539)   <devices>
	I0923 23:59:27.675009   26218 main.go:141] libmachine: (ha-959539)     <disk type='file' device='cdrom'>
	I0923 23:59:27.675024   26218 main.go:141] libmachine: (ha-959539)       <source file='/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/boot2docker.iso'/>
	I0923 23:59:27.675037   26218 main.go:141] libmachine: (ha-959539)       <target dev='hdc' bus='scsi'/>
	I0923 23:59:27.675049   26218 main.go:141] libmachine: (ha-959539)       <readonly/>
	I0923 23:59:27.675060   26218 main.go:141] libmachine: (ha-959539)     </disk>
	I0923 23:59:27.675075   26218 main.go:141] libmachine: (ha-959539)     <disk type='file' device='disk'>
	I0923 23:59:27.675088   26218 main.go:141] libmachine: (ha-959539)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0923 23:59:27.675111   26218 main.go:141] libmachine: (ha-959539)       <source file='/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/ha-959539.rawdisk'/>
	I0923 23:59:27.675127   26218 main.go:141] libmachine: (ha-959539)       <target dev='hda' bus='virtio'/>
	I0923 23:59:27.675141   26218 main.go:141] libmachine: (ha-959539)     </disk>
	I0923 23:59:27.675152   26218 main.go:141] libmachine: (ha-959539)     <interface type='network'>
	I0923 23:59:27.675165   26218 main.go:141] libmachine: (ha-959539)       <source network='mk-ha-959539'/>
	I0923 23:59:27.675175   26218 main.go:141] libmachine: (ha-959539)       <model type='virtio'/>
	I0923 23:59:27.675185   26218 main.go:141] libmachine: (ha-959539)     </interface>
	I0923 23:59:27.675192   26218 main.go:141] libmachine: (ha-959539)     <interface type='network'>
	I0923 23:59:27.675201   26218 main.go:141] libmachine: (ha-959539)       <source network='default'/>
	I0923 23:59:27.675206   26218 main.go:141] libmachine: (ha-959539)       <model type='virtio'/>
	I0923 23:59:27.675210   26218 main.go:141] libmachine: (ha-959539)     </interface>
	I0923 23:59:27.675217   26218 main.go:141] libmachine: (ha-959539)     <serial type='pty'>
	I0923 23:59:27.675222   26218 main.go:141] libmachine: (ha-959539)       <target port='0'/>
	I0923 23:59:27.675228   26218 main.go:141] libmachine: (ha-959539)     </serial>
	I0923 23:59:27.675247   26218 main.go:141] libmachine: (ha-959539)     <console type='pty'>
	I0923 23:59:27.675254   26218 main.go:141] libmachine: (ha-959539)       <target type='serial' port='0'/>
	I0923 23:59:27.675259   26218 main.go:141] libmachine: (ha-959539)     </console>
	I0923 23:59:27.675262   26218 main.go:141] libmachine: (ha-959539)     <rng model='virtio'>
	I0923 23:59:27.675273   26218 main.go:141] libmachine: (ha-959539)       <backend model='random'>/dev/random</backend>
	I0923 23:59:27.675279   26218 main.go:141] libmachine: (ha-959539)     </rng>
	I0923 23:59:27.675284   26218 main.go:141] libmachine: (ha-959539)     
	I0923 23:59:27.675289   26218 main.go:141] libmachine: (ha-959539)     
	I0923 23:59:27.675306   26218 main.go:141] libmachine: (ha-959539)   </devices>
	I0923 23:59:27.675324   26218 main.go:141] libmachine: (ha-959539) </domain>
	I0923 23:59:27.675341   26218 main.go:141] libmachine: (ha-959539) 
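	Once the domain XML above is assembled, it is handed to libvirt to define the machine. A simplified stand-in, assuming the `virsh` CLI is available (minikube's kvm2 driver talks to libvirt through its API instead, and the tiny domain XML here is only a placeholder):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// A deliberately minimal placeholder domain definition for the sketch;
// the real XML is the much larger document printed in the log above.
const domainXML = `<domain type='kvm'>
  <name>demo-sketch</name>
  <memory unit='MiB'>512</memory>
  <vcpu>1</vcpu>
  <os><type>hvm</type></os>
  <devices/>
</domain>`

func main() {
	// Write the domain XML to a temp file and hand it to virsh to define.
	f, err := os.CreateTemp("", "domain-*.xml")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(domainXML); err != nil {
		panic(err)
	}
	f.Close()

	out, err := exec.Command("virsh", "--connect", "qemu:///system", "define", f.Name()).CombinedOutput()
	fmt.Printf("virsh define output: %s\n", out)
	if err != nil {
		fmt.Fprintln(os.Stderr, "define failed:", err)
	}
}
```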
	I0923 23:59:27.679682   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:f8:7e:29 in network default
	I0923 23:59:27.680257   26218 main.go:141] libmachine: (ha-959539) Ensuring networks are active...
	I0923 23:59:27.680301   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:27.680992   26218 main.go:141] libmachine: (ha-959539) Ensuring network default is active
	I0923 23:59:27.681339   26218 main.go:141] libmachine: (ha-959539) Ensuring network mk-ha-959539 is active
	I0923 23:59:27.681827   26218 main.go:141] libmachine: (ha-959539) Getting domain xml...
	I0923 23:59:27.682529   26218 main.go:141] libmachine: (ha-959539) Creating domain...
	I0923 23:59:28.880638   26218 main.go:141] libmachine: (ha-959539) Waiting to get IP...
	I0923 23:59:28.881412   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:28.881793   26218 main.go:141] libmachine: (ha-959539) DBG | unable to find current IP address of domain ha-959539 in network mk-ha-959539
	I0923 23:59:28.881827   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:28.881764   26241 retry.go:31] will retry after 258.264646ms: waiting for machine to come up
	I0923 23:59:29.141441   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:29.141781   26218 main.go:141] libmachine: (ha-959539) DBG | unable to find current IP address of domain ha-959539 in network mk-ha-959539
	I0923 23:59:29.141818   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:29.141725   26241 retry.go:31] will retry after 275.827745ms: waiting for machine to come up
	I0923 23:59:29.419197   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:29.419582   26218 main.go:141] libmachine: (ha-959539) DBG | unable to find current IP address of domain ha-959539 in network mk-ha-959539
	I0923 23:59:29.419610   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:29.419535   26241 retry.go:31] will retry after 461.76652ms: waiting for machine to come up
	I0923 23:59:29.883216   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:29.883789   26218 main.go:141] libmachine: (ha-959539) DBG | unable to find current IP address of domain ha-959539 in network mk-ha-959539
	I0923 23:59:29.883811   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:29.883726   26241 retry.go:31] will retry after 445.570936ms: waiting for machine to come up
	I0923 23:59:30.331342   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:30.331760   26218 main.go:141] libmachine: (ha-959539) DBG | unable to find current IP address of domain ha-959539 in network mk-ha-959539
	I0923 23:59:30.331789   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:30.331719   26241 retry.go:31] will retry after 749.255419ms: waiting for machine to come up
	I0923 23:59:31.082478   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:31.082950   26218 main.go:141] libmachine: (ha-959539) DBG | unable to find current IP address of domain ha-959539 in network mk-ha-959539
	I0923 23:59:31.082971   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:31.082889   26241 retry.go:31] will retry after 773.348958ms: waiting for machine to come up
	I0923 23:59:31.857788   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:31.858274   26218 main.go:141] libmachine: (ha-959539) DBG | unable to find current IP address of domain ha-959539 in network mk-ha-959539
	I0923 23:59:31.858300   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:31.858204   26241 retry.go:31] will retry after 752.285326ms: waiting for machine to come up
	I0923 23:59:32.611583   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:32.612075   26218 main.go:141] libmachine: (ha-959539) DBG | unable to find current IP address of domain ha-959539 in network mk-ha-959539
	I0923 23:59:32.612098   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:32.612034   26241 retry.go:31] will retry after 1.137504115s: waiting for machine to come up
	I0923 23:59:33.751665   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:33.751976   26218 main.go:141] libmachine: (ha-959539) DBG | unable to find current IP address of domain ha-959539 in network mk-ha-959539
	I0923 23:59:33.752009   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:33.751932   26241 retry.go:31] will retry after 1.241947238s: waiting for machine to come up
	I0923 23:59:34.995017   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:34.995386   26218 main.go:141] libmachine: (ha-959539) DBG | unable to find current IP address of domain ha-959539 in network mk-ha-959539
	I0923 23:59:34.995400   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:34.995360   26241 retry.go:31] will retry after 1.449064591s: waiting for machine to come up
	I0923 23:59:36.446933   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:36.447337   26218 main.go:141] libmachine: (ha-959539) DBG | unable to find current IP address of domain ha-959539 in network mk-ha-959539
	I0923 23:59:36.447388   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:36.447302   26241 retry.go:31] will retry after 2.693587186s: waiting for machine to come up
	I0923 23:59:39.144265   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:39.144685   26218 main.go:141] libmachine: (ha-959539) DBG | unable to find current IP address of domain ha-959539 in network mk-ha-959539
	I0923 23:59:39.144701   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:39.144641   26241 retry.go:31] will retry after 2.637044367s: waiting for machine to come up
	I0923 23:59:41.785491   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:41.785902   26218 main.go:141] libmachine: (ha-959539) DBG | unable to find current IP address of domain ha-959539 in network mk-ha-959539
	I0923 23:59:41.785918   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:41.785859   26241 retry.go:31] will retry after 4.357362487s: waiting for machine to come up
	I0923 23:59:46.147970   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:46.148484   26218 main.go:141] libmachine: (ha-959539) DBG | unable to find current IP address of domain ha-959539 in network mk-ha-959539
	I0923 23:59:46.148509   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:46.148440   26241 retry.go:31] will retry after 4.358423196s: waiting for machine to come up
	I0923 23:59:50.510236   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:50.510860   26218 main.go:141] libmachine: (ha-959539) Found IP for machine: 192.168.39.231
	I0923 23:59:50.510881   26218 main.go:141] libmachine: (ha-959539) Reserving static IP address...
	I0923 23:59:50.510893   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has current primary IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
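	The repeated "will retry after ..." lines implement a grow-and-jitter backoff while polling for the domain's DHCP lease. A self-contained Go sketch of that pattern (the `lookupIP` stub below simply fails a few times to exercise the loop):

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var attempts int

// lookupIP stands in for querying libvirt's DHCP leases for the domain's MAC;
// here it fails a few times before "finding" the address from the log.
func lookupIP() (string, error) {
	attempts++
	if attempts < 5 {
		return "", errors.New("unable to find current IP address of domain")
	}
	return "192.168.39.231", nil
}

// waitForIP retries with a jittered, growing delay until the lease shows up
// or the deadline passes, much like the retry lines above.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		if delay < 5*time.Second {
			delay *= 2
		}
	}
	return "", errors.New("timed out waiting for IP")
}

func main() {
	ip, err := waitForIP(30 * time.Second)
	if err != nil {
		panic(err)
	}
	fmt.Println("Found IP for machine:", ip)
}
```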
	I0923 23:59:50.511347   26218 main.go:141] libmachine: (ha-959539) DBG | unable to find host DHCP lease matching {name: "ha-959539", mac: "52:54:00:99:17:69", ip: "192.168.39.231"} in network mk-ha-959539
	I0923 23:59:50.583983   26218 main.go:141] libmachine: (ha-959539) DBG | Getting to WaitForSSH function...
	I0923 23:59:50.584012   26218 main.go:141] libmachine: (ha-959539) Reserved static IP address: 192.168.39.231
	I0923 23:59:50.584024   26218 main.go:141] libmachine: (ha-959539) Waiting for SSH to be available...
	I0923 23:59:50.587176   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:50.587581   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:minikube Clientid:01:52:54:00:99:17:69}
	I0923 23:59:50.587613   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:50.587727   26218 main.go:141] libmachine: (ha-959539) DBG | Using SSH client type: external
	I0923 23:59:50.587740   26218 main.go:141] libmachine: (ha-959539) DBG | Using SSH private key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/id_rsa (-rw-------)
	I0923 23:59:50.587808   26218 main.go:141] libmachine: (ha-959539) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.231 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0923 23:59:50.587835   26218 main.go:141] libmachine: (ha-959539) DBG | About to run SSH command:
	I0923 23:59:50.587849   26218 main.go:141] libmachine: (ha-959539) DBG | exit 0
	I0923 23:59:50.716142   26218 main.go:141] libmachine: (ha-959539) DBG | SSH cmd err, output: <nil>: 
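	The WaitForSSH step above shells out to `ssh ... exit 0` with the generated private key. A lighter-weight approximation in Go that only checks TCP reachability of port 22 (it does not verify key-based authentication the way the real probe does):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH polls the guest's SSH port until a TCP connection succeeds,
// or gives up once the timeout elapses.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("ssh on %s did not become available within %v", addr, timeout)
}

func main() {
	if err := waitForSSH("192.168.39.231:22", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("SSH is available")
}
```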
	I0923 23:59:50.716469   26218 main.go:141] libmachine: (ha-959539) KVM machine creation complete!
	I0923 23:59:50.716772   26218 main.go:141] libmachine: (ha-959539) Calling .GetConfigRaw
	I0923 23:59:50.717437   26218 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0923 23:59:50.717627   26218 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0923 23:59:50.717783   26218 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0923 23:59:50.717794   26218 main.go:141] libmachine: (ha-959539) Calling .GetState
	I0923 23:59:50.719003   26218 main.go:141] libmachine: Detecting operating system of created instance...
	I0923 23:59:50.719017   26218 main.go:141] libmachine: Waiting for SSH to be available...
	I0923 23:59:50.719040   26218 main.go:141] libmachine: Getting to WaitForSSH function...
	I0923 23:59:50.719051   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0923 23:59:50.721609   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:50.721907   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0923 23:59:50.721928   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:50.722195   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0923 23:59:50.722412   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0923 23:59:50.722565   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0923 23:59:50.722658   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0923 23:59:50.722805   26218 main.go:141] libmachine: Using SSH client type: native
	I0923 23:59:50.723011   26218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0923 23:59:50.723021   26218 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0923 23:59:50.835498   26218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 23:59:50.835520   26218 main.go:141] libmachine: Detecting the provisioner...
	I0923 23:59:50.835527   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0923 23:59:50.838284   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:50.838621   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0923 23:59:50.838642   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:50.838906   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0923 23:59:50.839085   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0923 23:59:50.839257   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0923 23:59:50.839424   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0923 23:59:50.839565   26218 main.go:141] libmachine: Using SSH client type: native
	I0923 23:59:50.839743   26218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0923 23:59:50.839754   26218 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0923 23:59:50.953371   26218 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0923 23:59:50.953486   26218 main.go:141] libmachine: found compatible host: buildroot
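	Provisioner detection boils down to parsing the `cat /etc/os-release` output shown above and matching on its ID/NAME fields. A small Go sketch of that parsing (the sample string reuses the values from the log):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns os-release output into a key/value map, which is
// enough to recognise a Buildroot guest.
func parseOSRelease(out string) map[string]string {
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || !strings.Contains(line, "=") {
			continue
		}
		kv := strings.SplitN(line, "=", 2)
		fields[kv[0]] = strings.Trim(kv[1], `"`)
	}
	return fields
}

func main() {
	sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	f := parseOSRelease(sample)
	if f["ID"] == "buildroot" {
		fmt.Println("found compatible host:", f["NAME"], f["VERSION_ID"])
	}
}
```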
	I0923 23:59:50.953499   26218 main.go:141] libmachine: Provisioning with buildroot...
	I0923 23:59:50.953509   26218 main.go:141] libmachine: (ha-959539) Calling .GetMachineName
	I0923 23:59:50.953724   26218 buildroot.go:166] provisioning hostname "ha-959539"
	I0923 23:59:50.953757   26218 main.go:141] libmachine: (ha-959539) Calling .GetMachineName
	I0923 23:59:50.953954   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0923 23:59:50.956724   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:50.957082   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0923 23:59:50.957105   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:50.957309   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0923 23:59:50.957497   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0923 23:59:50.957638   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0923 23:59:50.957763   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0923 23:59:50.957932   26218 main.go:141] libmachine: Using SSH client type: native
	I0923 23:59:50.958118   26218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0923 23:59:50.958139   26218 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-959539 && echo "ha-959539" | sudo tee /etc/hostname
	I0923 23:59:51.087322   26218 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-959539
	
	I0923 23:59:51.087357   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0923 23:59:51.090134   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:51.090488   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0923 23:59:51.090514   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:51.090720   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0923 23:59:51.090906   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0923 23:59:51.091125   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0923 23:59:51.091383   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0923 23:59:51.091616   26218 main.go:141] libmachine: Using SSH client type: native
	I0923 23:59:51.091783   26218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0923 23:59:51.091798   26218 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-959539' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-959539/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-959539' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 23:59:51.216710   26218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 23:59:51.216741   26218 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19696-7623/.minikube CaCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19696-7623/.minikube}
	I0923 23:59:51.216763   26218 buildroot.go:174] setting up certificates
	I0923 23:59:51.216772   26218 provision.go:84] configureAuth start
	I0923 23:59:51.216781   26218 main.go:141] libmachine: (ha-959539) Calling .GetMachineName
	I0923 23:59:51.217050   26218 main.go:141] libmachine: (ha-959539) Calling .GetIP
	I0923 23:59:51.219973   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:51.220311   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0923 23:59:51.220350   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:51.220472   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0923 23:59:51.223154   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:51.223541   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0923 23:59:51.223574   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:51.223732   26218 provision.go:143] copyHostCerts
	I0923 23:59:51.223760   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0923 23:59:51.223790   26218 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem, removing ...
	I0923 23:59:51.223807   26218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0923 23:59:51.223875   26218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem (1082 bytes)
	I0923 23:59:51.223951   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0923 23:59:51.223969   26218 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem, removing ...
	I0923 23:59:51.223976   26218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0923 23:59:51.223999   26218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem (1123 bytes)
	I0923 23:59:51.224038   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0923 23:59:51.224055   26218 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem, removing ...
	I0923 23:59:51.224060   26218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0923 23:59:51.224079   26218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem (1679 bytes)
	I0923 23:59:51.224140   26218 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem org=jenkins.ha-959539 san=[127.0.0.1 192.168.39.231 ha-959539 localhost minikube]
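	The server certificate is generated with the SAN list shown in the log (loopback, the machine IP, and a few host names). A compact Go sketch using only the standard library; unlike minikube, it self-signs rather than signing with the profile's CA key, so treat it as an illustration only:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Generate a server key and a certificate carrying the SANs from the log.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-959539"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.231")},
		DNSNames:     []string{"ha-959539", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	fmt.Println("server certificate generated")
}
```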
	I0923 23:59:51.458115   26218 provision.go:177] copyRemoteCerts
	I0923 23:59:51.458172   26218 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 23:59:51.458199   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0923 23:59:51.461001   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:51.461333   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0923 23:59:51.461358   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:51.461510   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0923 23:59:51.461701   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0923 23:59:51.461849   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0923 23:59:51.461970   26218 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/id_rsa Username:docker}
	I0923 23:59:51.550490   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0923 23:59:51.550562   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0923 23:59:51.574382   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0923 23:59:51.574471   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0923 23:59:51.597413   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0923 23:59:51.597507   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0923 23:59:51.620181   26218 provision.go:87] duration metric: took 403.395464ms to configureAuth
	I0923 23:59:51.620213   26218 buildroot.go:189] setting minikube options for container-runtime
	I0923 23:59:51.620452   26218 config.go:182] Loaded profile config "ha-959539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 23:59:51.620525   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0923 23:59:51.623330   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:51.623655   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0923 23:59:51.623683   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:51.623826   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0923 23:59:51.624031   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0923 23:59:51.624209   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0923 23:59:51.624360   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0923 23:59:51.624502   26218 main.go:141] libmachine: Using SSH client type: native
	I0923 23:59:51.624659   26218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0923 23:59:51.624677   26218 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0923 23:59:51.851847   26218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0923 23:59:51.851876   26218 main.go:141] libmachine: Checking connection to Docker...
	I0923 23:59:51.851883   26218 main.go:141] libmachine: (ha-959539) Calling .GetURL
	I0923 23:59:51.853119   26218 main.go:141] libmachine: (ha-959539) DBG | Using libvirt version 6000000
	I0923 23:59:51.855099   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:51.855420   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0923 23:59:51.855446   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:51.855586   26218 main.go:141] libmachine: Docker is up and running!
	I0923 23:59:51.855598   26218 main.go:141] libmachine: Reticulating splines...
	I0923 23:59:51.855605   26218 client.go:171] duration metric: took 24.959018357s to LocalClient.Create
	I0923 23:59:51.855625   26218 start.go:167] duration metric: took 24.959098074s to libmachine.API.Create "ha-959539"
	I0923 23:59:51.855634   26218 start.go:293] postStartSetup for "ha-959539" (driver="kvm2")
	I0923 23:59:51.855643   26218 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 23:59:51.855656   26218 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0923 23:59:51.855887   26218 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 23:59:51.855913   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0923 23:59:51.858133   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:51.858438   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0923 23:59:51.858461   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:51.858627   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0923 23:59:51.858801   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0923 23:59:51.858953   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0923 23:59:51.859096   26218 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/id_rsa Username:docker}
	I0923 23:59:51.946855   26218 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 23:59:51.950980   26218 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 23:59:51.951009   26218 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/addons for local assets ...
	I0923 23:59:51.951065   26218 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/files for local assets ...
	I0923 23:59:51.951158   26218 filesync.go:149] local asset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> 147932.pem in /etc/ssl/certs
	I0923 23:59:51.951168   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> /etc/ssl/certs/147932.pem
	I0923 23:59:51.951319   26218 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 23:59:51.960703   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /etc/ssl/certs/147932.pem (1708 bytes)
	I0923 23:59:51.984127   26218 start.go:296] duration metric: took 128.479072ms for postStartSetup
	I0923 23:59:51.984203   26218 main.go:141] libmachine: (ha-959539) Calling .GetConfigRaw
	I0923 23:59:51.984890   26218 main.go:141] libmachine: (ha-959539) Calling .GetIP
	I0923 23:59:51.987429   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:51.987719   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0923 23:59:51.987746   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:51.987964   26218 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/config.json ...
	I0923 23:59:51.988154   26218 start.go:128] duration metric: took 25.109799181s to createHost
	I0923 23:59:51.988175   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0923 23:59:51.990588   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:51.990906   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0923 23:59:51.990929   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:51.991056   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0923 23:59:51.991238   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0923 23:59:51.991353   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0923 23:59:51.991456   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0923 23:59:51.991563   26218 main.go:141] libmachine: Using SSH client type: native
	I0923 23:59:51.991778   26218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0923 23:59:51.991794   26218 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 23:59:52.105105   26218 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727135992.084651186
	
	I0923 23:59:52.105126   26218 fix.go:216] guest clock: 1727135992.084651186
	I0923 23:59:52.105133   26218 fix.go:229] Guest: 2024-09-23 23:59:52.084651186 +0000 UTC Remote: 2024-09-23 23:59:51.988165076 +0000 UTC m=+25.216110625 (delta=96.48611ms)
	I0923 23:59:52.105151   26218 fix.go:200] guest clock delta is within tolerance: 96.48611ms
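	The guest-clock check parses the guest's `date +%s.%N` output and compares it with the host-side reference recorded a moment earlier. A sketch of that arithmetic, reusing the two timestamps from the log and an assumed 2s tolerance:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Seconds.nanoseconds reported by the guest's `date +%s.%N`.
	guestStamp := 1727135992.084651186
	sec := int64(guestStamp)
	nsec := int64((guestStamp - float64(sec)) * 1e9)
	guest := time.Unix(sec, nsec).UTC()

	// Host-side reference taken from the "Remote:" value in the log line above.
	host := time.Date(2024, time.September, 23, 23, 59, 51, 988165076, time.UTC)

	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumption for this sketch
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance %v\n", delta, tolerance)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance %v\n", delta, tolerance)
	}
}
```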
	I0923 23:59:52.105156   26218 start.go:83] releasing machines lock for "ha-959539", held for 25.226882318s
	I0923 23:59:52.105171   26218 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0923 23:59:52.105409   26218 main.go:141] libmachine: (ha-959539) Calling .GetIP
	I0923 23:59:52.108347   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:52.108704   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0923 23:59:52.108728   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:52.108925   26218 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0923 23:59:52.109448   26218 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0923 23:59:52.109621   26218 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0923 23:59:52.109725   26218 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 23:59:52.109775   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0923 23:59:52.109834   26218 ssh_runner.go:195] Run: cat /version.json
	I0923 23:59:52.109859   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0923 23:59:52.112538   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:52.112714   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:52.112781   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0923 23:59:52.112818   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:52.112933   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0923 23:59:52.113055   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0923 23:59:52.113086   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:52.113164   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0923 23:59:52.113281   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0923 23:59:52.113341   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0923 23:59:52.113438   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0923 23:59:52.113503   26218 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/id_rsa Username:docker}
	I0923 23:59:52.113559   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0923 23:59:52.113735   26218 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/id_rsa Username:docker}
	I0923 23:59:52.193560   26218 ssh_runner.go:195] Run: systemctl --version
	I0923 23:59:52.235438   26218 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0923 23:59:52.389606   26218 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 23:59:52.396083   26218 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 23:59:52.396147   26218 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 23:59:52.413066   26218 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 23:59:52.413095   26218 start.go:495] detecting cgroup driver to use...
	I0923 23:59:52.413158   26218 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 23:59:52.429335   26218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 23:59:52.443813   26218 docker.go:217] disabling cri-docker service (if available) ...
	I0923 23:59:52.443866   26218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 23:59:52.457675   26218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 23:59:52.471149   26218 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 23:59:52.585355   26218 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 23:59:52.737118   26218 docker.go:233] disabling docker service ...
	I0923 23:59:52.737174   26218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 23:59:52.752411   26218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 23:59:52.765194   26218 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 23:59:52.901170   26218 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 23:59:53.018250   26218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 23:59:53.031932   26218 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 23:59:53.049015   26218 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0923 23:59:53.049085   26218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 23:59:53.058948   26218 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0923 23:59:53.059015   26218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 23:59:53.069147   26218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 23:59:53.079197   26218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 23:59:53.089022   26218 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 23:59:53.100410   26218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 23:59:53.111370   26218 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 23:59:53.128755   26218 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
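	The sed invocations above rewrite settings in /etc/crio/crio.conf.d/02-crio.conf, most importantly the pause image and the cgroup manager. The same two edits expressed in Go over an in-memory sample config (the sample contents are invented for this sketch):

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// A stand-in for 02-crio.conf before the edits.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
`
	// Mirror the sed expressions: replace whichever pause_image and
	// cgroup_manager lines exist with the desired values.
	pauseRe := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroupRe := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = pauseRe.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = cgroupRe.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Print(conf)
}
```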
	I0923 23:59:53.138944   26218 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 23:59:53.149267   26218 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 23:59:53.149363   26218 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 23:59:53.163279   26218 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 23:59:53.173965   26218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 23:59:53.305956   26218 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0923 23:59:53.410170   26218 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0923 23:59:53.410232   26218 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0923 23:59:53.415034   26218 start.go:563] Will wait 60s for crictl version
	I0923 23:59:53.415112   26218 ssh_runner.go:195] Run: which crictl
	I0923 23:59:53.418927   26218 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 23:59:53.464205   26218 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
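	"Will wait 60s for socket path /var/run/crio/crio.sock" is essentially an existence poll with a deadline before crictl is invoked. A Go sketch of that wait (the `waitForSocket` helper name is made up here):

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the given path exists or the deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s did not appear within %v", path, timeout)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("crio socket is ready")
}
```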
	I0923 23:59:53.464285   26218 ssh_runner.go:195] Run: crio --version
	I0923 23:59:53.494495   26218 ssh_runner.go:195] Run: crio --version
	I0923 23:59:53.523488   26218 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0923 23:59:53.524781   26218 main.go:141] libmachine: (ha-959539) Calling .GetIP
	I0923 23:59:53.527608   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:53.527945   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0923 23:59:53.527972   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:53.528223   26218 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0923 23:59:53.532189   26218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 23:59:53.544235   26218 kubeadm.go:883] updating cluster {Name:ha-959539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-959539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 23:59:53.544347   26218 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 23:59:53.544395   26218 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 23:59:53.574815   26218 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0923 23:59:53.574879   26218 ssh_runner.go:195] Run: which lz4
	I0923 23:59:53.578616   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0923 23:59:53.578693   26218 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0923 23:59:53.582683   26218 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0923 23:59:53.582711   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0923 23:59:54.823072   26218 crio.go:462] duration metric: took 1.244398494s to copy over tarball
	I0923 23:59:54.823158   26218 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0923 23:59:56.834165   26218 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.010972437s)
	I0923 23:59:56.834200   26218 crio.go:469] duration metric: took 2.011094658s to extract the tarball
	I0923 23:59:56.834211   26218 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0923 23:59:56.870476   26218 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 23:59:56.915807   26218 crio.go:514] all images are preloaded for cri-o runtime.
	I0923 23:59:56.915830   26218 cache_images.go:84] Images are preloaded, skipping loading
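The preload path above avoids pulling images over the network: minikube checks whether cri-o already has the expected images, copies the cached tarball into the VM, and unpacks it into /var. Restricted to the commands that actually appear in this log, the equivalent manual sequence on the node is roughly:

    # check whether cri-o already has the expected control-plane images
    sudo crictl images --output json | grep -q 'registry.k8s.io/kube-apiserver:v1.31.1' || echo "preload needed"

    # unpack the preloaded image tarball into cri-o's storage under /var, then clean up
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4

    # images should now show up as preloaded
    sudo crictl images --output json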
	I0923 23:59:56.915839   26218 kubeadm.go:934] updating node { 192.168.39.231 8443 v1.31.1 crio true true} ...
	I0923 23:59:56.915955   26218 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-959539 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.231
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-959539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
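The kubelet flags shown above are rendered into a systemd drop-in; a few lines further down the log scp's it to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes). To inspect the result on the node with standard systemd tooling (these commands are not part of the run itself), a sketch:

    # merged unit, including minikube's drop-in with the ExecStart override
    systemctl cat kubelet
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

    # after changing drop-ins, reload unit files and restart
    sudo systemctl daemon-reload
    sudo systemctl restart kubelet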
	I0923 23:59:56.916032   26218 ssh_runner.go:195] Run: crio config
	I0923 23:59:56.959047   26218 cni.go:84] Creating CNI manager for ""
	I0923 23:59:56.959065   26218 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0923 23:59:56.959075   26218 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 23:59:56.959102   26218 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.231 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-959539 NodeName:ha-959539 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.231"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.231 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 23:59:56.959278   26218 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.231
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-959539"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.231
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.231"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
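This rendered config is written to /var/tmp/minikube/kubeadm.yaml.new below and copied into place before init. If such a file needs to be sanity-checked by hand, a dry run against the same bundled binary is one option, and the deprecation warnings emitted later by init point at `kubeadm config migrate` for moving off the v1beta3 API; neither command is run by minikube here, so treat this purely as a sketch:

    # parse the rendered config and show what init would do, without changing the node
    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml --dry-run

    # rewrite the deprecated kubeadm.k8s.io/v1beta3 spec to the newer API (output path is arbitrary)
    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config migrate \
      --old-config /var/tmp/minikube/kubeadm.yaml --new-config /tmp/kubeadm-new.yaml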
	I0923 23:59:56.959306   26218 kube-vip.go:115] generating kube-vip config ...
	I0923 23:59:56.959355   26218 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0923 23:59:56.975413   26218 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0923 23:59:56.975538   26218 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
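kube-vip runs as a static pod and, with cp_enable and lb_enable set, holds the HA virtual IP 192.168.39.254 on eth0 of whichever control-plane node wins leader election. Neither check below is performed by the test, but on the node they confirm the VIP is actually bound:

    # the VIP should appear as an additional address on eth0 of the current leader
    ip addr show eth0 | grep 192.168.39.254

    # the static pod itself, started straight from /etc/kubernetes/manifests
    sudo crictl ps --name kube-vip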
	I0923 23:59:56.975609   26218 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 23:59:56.985748   26218 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 23:59:56.985816   26218 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0923 23:59:56.994858   26218 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0923 23:59:57.011080   26218 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 23:59:57.026929   26218 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0923 23:59:57.042586   26218 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0923 23:59:57.058931   26218 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0923 23:59:57.062598   26218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 23:59:57.074372   26218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 23:59:57.199368   26218 ssh_runner.go:195] Run: sudo systemctl start kubelet
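Once kubelet is (re)started, its local healthz endpoint on port 10248 (the same one kubeadm's kubelet-check polls later in this log) is the quickest way to confirm it came up; the curl and journalctl calls here are illustrative, not part of the run:

    # kubelet liveness on the node
    curl -sf http://127.0.0.1:10248/healthz && echo kubelet healthy

    # recent kubelet logs if the check fails
    sudo journalctl -u kubelet --no-pager -n 50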
	I0923 23:59:57.215790   26218 certs.go:68] Setting up /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539 for IP: 192.168.39.231
	I0923 23:59:57.215808   26218 certs.go:194] generating shared ca certs ...
	I0923 23:59:57.215839   26218 certs.go:226] acquiring lock for ca certs: {Name:mk91de42c259e96c17f08dd58c70c00821bd0735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:59:57.215971   26218 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key
	I0923 23:59:57.216007   26218 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key
	I0923 23:59:57.216016   26218 certs.go:256] generating profile certs ...
	I0923 23:59:57.216061   26218 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/client.key
	I0923 23:59:57.216073   26218 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/client.crt with IP's: []
	I0923 23:59:57.346653   26218 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/client.crt ...
	I0923 23:59:57.346676   26218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/client.crt: {Name:mkab4515ea7168cda846b9bfb46262aeaac2bc0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:59:57.346833   26218 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/client.key ...
	I0923 23:59:57.346843   26218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/client.key: {Name:mke7708261b70539d80260dff7c5f1bd958774aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:59:57.346914   26218 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key.34659c7b
	I0923 23:59:57.346929   26218 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt.34659c7b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.231 192.168.39.254]
	I0923 23:59:57.635327   26218 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt.34659c7b ...
	I0923 23:59:57.635354   26218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt.34659c7b: {Name:mk5117d1a9a492c25c6b0e468e2bf78a6f60d1d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:59:57.635505   26218 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key.34659c7b ...
	I0923 23:59:57.635516   26218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key.34659c7b: {Name:mk3539984a0fdd5eeb79a51663bcd250a224ff95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:59:57.635580   26218 certs.go:381] copying /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt.34659c7b -> /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt
	I0923 23:59:57.635646   26218 certs.go:385] copying /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key.34659c7b -> /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key
	I0923 23:59:57.635698   26218 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.key
	I0923 23:59:57.635711   26218 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.crt with IP's: []
	I0923 23:59:57.894945   26218 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.crt ...
	I0923 23:59:57.894975   26218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.crt: {Name:mkc0621f207c72302b780ca13cb5032341f4b069 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:59:57.895138   26218 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.key ...
	I0923 23:59:57.895150   26218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.key: {Name:mkf18d3b3341960faadac2faed03cef051112574 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:59:57.895217   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0923 23:59:57.895235   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0923 23:59:57.895245   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 23:59:57.895265   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0923 23:59:57.895277   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0923 23:59:57.895287   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0923 23:59:57.895299   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0923 23:59:57.895310   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0923 23:59:57.895353   26218 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem (1338 bytes)
	W0923 23:59:57.895393   26218 certs.go:480] ignoring /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793_empty.pem, impossibly tiny 0 bytes
	I0923 23:59:57.895403   26218 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem (1675 bytes)
	I0923 23:59:57.895425   26218 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem (1082 bytes)
	I0923 23:59:57.895449   26218 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem (1123 bytes)
	I0923 23:59:57.895469   26218 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem (1679 bytes)
	I0923 23:59:57.895505   26218 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem (1708 bytes)
	I0923 23:59:57.895531   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem -> /usr/share/ca-certificates/14793.pem
	I0923 23:59:57.895542   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> /usr/share/ca-certificates/147932.pem
	I0923 23:59:57.895555   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0923 23:59:57.896068   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 23:59:57.920516   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 23:59:57.944180   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 23:59:57.973439   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0923 23:59:58.001892   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0923 23:59:58.026752   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0923 23:59:58.049022   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 23:59:58.071861   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0923 23:59:58.094850   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem --> /usr/share/ca-certificates/14793.pem (1338 bytes)
	I0923 23:59:58.120029   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /usr/share/ca-certificates/147932.pem (1708 bytes)
	I0923 23:59:58.144719   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 23:59:58.174622   26218 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 23:59:58.192664   26218 ssh_runner.go:195] Run: openssl version
	I0923 23:59:58.198435   26218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14793.pem && ln -fs /usr/share/ca-certificates/14793.pem /etc/ssl/certs/14793.pem"
	I0923 23:59:58.208675   26218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14793.pem
	I0923 23:59:58.212997   26218 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 23:55 /usr/share/ca-certificates/14793.pem
	I0923 23:59:58.213048   26218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14793.pem
	I0923 23:59:58.218554   26218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14793.pem /etc/ssl/certs/51391683.0"
	I0923 23:59:58.228984   26218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147932.pem && ln -fs /usr/share/ca-certificates/147932.pem /etc/ssl/certs/147932.pem"
	I0923 23:59:58.239539   26218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147932.pem
	I0923 23:59:58.244140   26218 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 23:55 /usr/share/ca-certificates/147932.pem
	I0923 23:59:58.244200   26218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147932.pem
	I0923 23:59:58.249770   26218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147932.pem /etc/ssl/certs/3ec20f2e.0"
	I0923 23:59:58.260444   26218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 23:59:58.271376   26218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 23:59:58.276012   26218 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0923 23:59:58.276066   26218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 23:59:58.281610   26218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
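The `ln -fs ... /etc/ssl/certs/<hash>.0` steps follow OpenSSL's hashed-directory convention: the link name is the certificate's subject hash, which is exactly what the preceding `openssl x509 -hash -noout` calls print (b5213941 for minikubeCA.pem, hence /etc/ssl/certs/b5213941.0). Deriving one of the names by hand:

    # prints the subject hash, e.g. b5213941; the cert is then linked as /etc/ssl/certs/b5213941.0
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem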
	I0923 23:59:58.291931   26218 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 23:59:58.295609   26218 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 23:59:58.295656   26218 kubeadm.go:392] StartCluster: {Name:ha-959539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-959539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 23:59:58.295736   26218 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0923 23:59:58.295803   26218 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0923 23:59:58.331462   26218 cri.go:89] found id: ""
	I0923 23:59:58.331531   26218 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 23:59:58.341582   26218 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 23:59:58.351079   26218 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 23:59:58.360870   26218 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 23:59:58.360891   26218 kubeadm.go:157] found existing configuration files:
	
	I0923 23:59:58.360931   26218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 23:59:58.370007   26218 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 23:59:58.370064   26218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 23:59:58.379658   26218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 23:59:58.388923   26218 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 23:59:58.388982   26218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 23:59:58.398781   26218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 23:59:58.407722   26218 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 23:59:58.407786   26218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 23:59:58.417271   26218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 23:59:58.426264   26218 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 23:59:58.426322   26218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 23:59:58.435999   26218 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0923 23:59:58.546770   26218 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0923 23:59:58.546896   26218 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 23:59:58.658868   26218 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 23:59:58.659029   26218 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 23:59:58.659118   26218 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 23:59:58.667816   26218 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 23:59:58.762200   26218 out.go:235]   - Generating certificates and keys ...
	I0923 23:59:58.762295   26218 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 23:59:58.762371   26218 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 23:59:58.762428   26218 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0923 23:59:58.931425   26218 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0923 23:59:59.169435   26218 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0923 23:59:59.368885   26218 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0923 23:59:59.910983   26218 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0923 23:59:59.911147   26218 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-959539 localhost] and IPs [192.168.39.231 127.0.0.1 ::1]
	I0924 00:00:00.027247   26218 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0924 00:00:00.027385   26218 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-959539 localhost] and IPs [192.168.39.231 127.0.0.1 ::1]
	I0924 00:00:00.408901   26218 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0924 00:00:00.695628   26218 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0924 00:00:01.084765   26218 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0924 00:00:01.084831   26218 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 00:00:01.198400   26218 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 00:00:01.455815   26218 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0924 00:00:01.707214   26218 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 00:00:01.761069   26218 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 00:00:01.868085   26218 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 00:00:01.868536   26218 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 00:00:01.872192   26218 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 00:00:01.874381   26218 out.go:235]   - Booting up control plane ...
	I0924 00:00:01.874504   26218 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 00:00:01.874578   26218 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 00:00:01.874634   26218 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 00:00:01.890454   26218 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 00:00:01.897634   26218 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 00:00:01.897699   26218 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 00:00:02.038440   26218 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0924 00:00:02.038603   26218 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0924 00:00:02.541646   26218 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 503.471901ms
	I0924 00:00:02.541770   26218 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0924 00:00:11.738795   26218 kubeadm.go:310] [api-check] The API server is healthy after 9.198818169s
	I0924 00:00:11.752392   26218 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0924 00:00:11.768902   26218 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0924 00:00:11.811138   26218 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0924 00:00:11.811397   26218 kubeadm.go:310] [mark-control-plane] Marking the node ha-959539 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0924 00:00:11.828918   26218 kubeadm.go:310] [bootstrap-token] Using token: a2tynl.1ohol4x4auhbv6gq
	I0924 00:00:11.830685   26218 out.go:235]   - Configuring RBAC rules ...
	I0924 00:00:11.830831   26218 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0924 00:00:11.844590   26218 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0924 00:00:11.854514   26218 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0924 00:00:11.858483   26218 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0924 00:00:11.862691   26218 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0924 00:00:11.866723   26218 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0924 00:00:12.143692   26218 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0924 00:00:12.683818   26218 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0924 00:00:13.148491   26218 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0924 00:00:13.149475   26218 kubeadm.go:310] 
	I0924 00:00:13.149539   26218 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0924 00:00:13.149548   26218 kubeadm.go:310] 
	I0924 00:00:13.149650   26218 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0924 00:00:13.149658   26218 kubeadm.go:310] 
	I0924 00:00:13.149681   26218 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0924 00:00:13.149743   26218 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0924 00:00:13.149832   26218 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0924 00:00:13.149862   26218 kubeadm.go:310] 
	I0924 00:00:13.149949   26218 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0924 00:00:13.149959   26218 kubeadm.go:310] 
	I0924 00:00:13.150027   26218 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0924 00:00:13.150036   26218 kubeadm.go:310] 
	I0924 00:00:13.150112   26218 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0924 00:00:13.150219   26218 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0924 00:00:13.150313   26218 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0924 00:00:13.150324   26218 kubeadm.go:310] 
	I0924 00:00:13.150430   26218 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0924 00:00:13.150539   26218 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0924 00:00:13.150551   26218 kubeadm.go:310] 
	I0924 00:00:13.150661   26218 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token a2tynl.1ohol4x4auhbv6gq \
	I0924 00:00:13.150808   26218 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2d4dd9f37d73db6513005d0e16c5a35989d3b102eed8cac0bf67b360d3396aa8 \
	I0924 00:00:13.150846   26218 kubeadm.go:310] 	--control-plane 
	I0924 00:00:13.150856   26218 kubeadm.go:310] 
	I0924 00:00:13.150970   26218 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0924 00:00:13.150989   26218 kubeadm.go:310] 
	I0924 00:00:13.151100   26218 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token a2tynl.1ohol4x4auhbv6gq \
	I0924 00:00:13.151239   26218 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2d4dd9f37d73db6513005d0e16c5a35989d3b102eed8cac0bf67b360d3396aa8 
	I0924 00:00:13.152162   26218 kubeadm.go:310] W0923 23:59:58.529397     836 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 00:00:13.152583   26218 kubeadm.go:310] W0923 23:59:58.530304     836 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 00:00:13.152731   26218 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
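The join commands printed by kubeadm carry a --discovery-token-ca-cert-hash, which is the SHA-256 of the cluster CA's public key. It can be recomputed from the CA cert (here under /var/lib/minikube/certs, per the certificateDir above) with the standard openssl pipeline from the Kubernetes docs; this is a verification sketch, not a step minikube performs:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
    # should match the value after --discovery-token-ca-cert-hash in the join commands above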
	I0924 00:00:13.152765   26218 cni.go:84] Creating CNI manager for ""
	I0924 00:00:13.152776   26218 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0924 00:00:13.154438   26218 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0924 00:00:13.155646   26218 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0924 00:00:13.161171   26218 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0924 00:00:13.161193   26218 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0924 00:00:13.184460   26218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
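With a single node detected, minikube selects kindnet as the CNI and applies its manifest (2601 bytes at /var/tmp/minikube/cni.yaml) through the bundled kubectl. A follow-up check that the pod network actually rolled out could look like this; the `app=kindnet` label is an assumption taken from the upstream kindnet manifest, not something shown in this log:

    sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get pods -l app=kindnet -o wide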
	I0924 00:00:13.668553   26218 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 00:00:13.668646   26218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 00:00:13.668716   26218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-959539 minikube.k8s.io/updated_at=2024_09_24T00_00_13_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c minikube.k8s.io/name=ha-959539 minikube.k8s.io/primary=true
	I0924 00:00:13.906100   26218 ops.go:34] apiserver oom_adj: -16
	I0924 00:00:13.906236   26218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 00:00:14.026723   26218 kubeadm.go:1113] duration metric: took 358.135167ms to wait for elevateKubeSystemPrivileges
	I0924 00:00:14.026757   26218 kubeadm.go:394] duration metric: took 15.731103406s to StartCluster
	I0924 00:00:14.026778   26218 settings.go:142] acquiring lock: {Name:mk2498060bfcc1414033a486e76a223de88ed325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:00:14.026862   26218 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 00:00:14.027452   26218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/kubeconfig: {Name:mk3b29105ccc2e600682c419fcfd45ce5d22b5fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:00:14.027658   26218 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 00:00:14.027668   26218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0924 00:00:14.027688   26218 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0924 00:00:14.027758   26218 addons.go:69] Setting storage-provisioner=true in profile "ha-959539"
	I0924 00:00:14.027782   26218 addons.go:234] Setting addon storage-provisioner=true in "ha-959539"
	I0924 00:00:14.027808   26218 host.go:66] Checking if "ha-959539" exists ...
	I0924 00:00:14.027677   26218 start.go:241] waiting for startup goroutines ...
	I0924 00:00:14.027850   26218 addons.go:69] Setting default-storageclass=true in profile "ha-959539"
	I0924 00:00:14.027872   26218 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-959539"
	I0924 00:00:14.027940   26218 config.go:182] Loaded profile config "ha-959539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 00:00:14.028248   26218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:00:14.028262   26218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:00:14.028289   26218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:00:14.028388   26218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:00:14.043826   26218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42945
	I0924 00:00:14.043826   26218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40157
	I0924 00:00:14.044412   26218 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:00:14.044444   26218 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:00:14.044897   26218 main.go:141] libmachine: Using API Version  1
	I0924 00:00:14.044921   26218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:00:14.045026   26218 main.go:141] libmachine: Using API Version  1
	I0924 00:00:14.045048   26218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:00:14.045272   26218 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:00:14.045342   26218 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:00:14.045440   26218 main.go:141] libmachine: (ha-959539) Calling .GetState
	I0924 00:00:14.045899   26218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:00:14.045941   26218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:00:14.047486   26218 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 00:00:14.047712   26218 kapi.go:59] client config for ha-959539: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/client.crt", KeyFile:"/home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/client.key", CAFile:"/home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0924 00:00:14.048174   26218 cert_rotation.go:140] Starting client certificate rotation controller
	I0924 00:00:14.048284   26218 addons.go:234] Setting addon default-storageclass=true in "ha-959539"
	I0924 00:00:14.048319   26218 host.go:66] Checking if "ha-959539" exists ...
	I0924 00:00:14.048595   26218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:00:14.048634   26218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:00:14.062043   26218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33531
	I0924 00:00:14.062493   26218 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:00:14.063046   26218 main.go:141] libmachine: Using API Version  1
	I0924 00:00:14.063070   26218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:00:14.063429   26218 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:00:14.063717   26218 main.go:141] libmachine: (ha-959539) Calling .GetState
	I0924 00:00:14.064022   26218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36859
	I0924 00:00:14.064526   26218 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:00:14.064977   26218 main.go:141] libmachine: Using API Version  1
	I0924 00:00:14.065001   26218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:00:14.065303   26218 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:00:14.065800   26218 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0924 00:00:14.065914   26218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:00:14.065960   26218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:00:14.067886   26218 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 00:00:14.069203   26218 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 00:00:14.069223   26218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 00:00:14.069245   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0924 00:00:14.072558   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:00:14.072961   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0924 00:00:14.072982   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:00:14.073163   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0924 00:00:14.073338   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0924 00:00:14.073491   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0924 00:00:14.073620   26218 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/id_rsa Username:docker}
	I0924 00:00:14.082767   26218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42827
	I0924 00:00:14.083265   26218 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:00:14.083864   26218 main.go:141] libmachine: Using API Version  1
	I0924 00:00:14.083889   26218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:00:14.084221   26218 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:00:14.084481   26218 main.go:141] libmachine: (ha-959539) Calling .GetState
	I0924 00:00:14.086186   26218 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0924 00:00:14.086413   26218 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 00:00:14.086430   26218 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 00:00:14.086447   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0924 00:00:14.089541   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:00:14.089980   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0924 00:00:14.090010   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:00:14.090151   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0924 00:00:14.090333   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0924 00:00:14.090551   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0924 00:00:14.090735   26218 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/id_rsa Username:docker}
	I0924 00:00:14.208938   26218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0924 00:00:14.243343   26218 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 00:00:14.328202   26218 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 00:00:14.719009   26218 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
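The sed pipeline above splices a `hosts { 192.168.39.1 host.minikube.internal ... }` block into the CoreDNS Corefile and replaces the ConfigMap, which is what the "host record injected" line confirms. To verify it by hand (the busybox image and pod name are illustrative choices, not taken from this run):

    # inspect the patched Corefile
    kubectl -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'

    # resolve the injected name from inside the cluster
    kubectl run dns-check --rm -it --restart=Never --image=busybox -- nslookup host.minikube.internal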
	I0924 00:00:15.026630   26218 main.go:141] libmachine: Making call to close driver server
	I0924 00:00:15.026666   26218 main.go:141] libmachine: (ha-959539) Calling .Close
	I0924 00:00:15.026684   26218 main.go:141] libmachine: Making call to close driver server
	I0924 00:00:15.026706   26218 main.go:141] libmachine: (ha-959539) Calling .Close
	I0924 00:00:15.026978   26218 main.go:141] libmachine: Successfully made call to close driver server
	I0924 00:00:15.027033   26218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 00:00:15.027049   26218 main.go:141] libmachine: Making call to close driver server
	I0924 00:00:15.027059   26218 main.go:141] libmachine: (ha-959539) Calling .Close
	I0924 00:00:15.027104   26218 main.go:141] libmachine: (ha-959539) DBG | Closing plugin on server side
	I0924 00:00:15.027152   26218 main.go:141] libmachine: Successfully made call to close driver server
	I0924 00:00:15.027174   26218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 00:00:15.027183   26218 main.go:141] libmachine: Making call to close driver server
	I0924 00:00:15.027191   26218 main.go:141] libmachine: (ha-959539) Calling .Close
	I0924 00:00:15.027272   26218 main.go:141] libmachine: Successfully made call to close driver server
	I0924 00:00:15.027294   26218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 00:00:15.027390   26218 main.go:141] libmachine: Successfully made call to close driver server
	I0924 00:00:15.027404   26218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 00:00:15.027434   26218 main.go:141] libmachine: (ha-959539) DBG | Closing plugin on server side
	I0924 00:00:15.027454   26218 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0924 00:00:15.027470   26218 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0924 00:00:15.027568   26218 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0924 00:00:15.027574   26218 round_trippers.go:469] Request Headers:
	I0924 00:00:15.027581   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:00:15.027585   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:00:15.042627   26218 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0924 00:00:15.043249   26218 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0924 00:00:15.043266   26218 round_trippers.go:469] Request Headers:
	I0924 00:00:15.043284   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:00:15.043295   26218 round_trippers.go:473]     Content-Type: application/json
	I0924 00:00:15.043300   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:00:15.047076   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:00:15.047250   26218 main.go:141] libmachine: Making call to close driver server
	I0924 00:00:15.047265   26218 main.go:141] libmachine: (ha-959539) Calling .Close
	I0924 00:00:15.047499   26218 main.go:141] libmachine: Successfully made call to close driver server
	I0924 00:00:15.047522   26218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 00:00:15.049462   26218 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0924 00:00:15.050768   26218 addons.go:510] duration metric: took 1.023080124s for enable addons: enabled=[storage-provisioner default-storageclass]
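Both addons came up in about a second. On a healthy profile they materialize as the storage-provisioner pod in kube-system and the `standard` StorageClass (the PUT to /storageclasses/standard above is the default-storageclass addon at work). A quick check from the host, assuming the usual minikube object names and the profile/context name from this run:

    minikube -p ha-959539 addons list
    kubectl --context ha-959539 -n kube-system get pod storage-provisioner
    kubectl --context ha-959539 get storageclass standard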
	I0924 00:00:15.050804   26218 start.go:246] waiting for cluster config update ...
	I0924 00:00:15.050819   26218 start.go:255] writing updated cluster config ...
	I0924 00:00:15.052488   26218 out.go:201] 
	I0924 00:00:15.054069   26218 config.go:182] Loaded profile config "ha-959539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 00:00:15.054138   26218 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/config.json ...
	I0924 00:00:15.056020   26218 out.go:177] * Starting "ha-959539-m02" control-plane node in "ha-959539" cluster
	I0924 00:00:15.057275   26218 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 00:00:15.057294   26218 cache.go:56] Caching tarball of preloaded images
	I0924 00:00:15.057386   26218 preload.go:172] Found /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0924 00:00:15.057396   26218 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0924 00:00:15.057456   26218 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/config.json ...
	I0924 00:00:15.057614   26218 start.go:360] acquireMachinesLock for ha-959539-m02: {Name:mkdd0eb053efc5a47fd01c8410d7f603dccb8d0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 00:00:15.057654   26218 start.go:364] duration metric: took 22.109µs to acquireMachinesLock for "ha-959539-m02"
	I0924 00:00:15.057669   26218 start.go:93] Provisioning new machine with config: &{Name:ha-959539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-959539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 00:00:15.057726   26218 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0924 00:00:15.059302   26218 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0924 00:00:15.059377   26218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:00:15.059408   26218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:00:15.074812   26218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45463
	I0924 00:00:15.075196   26218 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:00:15.075683   26218 main.go:141] libmachine: Using API Version  1
	I0924 00:00:15.075703   26218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:00:15.076029   26218 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:00:15.076222   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetMachineName
	I0924 00:00:15.076403   26218 main.go:141] libmachine: (ha-959539-m02) Calling .DriverName
	I0924 00:00:15.076562   26218 start.go:159] libmachine.API.Create for "ha-959539" (driver="kvm2")
	I0924 00:00:15.076593   26218 client.go:168] LocalClient.Create starting
	I0924 00:00:15.076633   26218 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem
	I0924 00:00:15.076673   26218 main.go:141] libmachine: Decoding PEM data...
	I0924 00:00:15.076695   26218 main.go:141] libmachine: Parsing certificate...
	I0924 00:00:15.076755   26218 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem
	I0924 00:00:15.076782   26218 main.go:141] libmachine: Decoding PEM data...
	I0924 00:00:15.076796   26218 main.go:141] libmachine: Parsing certificate...
	I0924 00:00:15.076816   26218 main.go:141] libmachine: Running pre-create checks...
	I0924 00:00:15.076827   26218 main.go:141] libmachine: (ha-959539-m02) Calling .PreCreateCheck
	I0924 00:00:15.076957   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetConfigRaw
	I0924 00:00:15.077329   26218 main.go:141] libmachine: Creating machine...
	I0924 00:00:15.077346   26218 main.go:141] libmachine: (ha-959539-m02) Calling .Create
	I0924 00:00:15.077491   26218 main.go:141] libmachine: (ha-959539-m02) Creating KVM machine...
	I0924 00:00:15.078735   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found existing default KVM network
	I0924 00:00:15.078908   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found existing private KVM network mk-ha-959539
	I0924 00:00:15.079005   26218 main.go:141] libmachine: (ha-959539-m02) Setting up store path in /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m02 ...
	I0924 00:00:15.079050   26218 main.go:141] libmachine: (ha-959539-m02) Building disk image from file:///home/jenkins/minikube-integration/19696-7623/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0924 00:00:15.079067   26218 main.go:141] libmachine: (ha-959539-m02) DBG | I0924 00:00:15.078949   26566 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19696-7623/.minikube
	I0924 00:00:15.079117   26218 main.go:141] libmachine: (ha-959539-m02) Downloading /home/jenkins/minikube-integration/19696-7623/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19696-7623/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0924 00:00:15.323293   26218 main.go:141] libmachine: (ha-959539-m02) DBG | I0924 00:00:15.323139   26566 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m02/id_rsa...
	I0924 00:00:15.574063   26218 main.go:141] libmachine: (ha-959539-m02) DBG | I0924 00:00:15.573935   26566 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m02/ha-959539-m02.rawdisk...
	I0924 00:00:15.574096   26218 main.go:141] libmachine: (ha-959539-m02) DBG | Writing magic tar header
	I0924 00:00:15.574106   26218 main.go:141] libmachine: (ha-959539-m02) DBG | Writing SSH key tar header
	I0924 00:00:15.574114   26218 main.go:141] libmachine: (ha-959539-m02) DBG | I0924 00:00:15.574047   26566 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m02 ...
	I0924 00:00:15.574234   26218 main.go:141] libmachine: (ha-959539-m02) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m02 (perms=drwx------)
	I0924 00:00:15.574263   26218 main.go:141] libmachine: (ha-959539-m02) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623/.minikube/machines (perms=drwxr-xr-x)
	I0924 00:00:15.574274   26218 main.go:141] libmachine: (ha-959539-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m02
	I0924 00:00:15.574301   26218 main.go:141] libmachine: (ha-959539-m02) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623/.minikube (perms=drwxr-xr-x)
	I0924 00:00:15.574318   26218 main.go:141] libmachine: (ha-959539-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623/.minikube/machines
	I0924 00:00:15.574331   26218 main.go:141] libmachine: (ha-959539-m02) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623 (perms=drwxrwxr-x)
	I0924 00:00:15.574341   26218 main.go:141] libmachine: (ha-959539-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0924 00:00:15.574351   26218 main.go:141] libmachine: (ha-959539-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0924 00:00:15.574358   26218 main.go:141] libmachine: (ha-959539-m02) Creating domain...
	I0924 00:00:15.574368   26218 main.go:141] libmachine: (ha-959539-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623/.minikube
	I0924 00:00:15.574373   26218 main.go:141] libmachine: (ha-959539-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623
	I0924 00:00:15.574383   26218 main.go:141] libmachine: (ha-959539-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0924 00:00:15.574388   26218 main.go:141] libmachine: (ha-959539-m02) DBG | Checking permissions on dir: /home/jenkins
	I0924 00:00:15.574397   26218 main.go:141] libmachine: (ha-959539-m02) DBG | Checking permissions on dir: /home
	I0924 00:00:15.574402   26218 main.go:141] libmachine: (ha-959539-m02) DBG | Skipping /home - not owner
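	The SSH key, raw disk image, and boot ISO created above all live under the machine directory. A quick manual sanity check of those artifacts with standard tooling (a sketch only; the test itself does not run this) could be:
	  qemu-img info /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m02/ha-959539-m02.rawdisk   # should report file format: raw
	  file /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m02/boot2docker.iso                  # should identify an ISO 9660 image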
	I0924 00:00:15.575397   26218 main.go:141] libmachine: (ha-959539-m02) define libvirt domain using xml: 
	I0924 00:00:15.575418   26218 main.go:141] libmachine: (ha-959539-m02) <domain type='kvm'>
	I0924 00:00:15.575426   26218 main.go:141] libmachine: (ha-959539-m02)   <name>ha-959539-m02</name>
	I0924 00:00:15.575433   26218 main.go:141] libmachine: (ha-959539-m02)   <memory unit='MiB'>2200</memory>
	I0924 00:00:15.575441   26218 main.go:141] libmachine: (ha-959539-m02)   <vcpu>2</vcpu>
	I0924 00:00:15.575446   26218 main.go:141] libmachine: (ha-959539-m02)   <features>
	I0924 00:00:15.575454   26218 main.go:141] libmachine: (ha-959539-m02)     <acpi/>
	I0924 00:00:15.575461   26218 main.go:141] libmachine: (ha-959539-m02)     <apic/>
	I0924 00:00:15.575476   26218 main.go:141] libmachine: (ha-959539-m02)     <pae/>
	I0924 00:00:15.575486   26218 main.go:141] libmachine: (ha-959539-m02)     
	I0924 00:00:15.575497   26218 main.go:141] libmachine: (ha-959539-m02)   </features>
	I0924 00:00:15.575507   26218 main.go:141] libmachine: (ha-959539-m02)   <cpu mode='host-passthrough'>
	I0924 00:00:15.575514   26218 main.go:141] libmachine: (ha-959539-m02)   
	I0924 00:00:15.575526   26218 main.go:141] libmachine: (ha-959539-m02)   </cpu>
	I0924 00:00:15.575536   26218 main.go:141] libmachine: (ha-959539-m02)   <os>
	I0924 00:00:15.575543   26218 main.go:141] libmachine: (ha-959539-m02)     <type>hvm</type>
	I0924 00:00:15.575556   26218 main.go:141] libmachine: (ha-959539-m02)     <boot dev='cdrom'/>
	I0924 00:00:15.575573   26218 main.go:141] libmachine: (ha-959539-m02)     <boot dev='hd'/>
	I0924 00:00:15.575585   26218 main.go:141] libmachine: (ha-959539-m02)     <bootmenu enable='no'/>
	I0924 00:00:15.575595   26218 main.go:141] libmachine: (ha-959539-m02)   </os>
	I0924 00:00:15.575608   26218 main.go:141] libmachine: (ha-959539-m02)   <devices>
	I0924 00:00:15.575620   26218 main.go:141] libmachine: (ha-959539-m02)     <disk type='file' device='cdrom'>
	I0924 00:00:15.575642   26218 main.go:141] libmachine: (ha-959539-m02)       <source file='/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m02/boot2docker.iso'/>
	I0924 00:00:15.575655   26218 main.go:141] libmachine: (ha-959539-m02)       <target dev='hdc' bus='scsi'/>
	I0924 00:00:15.575665   26218 main.go:141] libmachine: (ha-959539-m02)       <readonly/>
	I0924 00:00:15.575675   26218 main.go:141] libmachine: (ha-959539-m02)     </disk>
	I0924 00:00:15.575691   26218 main.go:141] libmachine: (ha-959539-m02)     <disk type='file' device='disk'>
	I0924 00:00:15.575706   26218 main.go:141] libmachine: (ha-959539-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0924 00:00:15.575717   26218 main.go:141] libmachine: (ha-959539-m02)       <source file='/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m02/ha-959539-m02.rawdisk'/>
	I0924 00:00:15.575725   26218 main.go:141] libmachine: (ha-959539-m02)       <target dev='hda' bus='virtio'/>
	I0924 00:00:15.575732   26218 main.go:141] libmachine: (ha-959539-m02)     </disk>
	I0924 00:00:15.575744   26218 main.go:141] libmachine: (ha-959539-m02)     <interface type='network'>
	I0924 00:00:15.575752   26218 main.go:141] libmachine: (ha-959539-m02)       <source network='mk-ha-959539'/>
	I0924 00:00:15.575780   26218 main.go:141] libmachine: (ha-959539-m02)       <model type='virtio'/>
	I0924 00:00:15.575803   26218 main.go:141] libmachine: (ha-959539-m02)     </interface>
	I0924 00:00:15.575828   26218 main.go:141] libmachine: (ha-959539-m02)     <interface type='network'>
	I0924 00:00:15.575848   26218 main.go:141] libmachine: (ha-959539-m02)       <source network='default'/>
	I0924 00:00:15.575861   26218 main.go:141] libmachine: (ha-959539-m02)       <model type='virtio'/>
	I0924 00:00:15.575871   26218 main.go:141] libmachine: (ha-959539-m02)     </interface>
	I0924 00:00:15.575880   26218 main.go:141] libmachine: (ha-959539-m02)     <serial type='pty'>
	I0924 00:00:15.575890   26218 main.go:141] libmachine: (ha-959539-m02)       <target port='0'/>
	I0924 00:00:15.575898   26218 main.go:141] libmachine: (ha-959539-m02)     </serial>
	I0924 00:00:15.575907   26218 main.go:141] libmachine: (ha-959539-m02)     <console type='pty'>
	I0924 00:00:15.575916   26218 main.go:141] libmachine: (ha-959539-m02)       <target type='serial' port='0'/>
	I0924 00:00:15.575929   26218 main.go:141] libmachine: (ha-959539-m02)     </console>
	I0924 00:00:15.575941   26218 main.go:141] libmachine: (ha-959539-m02)     <rng model='virtio'>
	I0924 00:00:15.575953   26218 main.go:141] libmachine: (ha-959539-m02)       <backend model='random'>/dev/random</backend>
	I0924 00:00:15.575961   26218 main.go:141] libmachine: (ha-959539-m02)     </rng>
	I0924 00:00:15.575970   26218 main.go:141] libmachine: (ha-959539-m02)     
	I0924 00:00:15.575977   26218 main.go:141] libmachine: (ha-959539-m02)     
	I0924 00:00:15.575986   26218 main.go:141] libmachine: (ha-959539-m02)   </devices>
	I0924 00:00:15.575994   26218 main.go:141] libmachine: (ha-959539-m02) </domain>
	I0924 00:00:15.576006   26218 main.go:141] libmachine: (ha-959539-m02) 
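	The domain XML above is applied through the libvirt API by the kvm2 driver. A roughly equivalent manual sequence, assuming the XML were saved to a hypothetical ha-959539-m02.xml and using the qemu:///system URI from the cluster config, would be:
	  virsh -c qemu:///system define ha-959539-m02.xml
	  virsh -c qemu:///system start ha-959539-m02
	  virsh -c qemu:///system domstate ha-959539-m02   # expect "running"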
	I0924 00:00:15.585706   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:4f:cb:25 in network default
	I0924 00:00:15.586358   26218 main.go:141] libmachine: (ha-959539-m02) Ensuring networks are active...
	I0924 00:00:15.586382   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:15.588682   26218 main.go:141] libmachine: (ha-959539-m02) Ensuring network default is active
	I0924 00:00:15.589090   26218 main.go:141] libmachine: (ha-959539-m02) Ensuring network mk-ha-959539 is active
	I0924 00:00:15.589485   26218 main.go:141] libmachine: (ha-959539-m02) Getting domain xml...
	I0924 00:00:15.590356   26218 main.go:141] libmachine: (ha-959539-m02) Creating domain...
	I0924 00:00:16.876850   26218 main.go:141] libmachine: (ha-959539-m02) Waiting to get IP...
	I0924 00:00:16.877600   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:16.878025   26218 main.go:141] libmachine: (ha-959539-m02) DBG | unable to find current IP address of domain ha-959539-m02 in network mk-ha-959539
	I0924 00:00:16.878048   26218 main.go:141] libmachine: (ha-959539-m02) DBG | I0924 00:00:16.878002   26566 retry.go:31] will retry after 206.511357ms: waiting for machine to come up
	I0924 00:00:17.086726   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:17.087176   26218 main.go:141] libmachine: (ha-959539-m02) DBG | unable to find current IP address of domain ha-959539-m02 in network mk-ha-959539
	I0924 00:00:17.087210   26218 main.go:141] libmachine: (ha-959539-m02) DBG | I0924 00:00:17.087160   26566 retry.go:31] will retry after 339.485484ms: waiting for machine to come up
	I0924 00:00:17.428879   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:17.429496   26218 main.go:141] libmachine: (ha-959539-m02) DBG | unable to find current IP address of domain ha-959539-m02 in network mk-ha-959539
	I0924 00:00:17.429530   26218 main.go:141] libmachine: (ha-959539-m02) DBG | I0924 00:00:17.429442   26566 retry.go:31] will retry after 355.763587ms: waiting for machine to come up
	I0924 00:00:17.787147   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:17.787637   26218 main.go:141] libmachine: (ha-959539-m02) DBG | unable to find current IP address of domain ha-959539-m02 in network mk-ha-959539
	I0924 00:00:17.787665   26218 main.go:141] libmachine: (ha-959539-m02) DBG | I0924 00:00:17.787594   26566 retry.go:31] will retry after 608.491101ms: waiting for machine to come up
	I0924 00:00:18.397336   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:18.397814   26218 main.go:141] libmachine: (ha-959539-m02) DBG | unable to find current IP address of domain ha-959539-m02 in network mk-ha-959539
	I0924 00:00:18.397840   26218 main.go:141] libmachine: (ha-959539-m02) DBG | I0924 00:00:18.397785   26566 retry.go:31] will retry after 502.478814ms: waiting for machine to come up
	I0924 00:00:18.901642   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:18.902265   26218 main.go:141] libmachine: (ha-959539-m02) DBG | unable to find current IP address of domain ha-959539-m02 in network mk-ha-959539
	I0924 00:00:18.902291   26218 main.go:141] libmachine: (ha-959539-m02) DBG | I0924 00:00:18.902211   26566 retry.go:31] will retry after 818.203447ms: waiting for machine to come up
	I0924 00:00:19.722162   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:19.722608   26218 main.go:141] libmachine: (ha-959539-m02) DBG | unable to find current IP address of domain ha-959539-m02 in network mk-ha-959539
	I0924 00:00:19.722629   26218 main.go:141] libmachine: (ha-959539-m02) DBG | I0924 00:00:19.722558   26566 retry.go:31] will retry after 929.046384ms: waiting for machine to come up
	I0924 00:00:20.653489   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:20.653984   26218 main.go:141] libmachine: (ha-959539-m02) DBG | unable to find current IP address of domain ha-959539-m02 in network mk-ha-959539
	I0924 00:00:20.654008   26218 main.go:141] libmachine: (ha-959539-m02) DBG | I0924 00:00:20.653948   26566 retry.go:31] will retry after 1.409190678s: waiting for machine to come up
	I0924 00:00:22.065332   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:22.065896   26218 main.go:141] libmachine: (ha-959539-m02) DBG | unable to find current IP address of domain ha-959539-m02 in network mk-ha-959539
	I0924 00:00:22.065920   26218 main.go:141] libmachine: (ha-959539-m02) DBG | I0924 00:00:22.065833   26566 retry.go:31] will retry after 1.614499189s: waiting for machine to come up
	I0924 00:00:23.681862   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:23.682319   26218 main.go:141] libmachine: (ha-959539-m02) DBG | unable to find current IP address of domain ha-959539-m02 in network mk-ha-959539
	I0924 00:00:23.682363   26218 main.go:141] libmachine: (ha-959539-m02) DBG | I0924 00:00:23.682234   26566 retry.go:31] will retry after 1.460062243s: waiting for machine to come up
	I0924 00:00:25.144293   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:25.144745   26218 main.go:141] libmachine: (ha-959539-m02) DBG | unable to find current IP address of domain ha-959539-m02 in network mk-ha-959539
	I0924 00:00:25.144767   26218 main.go:141] libmachine: (ha-959539-m02) DBG | I0924 00:00:25.144697   26566 retry.go:31] will retry after 1.777929722s: waiting for machine to come up
	I0924 00:00:26.924735   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:26.925200   26218 main.go:141] libmachine: (ha-959539-m02) DBG | unable to find current IP address of domain ha-959539-m02 in network mk-ha-959539
	I0924 00:00:26.925237   26218 main.go:141] libmachine: (ha-959539-m02) DBG | I0924 00:00:26.925162   26566 retry.go:31] will retry after 3.141763872s: waiting for machine to come up
	I0924 00:00:30.069494   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:30.070014   26218 main.go:141] libmachine: (ha-959539-m02) DBG | unable to find current IP address of domain ha-959539-m02 in network mk-ha-959539
	I0924 00:00:30.070036   26218 main.go:141] libmachine: (ha-959539-m02) DBG | I0924 00:00:30.069955   26566 retry.go:31] will retry after 3.647403595s: waiting for machine to come up
	I0924 00:00:33.721303   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:33.721786   26218 main.go:141] libmachine: (ha-959539-m02) DBG | unable to find current IP address of domain ha-959539-m02 in network mk-ha-959539
	I0924 00:00:33.721804   26218 main.go:141] libmachine: (ha-959539-m02) DBG | I0924 00:00:33.721753   26566 retry.go:31] will retry after 4.027076232s: waiting for machine to come up
	I0924 00:00:37.752592   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:37.753064   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has current primary IP address 192.168.39.71 and MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:37.753095   26218 main.go:141] libmachine: (ha-959539-m02) Found IP for machine: 192.168.39.71
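	The back-off loop above re-queries libvirt until a DHCP lease appears for the domain. A minimal hand-rolled equivalent (a sketch, not minikube's retry.go) would poll virsh for the lease:
	  until virsh -c qemu:///system domifaddr ha-959539-m02 | grep -q ipv4; do
	    sleep 2
	  done
	  virsh -c qemu:///system domifaddr ha-959539-m02   # prints the leased IPv4 address, here 192.168.39.71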
	I0924 00:00:37.753104   26218 main.go:141] libmachine: (ha-959539-m02) Reserving static IP address...
	I0924 00:00:37.753574   26218 main.go:141] libmachine: (ha-959539-m02) DBG | unable to find host DHCP lease matching {name: "ha-959539-m02", mac: "52:54:00:7e:17:08", ip: "192.168.39.71"} in network mk-ha-959539
	I0924 00:00:37.827442   26218 main.go:141] libmachine: (ha-959539-m02) DBG | Getting to WaitForSSH function...
	I0924 00:00:37.827474   26218 main.go:141] libmachine: (ha-959539-m02) Reserved static IP address: 192.168.39.71
	I0924 00:00:37.827486   26218 main.go:141] libmachine: (ha-959539-m02) Waiting for SSH to be available...
	I0924 00:00:37.830110   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:37.830505   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:17:08", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:00:29 +0000 UTC Type:0 Mac:52:54:00:7e:17:08 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:minikube Clientid:01:52:54:00:7e:17:08}
	I0924 00:00:37.830530   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:37.830672   26218 main.go:141] libmachine: (ha-959539-m02) DBG | Using SSH client type: external
	I0924 00:00:37.830710   26218 main.go:141] libmachine: (ha-959539-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m02/id_rsa (-rw-------)
	I0924 00:00:37.830778   26218 main.go:141] libmachine: (ha-959539-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.71 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 00:00:37.830803   26218 main.go:141] libmachine: (ha-959539-m02) DBG | About to run SSH command:
	I0924 00:00:37.830826   26218 main.go:141] libmachine: (ha-959539-m02) DBG | exit 0
	I0924 00:00:37.960544   26218 main.go:141] libmachine: (ha-959539-m02) DBG | SSH cmd err, output: <nil>: 
	I0924 00:00:37.960821   26218 main.go:141] libmachine: (ha-959539-m02) KVM machine creation complete!
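	The external SSH probe logged at 00:00:37.830 expands to an ordinary ssh invocation; reconstructed from those flags (same options, key, and target, reordered into the usual "options before destination" form), it is approximately:
	  ssh -F /dev/null \
	    -o ConnectionAttempts=3 -o ConnectTimeout=10 \
	    -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
	    -o PasswordAuthentication=no -o ServerAliveInterval=60 \
	    -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	    -o IdentitiesOnly=yes \
	    -i /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m02/id_rsa \
	    -p 22 docker@192.168.39.71 'exit 0'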
	I0924 00:00:37.961319   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetConfigRaw
	I0924 00:00:37.961983   26218 main.go:141] libmachine: (ha-959539-m02) Calling .DriverName
	I0924 00:00:37.962222   26218 main.go:141] libmachine: (ha-959539-m02) Calling .DriverName
	I0924 00:00:37.962419   26218 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0924 00:00:37.962460   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetState
	I0924 00:00:37.963697   26218 main.go:141] libmachine: Detecting operating system of created instance...
	I0924 00:00:37.963714   26218 main.go:141] libmachine: Waiting for SSH to be available...
	I0924 00:00:37.963734   26218 main.go:141] libmachine: Getting to WaitForSSH function...
	I0924 00:00:37.963742   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHHostname
	I0924 00:00:37.966078   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:37.966462   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:17:08", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:00:29 +0000 UTC Type:0 Mac:52:54:00:7e:17:08 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ha-959539-m02 Clientid:01:52:54:00:7e:17:08}
	I0924 00:00:37.966483   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:37.966660   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHPort
	I0924 00:00:37.966813   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHKeyPath
	I0924 00:00:37.966945   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHKeyPath
	I0924 00:00:37.967054   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHUsername
	I0924 00:00:37.967205   26218 main.go:141] libmachine: Using SSH client type: native
	I0924 00:00:37.967481   26218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0924 00:00:37.967492   26218 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0924 00:00:38.079589   26218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 00:00:38.079610   26218 main.go:141] libmachine: Detecting the provisioner...
	I0924 00:00:38.079617   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHHostname
	I0924 00:00:38.082503   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:38.082929   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:17:08", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:00:29 +0000 UTC Type:0 Mac:52:54:00:7e:17:08 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ha-959539-m02 Clientid:01:52:54:00:7e:17:08}
	I0924 00:00:38.082950   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:38.083140   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHPort
	I0924 00:00:38.083340   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHKeyPath
	I0924 00:00:38.083509   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHKeyPath
	I0924 00:00:38.083666   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHUsername
	I0924 00:00:38.083825   26218 main.go:141] libmachine: Using SSH client type: native
	I0924 00:00:38.083986   26218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0924 00:00:38.083997   26218 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0924 00:00:38.197000   26218 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0924 00:00:38.197103   26218 main.go:141] libmachine: found compatible host: buildroot
	I0924 00:00:38.197116   26218 main.go:141] libmachine: Provisioning with buildroot...
	I0924 00:00:38.197126   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetMachineName
	I0924 00:00:38.197376   26218 buildroot.go:166] provisioning hostname "ha-959539-m02"
	I0924 00:00:38.197411   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetMachineName
	I0924 00:00:38.197604   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHHostname
	I0924 00:00:38.200444   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:38.200771   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:17:08", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:00:29 +0000 UTC Type:0 Mac:52:54:00:7e:17:08 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ha-959539-m02 Clientid:01:52:54:00:7e:17:08}
	I0924 00:00:38.200795   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:38.200984   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHPort
	I0924 00:00:38.201176   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHKeyPath
	I0924 00:00:38.201357   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHKeyPath
	I0924 00:00:38.201493   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHUsername
	I0924 00:00:38.201648   26218 main.go:141] libmachine: Using SSH client type: native
	I0924 00:00:38.201800   26218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0924 00:00:38.201815   26218 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-959539-m02 && echo "ha-959539-m02" | sudo tee /etc/hostname
	I0924 00:00:38.325460   26218 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-959539-m02
	
	I0924 00:00:38.325485   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHHostname
	I0924 00:00:38.328105   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:38.328475   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:17:08", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:00:29 +0000 UTC Type:0 Mac:52:54:00:7e:17:08 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ha-959539-m02 Clientid:01:52:54:00:7e:17:08}
	I0924 00:00:38.328501   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:38.328664   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHPort
	I0924 00:00:38.328838   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHKeyPath
	I0924 00:00:38.329112   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHKeyPath
	I0924 00:00:38.329333   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHUsername
	I0924 00:00:38.329513   26218 main.go:141] libmachine: Using SSH client type: native
	I0924 00:00:38.329688   26218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0924 00:00:38.329704   26218 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-959539-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-959539-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-959539-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 00:00:38.449811   26218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 00:00:38.449850   26218 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19696-7623/.minikube CaCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19696-7623/.minikube}
	I0924 00:00:38.449870   26218 buildroot.go:174] setting up certificates
	I0924 00:00:38.449890   26218 provision.go:84] configureAuth start
	I0924 00:00:38.449902   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetMachineName
	I0924 00:00:38.450206   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetIP
	I0924 00:00:38.453211   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:38.453603   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:17:08", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:00:29 +0000 UTC Type:0 Mac:52:54:00:7e:17:08 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ha-959539-m02 Clientid:01:52:54:00:7e:17:08}
	I0924 00:00:38.453632   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:38.453799   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHHostname
	I0924 00:00:38.456450   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:38.456868   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:17:08", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:00:29 +0000 UTC Type:0 Mac:52:54:00:7e:17:08 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ha-959539-m02 Clientid:01:52:54:00:7e:17:08}
	I0924 00:00:38.456897   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:38.457045   26218 provision.go:143] copyHostCerts
	I0924 00:00:38.457081   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0924 00:00:38.457120   26218 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem, removing ...
	I0924 00:00:38.457131   26218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0924 00:00:38.457206   26218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem (1082 bytes)
	I0924 00:00:38.457299   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0924 00:00:38.457319   26218 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem, removing ...
	I0924 00:00:38.457327   26218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0924 00:00:38.457353   26218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem (1123 bytes)
	I0924 00:00:38.457401   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0924 00:00:38.457420   26218 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem, removing ...
	I0924 00:00:38.457427   26218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0924 00:00:38.457450   26218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem (1679 bytes)
	I0924 00:00:38.457543   26218 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem org=jenkins.ha-959539-m02 san=[127.0.0.1 192.168.39.71 ha-959539-m02 localhost minikube]
	I0924 00:00:38.700010   26218 provision.go:177] copyRemoteCerts
	I0924 00:00:38.700077   26218 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 00:00:38.700106   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHHostname
	I0924 00:00:38.703047   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:38.703677   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:17:08", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:00:29 +0000 UTC Type:0 Mac:52:54:00:7e:17:08 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ha-959539-m02 Clientid:01:52:54:00:7e:17:08}
	I0924 00:00:38.703706   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:38.703938   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHPort
	I0924 00:00:38.704136   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHKeyPath
	I0924 00:00:38.704273   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHUsername
	I0924 00:00:38.704412   26218 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m02/id_rsa Username:docker}
	I0924 00:00:38.790480   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0924 00:00:38.790557   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0924 00:00:38.814753   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0924 00:00:38.814837   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0924 00:00:38.838252   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0924 00:00:38.838325   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 00:00:38.861203   26218 provision.go:87] duration metric: took 411.299288ms to configureAuth
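	configureAuth generates a server certificate with the SANs listed above (127.0.0.1, 192.168.39.71, ha-959539-m02, localhost, minikube) and copies it to /etc/docker on the guest. A quick spot-check over the same SSH session, assuming openssl is present in the guest image (a sketch, not part of the test), would be:
	  ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem
	  sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'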
	I0924 00:00:38.861229   26218 buildroot.go:189] setting minikube options for container-runtime
	I0924 00:00:38.861474   26218 config.go:182] Loaded profile config "ha-959539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 00:00:38.861569   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHHostname
	I0924 00:00:38.864432   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:38.864889   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:17:08", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:00:29 +0000 UTC Type:0 Mac:52:54:00:7e:17:08 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ha-959539-m02 Clientid:01:52:54:00:7e:17:08}
	I0924 00:00:38.864918   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:38.865150   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHPort
	I0924 00:00:38.865356   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHKeyPath
	I0924 00:00:38.865560   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHKeyPath
	I0924 00:00:38.865731   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHUsername
	I0924 00:00:38.865903   26218 main.go:141] libmachine: Using SSH client type: native
	I0924 00:00:38.866055   26218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0924 00:00:38.866068   26218 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 00:00:39.108025   26218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 00:00:39.108048   26218 main.go:141] libmachine: Checking connection to Docker...
	I0924 00:00:39.108055   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetURL
	I0924 00:00:39.109415   26218 main.go:141] libmachine: (ha-959539-m02) DBG | Using libvirt version 6000000
	I0924 00:00:39.111778   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:39.112117   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:17:08", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:00:29 +0000 UTC Type:0 Mac:52:54:00:7e:17:08 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ha-959539-m02 Clientid:01:52:54:00:7e:17:08}
	I0924 00:00:39.112136   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:39.112442   26218 main.go:141] libmachine: Docker is up and running!
	I0924 00:00:39.112459   26218 main.go:141] libmachine: Reticulating splines...
	I0924 00:00:39.112465   26218 client.go:171] duration metric: took 24.035864378s to LocalClient.Create
	I0924 00:00:39.112488   26218 start.go:167] duration metric: took 24.035928123s to libmachine.API.Create "ha-959539"
	I0924 00:00:39.112505   26218 start.go:293] postStartSetup for "ha-959539-m02" (driver="kvm2")
	I0924 00:00:39.112530   26218 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 00:00:39.112552   26218 main.go:141] libmachine: (ha-959539-m02) Calling .DriverName
	I0924 00:00:39.112758   26218 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 00:00:39.112780   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHHostname
	I0924 00:00:39.115333   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:39.115725   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:17:08", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:00:29 +0000 UTC Type:0 Mac:52:54:00:7e:17:08 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ha-959539-m02 Clientid:01:52:54:00:7e:17:08}
	I0924 00:00:39.115753   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:39.115918   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHPort
	I0924 00:00:39.116088   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHKeyPath
	I0924 00:00:39.116213   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHUsername
	I0924 00:00:39.116357   26218 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m02/id_rsa Username:docker}
	I0924 00:00:39.202485   26218 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 00:00:39.206952   26218 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 00:00:39.206985   26218 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/addons for local assets ...
	I0924 00:00:39.207071   26218 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/files for local assets ...
	I0924 00:00:39.207148   26218 filesync.go:149] local asset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> 147932.pem in /etc/ssl/certs
	I0924 00:00:39.207163   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> /etc/ssl/certs/147932.pem
	I0924 00:00:39.207242   26218 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 00:00:39.216574   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /etc/ssl/certs/147932.pem (1708 bytes)
	I0924 00:00:39.239506   26218 start.go:296] duration metric: took 126.985038ms for postStartSetup
	I0924 00:00:39.239558   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetConfigRaw
	I0924 00:00:39.240153   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetIP
	I0924 00:00:39.242816   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:39.243178   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:17:08", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:00:29 +0000 UTC Type:0 Mac:52:54:00:7e:17:08 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ha-959539-m02 Clientid:01:52:54:00:7e:17:08}
	I0924 00:00:39.243207   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:39.243507   26218 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/config.json ...
	I0924 00:00:39.243767   26218 start.go:128] duration metric: took 24.186030679s to createHost
	I0924 00:00:39.243797   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHHostname
	I0924 00:00:39.246320   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:39.246794   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:17:08", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:00:29 +0000 UTC Type:0 Mac:52:54:00:7e:17:08 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ha-959539-m02 Clientid:01:52:54:00:7e:17:08}
	I0924 00:00:39.246819   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:39.246947   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHPort
	I0924 00:00:39.247124   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHKeyPath
	I0924 00:00:39.247283   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHKeyPath
	I0924 00:00:39.247416   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHUsername
	I0924 00:00:39.247561   26218 main.go:141] libmachine: Using SSH client type: native
	I0924 00:00:39.247714   26218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0924 00:00:39.247724   26218 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 00:00:39.360845   26218 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727136039.320054599
	
	I0924 00:00:39.360875   26218 fix.go:216] guest clock: 1727136039.320054599
	I0924 00:00:39.360884   26218 fix.go:229] Guest: 2024-09-24 00:00:39.320054599 +0000 UTC Remote: 2024-09-24 00:00:39.243782701 +0000 UTC m=+72.471728258 (delta=76.271898ms)
	I0924 00:00:39.360910   26218 fix.go:200] guest clock delta is within tolerance: 76.271898ms
	I0924 00:00:39.360916   26218 start.go:83] releasing machines lock for "ha-959539-m02", held for 24.303253954s
	I0924 00:00:39.360955   26218 main.go:141] libmachine: (ha-959539-m02) Calling .DriverName
	I0924 00:00:39.361201   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetIP
	I0924 00:00:39.363900   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:39.364402   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:17:08", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:00:29 +0000 UTC Type:0 Mac:52:54:00:7e:17:08 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ha-959539-m02 Clientid:01:52:54:00:7e:17:08}
	I0924 00:00:39.364444   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:39.366881   26218 out.go:177] * Found network options:
	I0924 00:00:39.368856   26218 out.go:177]   - NO_PROXY=192.168.39.231
	W0924 00:00:39.370661   26218 proxy.go:119] fail to check proxy env: Error ip not in block
	I0924 00:00:39.370699   26218 main.go:141] libmachine: (ha-959539-m02) Calling .DriverName
	I0924 00:00:39.371263   26218 main.go:141] libmachine: (ha-959539-m02) Calling .DriverName
	I0924 00:00:39.371455   26218 main.go:141] libmachine: (ha-959539-m02) Calling .DriverName
	I0924 00:00:39.371538   26218 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 00:00:39.371594   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHHostname
	W0924 00:00:39.371611   26218 proxy.go:119] fail to check proxy env: Error ip not in block
	I0924 00:00:39.371685   26218 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 00:00:39.371706   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHHostname
	I0924 00:00:39.374357   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:39.374663   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:39.374694   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:17:08", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:00:29 +0000 UTC Type:0 Mac:52:54:00:7e:17:08 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ha-959539-m02 Clientid:01:52:54:00:7e:17:08}
	I0924 00:00:39.374712   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:39.374850   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHPort
	I0924 00:00:39.375045   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHKeyPath
	I0924 00:00:39.375085   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:17:08", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:00:29 +0000 UTC Type:0 Mac:52:54:00:7e:17:08 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ha-959539-m02 Clientid:01:52:54:00:7e:17:08}
	I0924 00:00:39.375111   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:39.375202   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHUsername
	I0924 00:00:39.375362   26218 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m02/id_rsa Username:docker}
	I0924 00:00:39.375377   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHPort
	I0924 00:00:39.375561   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHKeyPath
	I0924 00:00:39.375696   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHUsername
	I0924 00:00:39.375813   26218 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m02/id_rsa Username:docker}
	I0924 00:00:39.627921   26218 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 00:00:39.633495   26218 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 00:00:39.633553   26218 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 00:00:39.648951   26218 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 00:00:39.648983   26218 start.go:495] detecting cgroup driver to use...
	I0924 00:00:39.649040   26218 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 00:00:39.665083   26218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 00:00:39.679257   26218 docker.go:217] disabling cri-docker service (if available) ...
	I0924 00:00:39.679308   26218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 00:00:39.692687   26218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 00:00:39.705979   26218 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 00:00:39.817630   26218 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 00:00:39.947466   26218 docker.go:233] disabling docker service ...
	I0924 00:00:39.947532   26218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 00:00:39.969264   26218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 00:00:39.982704   26218 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 00:00:40.112775   26218 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 00:00:40.227163   26218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 00:00:40.240677   26218 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 00:00:40.258433   26218 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 00:00:40.258483   26218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:00:40.268957   26218 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 00:00:40.269028   26218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:00:40.279413   26218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:00:40.289512   26218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:00:40.299715   26218 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 00:00:40.310010   26218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:00:40.320219   26218 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:00:40.336748   26218 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
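
The run of sed edits above leaves /etc/crio/crio.conf.d/02-crio.conf pointing CRI-O at registry.k8s.io/pause:3.10, using cgroupfs as the cgroup manager with conmon in the "pod" cgroup, and allowing unprivileged low ports. A sketch that writes an equivalent drop-in in one shot (the file body is reconstructed from the edits in the log and CRI-O's documented section layout, not copied from minikube):

package main

import (
    "log"
    "os"
)

// dropIn collects the settings the log applies one sed at a time.
const dropIn = `[crio.image]
pause_image = "registry.k8s.io/pause:3.10"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
`

func main() {
    // Writing under /etc requires root; 0644 matches typical config permissions.
    if err := os.WriteFile("/etc/crio/crio.conf.d/02-crio.conf", []byte(dropIn), 0o644); err != nil {
        log.Fatal(err)
    }
}
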
	I0924 00:00:40.346864   26218 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 00:00:40.355761   26218 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 00:00:40.355825   26218 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 00:00:40.368724   26218 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
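
The sysctl probe fails because br_netfilter is not loaded yet, so the module is probed and IPv4 forwarding is switched on before CRI-O is restarted. A standalone Go equivalent of those two fixes (requires root):

package main

import (
    "log"
    "os"
    "os/exec"
)

func main() {
    // If the bridge netfilter sysctl file is missing, the module is not loaded yet.
    if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
        if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
            log.Fatalf("modprobe br_netfilter: %v: %s", err, out)
        }
    }
    // Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
    if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
        log.Fatal(err)
    }
}
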
	I0924 00:00:40.378522   26218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 00:00:40.486107   26218 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 00:00:40.577907   26218 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 00:00:40.577981   26218 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 00:00:40.582555   26218 start.go:563] Will wait 60s for crictl version
	I0924 00:00:40.582622   26218 ssh_runner.go:195] Run: which crictl
	I0924 00:00:40.586219   26218 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 00:00:40.622719   26218 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 00:00:40.622812   26218 ssh_runner.go:195] Run: crio --version
	I0924 00:00:40.650450   26218 ssh_runner.go:195] Run: crio --version
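
After restarting CRI-O, the log waits up to 60s for the socket to appear and for crictl to report a version before declaring the runtime ready. A sketch of that poll-with-deadline pattern (socket path as in the log; the waitFor helper is illustrative):

package main

import (
    "fmt"
    "log"
    "os"
    "os/exec"
    "time"
)

// waitFor retries fn every 500ms until it succeeds or the deadline passes.
func waitFor(timeout time.Duration, fn func() error) error {
    deadline := time.Now().Add(timeout)
    for {
        err := fn()
        if err == nil {
            return nil
        }
        if time.Now().After(deadline) {
            return fmt.Errorf("timed out after %s: %w", timeout, err)
        }
        time.Sleep(500 * time.Millisecond)
    }
}

func main() {
    sock := "/var/run/crio/crio.sock"
    if err := waitFor(60*time.Second, func() error {
        _, err := os.Stat(sock)
        return err
    }); err != nil {
        log.Fatal(err)
    }
    if err := waitFor(60*time.Second, func() error {
        return exec.Command("crictl", "version").Run()
    }); err != nil {
        log.Fatal(err)
    }
    fmt.Println("CRI-O is up")
}
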
	I0924 00:00:40.681082   26218 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 00:00:40.682576   26218 out.go:177]   - env NO_PROXY=192.168.39.231
	I0924 00:00:40.683809   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetIP
	I0924 00:00:40.686666   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:40.687065   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:17:08", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:00:29 +0000 UTC Type:0 Mac:52:54:00:7e:17:08 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ha-959539-m02 Clientid:01:52:54:00:7e:17:08}
	I0924 00:00:40.687087   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:40.687306   26218 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0924 00:00:40.691475   26218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 00:00:40.703474   26218 mustload.go:65] Loading cluster: ha-959539
	I0924 00:00:40.703695   26218 config.go:182] Loaded profile config "ha-959539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 00:00:40.703966   26218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:00:40.704003   26218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:00:40.718859   26218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40045
	I0924 00:00:40.719296   26218 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:00:40.719825   26218 main.go:141] libmachine: Using API Version  1
	I0924 00:00:40.719845   26218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:00:40.720145   26218 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:00:40.720370   26218 main.go:141] libmachine: (ha-959539) Calling .GetState
	I0924 00:00:40.721815   26218 host.go:66] Checking if "ha-959539" exists ...
	I0924 00:00:40.722094   26218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:00:40.722128   26218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:00:40.736945   26218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43023
	I0924 00:00:40.737421   26218 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:00:40.737905   26218 main.go:141] libmachine: Using API Version  1
	I0924 00:00:40.737924   26218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:00:40.738222   26218 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:00:40.738511   26218 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0924 00:00:40.738689   26218 certs.go:68] Setting up /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539 for IP: 192.168.39.71
	I0924 00:00:40.738704   26218 certs.go:194] generating shared ca certs ...
	I0924 00:00:40.738719   26218 certs.go:226] acquiring lock for ca certs: {Name:mk91de42c259e96c17f08dd58c70c00821bd0735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:00:40.738861   26218 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key
	I0924 00:00:40.738903   26218 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key
	I0924 00:00:40.738915   26218 certs.go:256] generating profile certs ...
	I0924 00:00:40.738991   26218 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/client.key
	I0924 00:00:40.739018   26218 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key.b2e74be0
	I0924 00:00:40.739035   26218 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt.b2e74be0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.231 192.168.39.71 192.168.39.254]
	I0924 00:00:41.143984   26218 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt.b2e74be0 ...
	I0924 00:00:41.144014   26218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt.b2e74be0: {Name:mk20b6843b0401b0c56e7890c984fa68d261314f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:00:41.144175   26218 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key.b2e74be0 ...
	I0924 00:00:41.144188   26218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key.b2e74be0: {Name:mk7575fb7ddfde936c86d46545e958478f16edb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:00:41.144260   26218 certs.go:381] copying /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt.b2e74be0 -> /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt
	I0924 00:00:41.144430   26218 certs.go:385] copying /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key.b2e74be0 -> /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key
	I0924 00:00:41.144555   26218 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.key
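
The regenerated apiserver certificate is what makes the HA topology work: its IP SANs cover the in-cluster service IP 10.96.0.1, localhost, both control-plane node IPs and the kube-vip VIP 192.168.39.254, so kubectl and kubelets can dial any of them over TLS. A compact crypto/x509 sketch of issuing such a certificate (toy in-memory CA standing in for minikubeCA; not minikube's code):

package main

import (
    "crypto/ecdsa"
    "crypto/elliptic"
    "crypto/rand"
    "crypto/x509"
    "crypto/x509/pkix"
    "encoding/pem"
    "log"
    "math/big"
    "net"
    "os"
    "time"
)

func main() {
    // Toy CA key/cert, standing in for minikubeCA.
    caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    caTmpl := &x509.Certificate{
        SerialNumber:          big.NewInt(1),
        Subject:               pkix.Name{CommonName: "minikubeCA"},
        NotBefore:             time.Now(),
        NotAfter:              time.Now().AddDate(10, 0, 0),
        IsCA:                  true,
        KeyUsage:              x509.KeyUsageCertSign,
        BasicConstraintsValid: true,
    }
    caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    if err != nil {
        log.Fatal(err)
    }
    caCert, _ := x509.ParseCertificate(caDER)

    // API server certificate with the SANs listed in the log.
    srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    srvTmpl := &x509.Certificate{
        SerialNumber: big.NewInt(2),
        Subject:      pkix.Name{CommonName: "minikube"},
        NotBefore:    time.Now(),
        NotAfter:     time.Now().AddDate(3, 0, 0),
        KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        IPAddresses: []net.IP{
            net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
            net.ParseIP("192.168.39.231"), net.ParseIP("192.168.39.71"), net.ParseIP("192.168.39.254"),
        },
    }
    srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    if err != nil {
        log.Fatal(err)
    }
    pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
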
	I0924 00:00:41.144571   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0924 00:00:41.144584   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0924 00:00:41.144594   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0924 00:00:41.144605   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0924 00:00:41.144615   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0924 00:00:41.144625   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0924 00:00:41.144635   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0924 00:00:41.144645   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0924 00:00:41.144688   26218 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem (1338 bytes)
	W0924 00:00:41.144720   26218 certs.go:480] ignoring /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793_empty.pem, impossibly tiny 0 bytes
	I0924 00:00:41.144729   26218 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem (1675 bytes)
	I0924 00:00:41.144749   26218 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem (1082 bytes)
	I0924 00:00:41.144772   26218 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem (1123 bytes)
	I0924 00:00:41.144793   26218 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem (1679 bytes)
	I0924 00:00:41.144829   26218 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem (1708 bytes)
	I0924 00:00:41.144853   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0924 00:00:41.144868   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem -> /usr/share/ca-certificates/14793.pem
	I0924 00:00:41.144880   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> /usr/share/ca-certificates/147932.pem
	I0924 00:00:41.144915   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0924 00:00:41.148030   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:00:41.148427   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0924 00:00:41.148454   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:00:41.148614   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0924 00:00:41.148808   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0924 00:00:41.149000   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0924 00:00:41.149135   26218 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/id_rsa Username:docker}
	I0924 00:00:41.228803   26218 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0924 00:00:41.233988   26218 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0924 00:00:41.244943   26218 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0924 00:00:41.249126   26218 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0924 00:00:41.259697   26218 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0924 00:00:41.263836   26218 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0924 00:00:41.275144   26218 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0924 00:00:41.279454   26218 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0924 00:00:41.290396   26218 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0924 00:00:41.295094   26218 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0924 00:00:41.307082   26218 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0924 00:00:41.310877   26218 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0924 00:00:41.325438   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 00:00:41.350629   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 00:00:41.374907   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 00:00:41.399716   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0924 00:00:41.424061   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0924 00:00:41.447992   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 00:00:41.471662   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 00:00:41.494955   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0924 00:00:41.517872   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 00:00:41.540286   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem --> /usr/share/ca-certificates/14793.pem (1338 bytes)
	I0924 00:00:41.563177   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /usr/share/ca-certificates/147932.pem (1708 bytes)
	I0924 00:00:41.585906   26218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0924 00:00:41.601283   26218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0924 00:00:41.617635   26218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0924 00:00:41.633218   26218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0924 00:00:41.648995   26218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0924 00:00:41.664675   26218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0924 00:00:41.680596   26218 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0924 00:00:41.696250   26218 ssh_runner.go:195] Run: openssl version
	I0924 00:00:41.701694   26218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 00:00:41.711789   26218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 00:00:41.716030   26218 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0924 00:00:41.716101   26218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 00:00:41.721933   26218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 00:00:41.732158   26218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14793.pem && ln -fs /usr/share/ca-certificates/14793.pem /etc/ssl/certs/14793.pem"
	I0924 00:00:41.742443   26218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14793.pem
	I0924 00:00:41.746788   26218 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 23:55 /usr/share/ca-certificates/14793.pem
	I0924 00:00:41.746839   26218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14793.pem
	I0924 00:00:41.752121   26218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14793.pem /etc/ssl/certs/51391683.0"
	I0924 00:00:41.763012   26218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147932.pem && ln -fs /usr/share/ca-certificates/147932.pem /etc/ssl/certs/147932.pem"
	I0924 00:00:41.774793   26218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147932.pem
	I0924 00:00:41.779310   26218 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 23:55 /usr/share/ca-certificates/147932.pem
	I0924 00:00:41.779366   26218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147932.pem
	I0924 00:00:41.784990   26218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147932.pem /etc/ssl/certs/3ec20f2e.0"
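
The ln -fs steps give each trusted PEM the OpenSSL subject-hash filename (b5213941.0, 51391683.0, 3ec20f2e.0) that TLS libraries use to look up CAs under /etc/ssl/certs. A sketch of computing the hash and creating the link, shelling out to openssl just as the log does (linkByHash is an illustrative helper name):

package main

import (
    "fmt"
    "log"
    "os"
    "os/exec"
    "path/filepath"
    "strings"
)

func linkByHash(pemPath string) error {
    // `openssl x509 -hash -noout` prints the subject hash used for c_rehash-style names.
    out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    if err != nil {
        return fmt.Errorf("hash %s: %w", pemPath, err)
    }
    hash := strings.TrimSpace(string(out))
    link := filepath.Join("/etc/ssl/certs", hash+".0")
    // Replace any stale link, mirroring `ln -fs`.
    _ = os.Remove(link)
    return os.Symlink(pemPath, link)
}

func main() {
    for _, p := range []string{
        "/usr/share/ca-certificates/minikubeCA.pem",
        "/usr/share/ca-certificates/14793.pem",
        "/usr/share/ca-certificates/147932.pem",
    } {
        if err := linkByHash(p); err != nil {
            log.Fatal(err)
        }
    }
}
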
	I0924 00:00:41.795333   26218 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 00:00:41.799293   26218 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0924 00:00:41.799344   26218 kubeadm.go:934] updating node {m02 192.168.39.71 8443 v1.31.1 crio true true} ...
	I0924 00:00:41.799409   26218 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-959539-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.71
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-959539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
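
The drop-in above overrides kubelet's ExecStart so the secondary boots with its own hostname override and node IP. A small text/template sketch that renders that drop-in from the per-node values seen in the log (struct and template names are illustrative):

package main

import (
    "fmt"
    "os"
    "text/template"
)

// Parameters that differ per node in the log.
type kubeletUnit struct {
    KubernetesVersion string
    Hostname          string
    NodeIP            string
}

const unitTmpl = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
    t := template.Must(template.New("unit").Parse(unitTmpl))
    u := kubeletUnit{KubernetesVersion: "v1.31.1", Hostname: "ha-959539-m02", NodeIP: "192.168.39.71"}
    if err := t.Execute(os.Stdout, u); err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
}
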
	I0924 00:00:41.799432   26218 kube-vip.go:115] generating kube-vip config ...
	I0924 00:00:41.799464   26218 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0924 00:00:41.816587   26218 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0924 00:00:41.816663   26218 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
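
This manifest is later written to /etc/kubernetes/manifests/kube-vip.yaml (see the scp below), so kubelet runs kube-vip as a static pod that holds the 192.168.39.254 VIP and, with lb_enable, load-balances port 8443 across control planes. A trimmed text/template sketch covering only the node-specific knobs; the full manifest is exactly as logged above, and everything omitted here is fixed boilerplate:

package main

import (
    "os"
    "text/template"
)

// The values that vary in the generated manifest.
type vipConfig struct {
    VIP       string
    Interface string
    Port      string
    Image     string
}

// Only the env/image fragment of the static pod is templated here.
const envTmpl = `    env:
    - name: port
      value: "{{.Port}}"
    - name: vip_interface
      value: {{.Interface}}
    - name: cp_enable
      value: "true"
    - name: address
      value: {{.VIP}}
    - name: lb_enable
      value: "true"
    image: {{.Image}}
`

func main() {
    t := template.Must(template.New("vip").Parse(envTmpl))
    cfg := vipConfig{VIP: "192.168.39.254", Interface: "eth0", Port: "8443", Image: "ghcr.io/kube-vip/kube-vip:v0.8.0"}
    if err := t.Execute(os.Stdout, cfg); err != nil {
        panic(err)
    }
}
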
	I0924 00:00:41.816743   26218 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 00:00:41.827548   26218 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0924 00:00:41.827613   26218 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0924 00:00:41.837289   26218 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0924 00:00:41.837325   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0924 00:00:41.837335   26218 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19696-7623/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0924 00:00:41.837374   26218 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0924 00:00:41.837335   26218 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19696-7623/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0924 00:00:41.841429   26218 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0924 00:00:41.841451   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0924 00:00:42.671785   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0924 00:00:42.671868   26218 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0924 00:00:42.676727   26218 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0924 00:00:42.676769   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0924 00:00:42.782086   26218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 00:00:42.829038   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0924 00:00:42.829147   26218 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0924 00:00:42.840769   26218 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0924 00:00:42.840809   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
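
Each missing binary is fetched from dl.k8s.io and verified against the published .sha256 file named in the checksum= query above before being copied into /var/lib/minikube/binaries. A sketch of that download-and-verify step for kubelet (URLs as in the log; the whole file is buffered in memory for brevity):

package main

import (
    "crypto/sha256"
    "encoding/hex"
    "fmt"
    "io"
    "log"
    "net/http"
    "os"
    "strings"
)

func fetch(url string) ([]byte, error) {
    resp, err := http.Get(url)
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()
    if resp.StatusCode != http.StatusOK {
        return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
    }
    return io.ReadAll(resp.Body)
}

func main() {
    base := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet"
    bin, err := fetch(base)
    if err != nil {
        log.Fatal(err)
    }
    sumFile, err := fetch(base + ".sha256")
    if err != nil {
        log.Fatal(err)
    }
    // The .sha256 file holds the hex digest (optionally followed by a file name).
    want := strings.Fields(string(sumFile))[0]
    sum := sha256.Sum256(bin)
    got := hex.EncodeToString(sum[:])
    if got != want {
        log.Fatalf("checksum mismatch: got %s want %s", got, want)
    }
    if err := os.WriteFile("kubelet", bin, 0o755); err != nil {
        log.Fatal(err)
    }
    fmt.Println("kubelet verified and written")
}
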
	I0924 00:00:43.263339   26218 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0924 00:00:43.276175   26218 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0924 00:00:43.295973   26218 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 00:00:43.314983   26218 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0924 00:00:43.331751   26218 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0924 00:00:43.335923   26218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 00:00:43.347682   26218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 00:00:43.465742   26218 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 00:00:43.485298   26218 host.go:66] Checking if "ha-959539" exists ...
	I0924 00:00:43.485784   26218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:00:43.485844   26218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:00:43.501576   26218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46265
	I0924 00:00:43.502143   26218 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:00:43.502637   26218 main.go:141] libmachine: Using API Version  1
	I0924 00:00:43.502661   26218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:00:43.502992   26218 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:00:43.503177   26218 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0924 00:00:43.503343   26218 start.go:317] joinCluster: &{Name:ha-959539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-959539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.71 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 00:00:43.503440   26218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0924 00:00:43.503454   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0924 00:00:43.506923   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:00:43.507450   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0924 00:00:43.507479   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:00:43.507654   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0924 00:00:43.507814   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0924 00:00:43.507940   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0924 00:00:43.508061   26218 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/id_rsa Username:docker}
	I0924 00:00:43.662724   26218 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.71 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 00:00:43.662763   26218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token pid2mx.knnb3pqsxosow7jx --discovery-token-ca-cert-hash sha256:2d4dd9f37d73db6513005d0e16c5a35989d3b102eed8cac0bf67b360d3396aa8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-959539-m02 --control-plane --apiserver-advertise-address=192.168.39.71 --apiserver-bind-port=8443"
	I0924 00:01:07.367829   26218 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token pid2mx.knnb3pqsxosow7jx --discovery-token-ca-cert-hash sha256:2d4dd9f37d73db6513005d0e16c5a35989d3b102eed8cac0bf67b360d3396aa8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-959539-m02 --control-plane --apiserver-advertise-address=192.168.39.71 --apiserver-bind-port=8443": (23.705046169s)
	I0924 00:01:07.367865   26218 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0924 00:01:07.953375   26218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-959539-m02 minikube.k8s.io/updated_at=2024_09_24T00_01_07_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c minikube.k8s.io/name=ha-959539 minikube.k8s.io/primary=false
	I0924 00:01:08.091888   26218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-959539-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0924 00:01:08.215534   26218 start.go:319] duration metric: took 24.71218473s to joinCluster
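
Joining m02 as a second control plane boils down to asking the primary for a fresh join command (kubeadm token create --print-join-command --ttl=0) and re-running it on the new node with the control-plane, CRI socket and advertise-address flags shown above, then relabeling the node and removing its NoSchedule taint. A sketch of composing that command on the primary (token and CA hash come from kubeadm's output at run time; only the extra flags are taken from the log):

package main

import (
    "fmt"
    "log"
    "os/exec"
    "strings"
)

func main() {
    // On the primary control plane: print a join command with a non-expiring token.
    out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
    if err != nil {
        log.Fatal(err)
    }
    joinCmd := strings.TrimSpace(string(out))

    // Extra flags the log appends for a CRI-O control-plane join on the secondary node.
    extra := []string{
        "--ignore-preflight-errors=all",
        "--cri-socket", "unix:///var/run/crio/crio.sock",
        "--node-name=ha-959539-m02",
        "--control-plane",
        "--apiserver-advertise-address=192.168.39.71",
        "--apiserver-bind-port=8443",
    }
    // The assembled command is what gets executed over SSH on the joining node.
    fmt.Println(joinCmd + " " + strings.Join(extra, " "))
}
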
	I0924 00:01:08.215627   26218 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.71 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 00:01:08.215925   26218 config.go:182] Loaded profile config "ha-959539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 00:01:08.218104   26218 out.go:177] * Verifying Kubernetes components...
	I0924 00:01:08.219304   26218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 00:01:08.515326   26218 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 00:01:08.536625   26218 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 00:01:08.536894   26218 kapi.go:59] client config for ha-959539: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/client.crt", KeyFile:"/home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/client.key", CAFile:"/home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0924 00:01:08.536951   26218 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.231:8443
	I0924 00:01:08.537167   26218 node_ready.go:35] waiting up to 6m0s for node "ha-959539-m02" to be "Ready" ...
	I0924 00:01:08.537285   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:08.537301   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:08.537312   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:08.537318   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:08.545839   26218 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0924 00:01:09.037697   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:09.037724   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:09.037735   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:09.037744   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:09.045511   26218 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0924 00:01:09.538147   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:09.538175   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:09.538188   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:09.538195   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:09.545313   26218 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0924 00:01:10.038238   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:10.038262   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:10.038270   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:10.038274   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:10.041715   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:10.538175   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:10.538205   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:10.538219   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:10.538224   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:10.541872   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:10.542370   26218 node_ready.go:53] node "ha-959539-m02" has status "Ready":"False"
	I0924 00:01:11.037630   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:11.037679   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:11.037691   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:11.037696   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:11.041245   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:11.538259   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:11.538294   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:11.538302   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:11.538307   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:11.541611   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:12.038188   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:12.038209   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:12.038216   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:12.038221   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:12.041674   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:12.537618   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:12.537637   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:12.537645   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:12.537655   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:12.541319   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:13.037995   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:13.038016   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:13.038025   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:13.038028   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:13.041345   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:13.042019   26218 node_ready.go:53] node "ha-959539-m02" has status "Ready":"False"
	I0924 00:01:13.537769   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:13.537794   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:13.537805   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:13.537811   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:13.541685   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:14.037855   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:14.037878   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:14.037887   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:14.037891   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:14.288753   26218 round_trippers.go:574] Response Status: 200 OK in 250 milliseconds
	I0924 00:01:14.538102   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:14.538126   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:14.538137   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:14.538145   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:14.541469   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:15.037484   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:15.037516   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:15.037537   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:15.037541   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:15.040833   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:15.537646   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:15.537676   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:15.537694   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:15.537700   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:15.541088   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:15.541719   26218 node_ready.go:53] node "ha-959539-m02" has status "Ready":"False"
	I0924 00:01:16.037867   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:16.037898   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:16.037910   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:16.037916   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:16.041934   26218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 00:01:16.537983   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:16.538008   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:16.538018   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:16.538026   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:16.542888   26218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 00:01:17.037795   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:17.037815   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:17.037823   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:17.037826   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:17.040833   26218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 00:01:17.537691   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:17.537714   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:17.537721   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:17.537727   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:17.540858   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:18.037970   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:18.037995   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:18.038031   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:18.038036   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:18.041329   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:18.042104   26218 node_ready.go:53] node "ha-959539-m02" has status "Ready":"False"
	I0924 00:01:18.537909   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:18.537934   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:18.537947   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:18.537953   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:18.541524   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:19.037353   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:19.037406   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:19.037417   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:19.037421   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:19.040693   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:19.537691   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:19.537713   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:19.537721   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:19.537725   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:19.541362   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:20.038258   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:20.038281   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:20.038289   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:20.038293   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:20.041505   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:20.042205   26218 node_ready.go:53] node "ha-959539-m02" has status "Ready":"False"
	I0924 00:01:20.538173   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:20.538196   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:20.538204   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:20.538208   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:20.541444   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:21.038308   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:21.038332   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:21.038340   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:21.038345   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:21.041591   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:21.537466   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:21.537490   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:21.537498   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:21.537507   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:21.541243   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:22.037776   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:22.037798   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:22.037806   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:22.037809   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:22.041584   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:22.537387   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:22.537410   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:22.537419   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:22.537423   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:22.540436   26218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 00:01:22.540915   26218 node_ready.go:53] node "ha-959539-m02" has status "Ready":"False"
	I0924 00:01:23.038376   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:23.038396   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:23.038404   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:23.038408   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:23.042386   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:23.537841   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:23.537863   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:23.537871   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:23.537876   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:23.540735   26218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 00:01:24.037766   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:24.037791   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:24.037800   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:24.037805   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:24.041574   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:24.537636   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:24.537662   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:24.537674   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:24.537679   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:24.540714   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:24.541302   26218 node_ready.go:53] node "ha-959539-m02" has status "Ready":"False"
	I0924 00:01:25.037447   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:25.037470   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:25.037487   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:25.037491   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:25.040959   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:25.538316   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:25.538358   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:25.538366   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:25.538370   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:25.542089   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:26.037942   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:26.037965   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:26.037972   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:26.037977   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:26.041187   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:26.538316   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:26.538337   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:26.538344   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:26.538347   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:26.541682   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:26.542279   26218 node_ready.go:53] node "ha-959539-m02" has status "Ready":"False"
	I0924 00:01:27.037486   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:27.037511   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:27.037519   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:27.037523   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:27.040661   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:27.041287   26218 node_ready.go:49] node "ha-959539-m02" has status "Ready":"True"
	I0924 00:01:27.041311   26218 node_ready.go:38] duration metric: took 18.504110454s for node "ha-959539-m02" to be "Ready" ...
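The ~500ms polling loop above (repeated GETs of /api/v1/nodes/ha-959539-m02 until "Ready":"True") is the node-readiness wait. A minimal illustrative sketch of the same check using a standard client-go clientset; this is not minikube's node_ready.go code, and the 500ms interval is taken from the cadence of the log:

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls the node until its Ready condition reports True.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
        for {
            n, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range n.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(500 * time.Millisecond):
            }
        }
    }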
	I0924 00:01:27.041320   26218 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 00:01:27.041412   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods
	I0924 00:01:27.041422   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:27.041429   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:27.041433   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:27.045587   26218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 00:01:27.053524   26218 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nkbzw" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:27.053610   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-nkbzw
	I0924 00:01:27.053618   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:27.053626   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:27.053630   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:27.056737   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:27.057414   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:01:27.057431   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:27.057440   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:27.057448   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:27.059974   26218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 00:01:27.060671   26218 pod_ready.go:93] pod "coredns-7c65d6cfc9-nkbzw" in "kube-system" namespace has status "Ready":"True"
	I0924 00:01:27.060693   26218 pod_ready.go:82] duration metric: took 7.143278ms for pod "coredns-7c65d6cfc9-nkbzw" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:27.060705   26218 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ss8lg" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:27.060770   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ss8lg
	I0924 00:01:27.060779   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:27.060786   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:27.060789   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:27.063296   26218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 00:01:27.064025   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:01:27.064042   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:27.064052   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:27.064057   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:27.066509   26218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 00:01:27.067043   26218 pod_ready.go:93] pod "coredns-7c65d6cfc9-ss8lg" in "kube-system" namespace has status "Ready":"True"
	I0924 00:01:27.067072   26218 pod_ready.go:82] duration metric: took 6.358417ms for pod "coredns-7c65d6cfc9-ss8lg" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:27.067085   26218 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-959539" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:27.067169   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/etcd-ha-959539
	I0924 00:01:27.067180   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:27.067191   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:27.067197   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:27.069632   26218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 00:01:27.070349   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:01:27.070365   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:27.070372   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:27.070376   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:27.072726   26218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 00:01:27.073202   26218 pod_ready.go:93] pod "etcd-ha-959539" in "kube-system" namespace has status "Ready":"True"
	I0924 00:01:27.073221   26218 pod_ready.go:82] duration metric: took 6.128232ms for pod "etcd-ha-959539" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:27.073233   26218 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-959539-m02" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:27.073304   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/etcd-ha-959539-m02
	I0924 00:01:27.073314   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:27.073325   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:27.073334   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:27.075606   26218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 00:01:27.076170   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:27.076186   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:27.076196   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:27.076203   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:27.078974   26218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 00:01:27.079404   26218 pod_ready.go:93] pod "etcd-ha-959539-m02" in "kube-system" namespace has status "Ready":"True"
	I0924 00:01:27.079423   26218 pod_ready.go:82] duration metric: took 6.178632ms for pod "etcd-ha-959539-m02" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:27.079441   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-959539" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:27.237846   26218 request.go:632] Waited for 158.344773ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-959539
	I0924 00:01:27.237906   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-959539
	I0924 00:01:27.237912   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:27.237919   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:27.237923   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:27.241325   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
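The "Waited for ... due to client-side throttling, not priority and fairness" messages that start here come from client-go's client-side token-bucket rate limiter, not from the API server. A short sketch of where those limits are configured; the QPS/Burst values below are illustrative and are not the values minikube uses:

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // newClient builds a clientset with a raised client-side rate limit.
    // client-go defaults to QPS=5 and Burst=10, which is what produces the
    // throttling waits logged above when many GETs are issued back to back.
    func newClient(kubeconfig string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 50
        cfg.Burst = 100
        return kubernetes.NewForConfig(cfg)
    }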
	I0924 00:01:27.438393   26218 request.go:632] Waited for 196.447833ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:01:27.438479   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:01:27.438489   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:27.438501   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:27.438509   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:27.447385   26218 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0924 00:01:27.447843   26218 pod_ready.go:93] pod "kube-apiserver-ha-959539" in "kube-system" namespace has status "Ready":"True"
	I0924 00:01:27.447861   26218 pod_ready.go:82] duration metric: took 368.411985ms for pod "kube-apiserver-ha-959539" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:27.447873   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-959539-m02" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:27.638213   26218 request.go:632] Waited for 190.264015ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-959539-m02
	I0924 00:01:27.638314   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-959539-m02
	I0924 00:01:27.638323   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:27.638331   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:27.638335   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:27.641724   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:27.837671   26218 request.go:632] Waited for 195.307183ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:27.837734   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:27.837741   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:27.837750   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:27.837755   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:27.841548   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:27.842107   26218 pod_ready.go:93] pod "kube-apiserver-ha-959539-m02" in "kube-system" namespace has status "Ready":"True"
	I0924 00:01:27.842125   26218 pod_ready.go:82] duration metric: took 394.244431ms for pod "kube-apiserver-ha-959539-m02" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:27.842138   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-959539" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:28.038308   26218 request.go:632] Waited for 196.100963ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-959539
	I0924 00:01:28.038387   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-959539
	I0924 00:01:28.038399   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:28.038408   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:28.038413   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:28.041906   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:28.238014   26218 request.go:632] Waited for 195.403449ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:01:28.238083   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:01:28.238090   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:28.238099   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:28.238104   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:28.241379   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:28.241947   26218 pod_ready.go:93] pod "kube-controller-manager-ha-959539" in "kube-system" namespace has status "Ready":"True"
	I0924 00:01:28.241968   26218 pod_ready.go:82] duration metric: took 399.822644ms for pod "kube-controller-manager-ha-959539" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:28.241981   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-959539-m02" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:28.438107   26218 request.go:632] Waited for 196.054162ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-959539-m02
	I0924 00:01:28.438177   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-959539-m02
	I0924 00:01:28.438183   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:28.438190   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:28.438194   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:28.441695   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:28.637747   26218 request.go:632] Waited for 195.402574ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:28.637812   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:28.637820   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:28.637829   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:28.637836   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:28.641728   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:28.642165   26218 pod_ready.go:93] pod "kube-controller-manager-ha-959539-m02" in "kube-system" namespace has status "Ready":"True"
	I0924 00:01:28.642185   26218 pod_ready.go:82] duration metric: took 400.196003ms for pod "kube-controller-manager-ha-959539-m02" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:28.642198   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2hlqx" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:28.838364   26218 request.go:632] Waited for 196.098536ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2hlqx
	I0924 00:01:28.838423   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2hlqx
	I0924 00:01:28.838429   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:28.838440   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:28.838445   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:28.842064   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:29.038288   26218 request.go:632] Waited for 195.408876ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:29.038362   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:29.038367   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:29.038375   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:29.038380   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:29.041612   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:29.042184   26218 pod_ready.go:93] pod "kube-proxy-2hlqx" in "kube-system" namespace has status "Ready":"True"
	I0924 00:01:29.042207   26218 pod_ready.go:82] duration metric: took 400.003061ms for pod "kube-proxy-2hlqx" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:29.042217   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qzklc" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:29.238379   26218 request.go:632] Waited for 196.098313ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qzklc
	I0924 00:01:29.238479   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qzklc
	I0924 00:01:29.238489   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:29.238500   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:29.238510   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:29.241789   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:29.437898   26218 request.go:632] Waited for 195.388277ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:01:29.437950   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:01:29.437962   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:29.437970   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:29.437982   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:29.441497   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:29.442152   26218 pod_ready.go:93] pod "kube-proxy-qzklc" in "kube-system" namespace has status "Ready":"True"
	I0924 00:01:29.442170   26218 pod_ready.go:82] duration metric: took 399.946814ms for pod "kube-proxy-qzklc" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:29.442179   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-959539" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:29.638206   26218 request.go:632] Waited for 195.95793ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-959539
	I0924 00:01:29.638276   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-959539
	I0924 00:01:29.638285   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:29.638295   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:29.638300   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:29.641784   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:29.837816   26218 request.go:632] Waited for 195.394257ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:01:29.837907   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:01:29.837916   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:29.837926   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:29.837932   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:29.841128   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:29.841709   26218 pod_ready.go:93] pod "kube-scheduler-ha-959539" in "kube-system" namespace has status "Ready":"True"
	I0924 00:01:29.841729   26218 pod_ready.go:82] duration metric: took 399.544232ms for pod "kube-scheduler-ha-959539" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:29.841739   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-959539-m02" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:30.037891   26218 request.go:632] Waited for 196.07048ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-959539-m02
	I0924 00:01:30.037962   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-959539-m02
	I0924 00:01:30.037970   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:30.037980   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:30.037987   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:30.041465   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:30.237753   26218 request.go:632] Waited for 195.552862ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:30.237806   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:30.237812   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:30.237819   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:30.237823   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:30.240960   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:30.241506   26218 pod_ready.go:93] pod "kube-scheduler-ha-959539-m02" in "kube-system" namespace has status "Ready":"True"
	I0924 00:01:30.241525   26218 pod_ready.go:82] duration metric: took 399.780224ms for pod "kube-scheduler-ha-959539-m02" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:30.241536   26218 pod_ready.go:39] duration metric: took 3.200205293s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
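Each pod_ready wait above pairs a GET of the pod with a GET of its node and then inspects the pod's Ready condition. A sketch of that condition check (illustrative only, not minikube's pod_ready.go; assumes corev1 is "k8s.io/api/core/v1"):

    // podReady reports whether the pod's Ready condition is True,
    // which is what the "has status \"Ready\":\"True\"" lines above record.
    func podReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }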
	I0924 00:01:30.241549   26218 api_server.go:52] waiting for apiserver process to appear ...
	I0924 00:01:30.241608   26218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 00:01:30.261278   26218 api_server.go:72] duration metric: took 22.045614649s to wait for apiserver process to appear ...
	I0924 00:01:30.261301   26218 api_server.go:88] waiting for apiserver healthz status ...
	I0924 00:01:30.261325   26218 api_server.go:253] Checking apiserver healthz at https://192.168.39.231:8443/healthz ...
	I0924 00:01:30.266130   26218 api_server.go:279] https://192.168.39.231:8443/healthz returned 200:
	ok
	I0924 00:01:30.266207   26218 round_trippers.go:463] GET https://192.168.39.231:8443/version
	I0924 00:01:30.266217   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:30.266227   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:30.266234   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:30.267131   26218 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0924 00:01:30.267273   26218 api_server.go:141] control plane version: v1.31.1
	I0924 00:01:30.267296   26218 api_server.go:131] duration metric: took 5.986583ms to wait for apiserver health ...
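The healthz step above is a plain HTTPS GET of /healthz that expects status 200 and the body "ok". A minimal sketch under the assumption that the *http.Client already carries the cluster's CA and client certificates; this is not minikube's api_server.go implementation:

    import (
        "io"
        "net/http"
        "strings"
    )

    // apiserverHealthy treats HTTP 200 with body "ok" as a healthy apiserver.
    func apiserverHealthy(client *http.Client, hostPort string) bool {
        resp, err := client.Get("https://" + hostPort + "/healthz")
        if err != nil {
            return false
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        return resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok"
    }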
	I0924 00:01:30.267305   26218 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 00:01:30.437651   26218 request.go:632] Waited for 170.278154ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods
	I0924 00:01:30.437728   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods
	I0924 00:01:30.437734   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:30.437752   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:30.437756   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:30.443228   26218 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0924 00:01:30.447360   26218 system_pods.go:59] 17 kube-system pods found
	I0924 00:01:30.447395   26218 system_pods.go:61] "coredns-7c65d6cfc9-nkbzw" [79bbcdf6-3ae9-4c2f-9d73-a990a069864f] Running
	I0924 00:01:30.447400   26218 system_pods.go:61] "coredns-7c65d6cfc9-ss8lg" [37bd392b-d364-4a64-8fa0-852bb245aedc] Running
	I0924 00:01:30.447404   26218 system_pods.go:61] "etcd-ha-959539" [ff55eab1-1a4f-4adf-85c4-1ed8fa3ad1ec] Running
	I0924 00:01:30.447407   26218 system_pods.go:61] "etcd-ha-959539-m02" [c2dcc425-5c60-4865-9b78-1f2352fd1729] Running
	I0924 00:01:30.447410   26218 system_pods.go:61] "kindnet-cbrj7" [ad74ea31-a1ca-4632-b960-45e6de0fc117] Running
	I0924 00:01:30.447413   26218 system_pods.go:61] "kindnet-qlqss" [365f0414-b74d-42a8-be37-b0c8e03291ac] Running
	I0924 00:01:30.447417   26218 system_pods.go:61] "kube-apiserver-ha-959539" [2e15b758-6534-4b13-be16-42a2fd437b69] Running
	I0924 00:01:30.447420   26218 system_pods.go:61] "kube-apiserver-ha-959539-m02" [0ea9778e-f241-4c0d-9ea7-7e87bd667e10] Running
	I0924 00:01:30.447422   26218 system_pods.go:61] "kube-controller-manager-ha-959539" [b7da7091-f063-4f1a-bd0b-9f7136cd64a0] Running
	I0924 00:01:30.447427   26218 system_pods.go:61] "kube-controller-manager-ha-959539-m02" [29421b14-f01c-42dc-8c7d-b80cb32b9b7c] Running
	I0924 00:01:30.447430   26218 system_pods.go:61] "kube-proxy-2hlqx" [c8e003fb-d3d0-425f-bc83-55122ed658ce] Running
	I0924 00:01:30.447433   26218 system_pods.go:61] "kube-proxy-qzklc" [19af917f-9661-4577-92ed-8fc44b573c64] Running
	I0924 00:01:30.447436   26218 system_pods.go:61] "kube-scheduler-ha-959539" [25a457b1-578e-4e53-8201-e99c001d80bd] Running
	I0924 00:01:30.447439   26218 system_pods.go:61] "kube-scheduler-ha-959539-m02" [716521cc-aa0c-4507-97e5-126dccc95359] Running
	I0924 00:01:30.447442   26218 system_pods.go:61] "kube-vip-ha-959539" [f80705df-80fe-48f0-a65c-b4e414523bdf] Running
	I0924 00:01:30.447445   26218 system_pods.go:61] "kube-vip-ha-959539-m02" [6d055131-a622-4398-8f2f-0146b867e8f8] Running
	I0924 00:01:30.447448   26218 system_pods.go:61] "storage-provisioner" [3b7e0f07-8db9-4473-b3d2-c245c19d655b] Running
	I0924 00:01:30.447453   26218 system_pods.go:74] duration metric: took 180.140131ms to wait for pod list to return data ...
	I0924 00:01:30.447461   26218 default_sa.go:34] waiting for default service account to be created ...
	I0924 00:01:30.637950   26218 request.go:632] Waited for 190.394034ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/default/serviceaccounts
	I0924 00:01:30.638006   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/default/serviceaccounts
	I0924 00:01:30.638012   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:30.638022   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:30.638028   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:30.642084   26218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 00:01:30.642345   26218 default_sa.go:45] found service account: "default"
	I0924 00:01:30.642362   26218 default_sa.go:55] duration metric: took 194.895557ms for default service account to be created ...
	I0924 00:01:30.642370   26218 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 00:01:30.838482   26218 request.go:632] Waited for 196.04318ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods
	I0924 00:01:30.838565   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods
	I0924 00:01:30.838573   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:30.838585   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:30.838597   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:30.842832   26218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 00:01:30.848939   26218 system_pods.go:86] 17 kube-system pods found
	I0924 00:01:30.848970   26218 system_pods.go:89] "coredns-7c65d6cfc9-nkbzw" [79bbcdf6-3ae9-4c2f-9d73-a990a069864f] Running
	I0924 00:01:30.848979   26218 system_pods.go:89] "coredns-7c65d6cfc9-ss8lg" [37bd392b-d364-4a64-8fa0-852bb245aedc] Running
	I0924 00:01:30.848983   26218 system_pods.go:89] "etcd-ha-959539" [ff55eab1-1a4f-4adf-85c4-1ed8fa3ad1ec] Running
	I0924 00:01:30.848988   26218 system_pods.go:89] "etcd-ha-959539-m02" [c2dcc425-5c60-4865-9b78-1f2352fd1729] Running
	I0924 00:01:30.848991   26218 system_pods.go:89] "kindnet-cbrj7" [ad74ea31-a1ca-4632-b960-45e6de0fc117] Running
	I0924 00:01:30.848995   26218 system_pods.go:89] "kindnet-qlqss" [365f0414-b74d-42a8-be37-b0c8e03291ac] Running
	I0924 00:01:30.848999   26218 system_pods.go:89] "kube-apiserver-ha-959539" [2e15b758-6534-4b13-be16-42a2fd437b69] Running
	I0924 00:01:30.849002   26218 system_pods.go:89] "kube-apiserver-ha-959539-m02" [0ea9778e-f241-4c0d-9ea7-7e87bd667e10] Running
	I0924 00:01:30.849006   26218 system_pods.go:89] "kube-controller-manager-ha-959539" [b7da7091-f063-4f1a-bd0b-9f7136cd64a0] Running
	I0924 00:01:30.849009   26218 system_pods.go:89] "kube-controller-manager-ha-959539-m02" [29421b14-f01c-42dc-8c7d-b80cb32b9b7c] Running
	I0924 00:01:30.849014   26218 system_pods.go:89] "kube-proxy-2hlqx" [c8e003fb-d3d0-425f-bc83-55122ed658ce] Running
	I0924 00:01:30.849019   26218 system_pods.go:89] "kube-proxy-qzklc" [19af917f-9661-4577-92ed-8fc44b573c64] Running
	I0924 00:01:30.849023   26218 system_pods.go:89] "kube-scheduler-ha-959539" [25a457b1-578e-4e53-8201-e99c001d80bd] Running
	I0924 00:01:30.849027   26218 system_pods.go:89] "kube-scheduler-ha-959539-m02" [716521cc-aa0c-4507-97e5-126dccc95359] Running
	I0924 00:01:30.849031   26218 system_pods.go:89] "kube-vip-ha-959539" [f80705df-80fe-48f0-a65c-b4e414523bdf] Running
	I0924 00:01:30.849034   26218 system_pods.go:89] "kube-vip-ha-959539-m02" [6d055131-a622-4398-8f2f-0146b867e8f8] Running
	I0924 00:01:30.849039   26218 system_pods.go:89] "storage-provisioner" [3b7e0f07-8db9-4473-b3d2-c245c19d655b] Running
	I0924 00:01:30.849049   26218 system_pods.go:126] duration metric: took 206.674401ms to wait for k8s-apps to be running ...
	I0924 00:01:30.849059   26218 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 00:01:30.849103   26218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 00:01:30.865711   26218 system_svc.go:56] duration metric: took 16.641461ms WaitForService to wait for kubelet
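The kubelet service check above relies on systemctl's exit status: "is-active --quiet" exits 0 only when the unit is running. A hedged sketch of the same idea run locally rather than over the SSH runner minikube uses:

    import "os/exec"

    // kubeletActive returns true when the kubelet unit is active (exit code 0).
    func kubeletActive() bool {
        return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
    }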
	I0924 00:01:30.865749   26218 kubeadm.go:582] duration metric: took 22.650087813s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 00:01:30.865771   26218 node_conditions.go:102] verifying NodePressure condition ...
	I0924 00:01:31.038193   26218 request.go:632] Waited for 172.328437ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes
	I0924 00:01:31.038258   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes
	I0924 00:01:31.038266   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:31.038277   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:31.038283   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:31.042103   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:31.042950   26218 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 00:01:31.042977   26218 node_conditions.go:123] node cpu capacity is 2
	I0924 00:01:31.042995   26218 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 00:01:31.042998   26218 node_conditions.go:123] node cpu capacity is 2
	I0924 00:01:31.043002   26218 node_conditions.go:105] duration metric: took 177.226673ms to run NodePressure ...
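The NodePressure step reads each node's reported capacity (ephemeral storage and CPU, as logged above). A small illustrative helper showing where those numbers live in the Node object (assumes corev1 is "k8s.io/api/core/v1"):

    import "fmt"

    // printCapacity lists the per-node capacity fields the check above reads.
    func printCapacity(nodes *corev1.NodeList) {
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
        }
    }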
	I0924 00:01:31.043015   26218 start.go:241] waiting for startup goroutines ...
	I0924 00:01:31.043037   26218 start.go:255] writing updated cluster config ...
	I0924 00:01:31.044981   26218 out.go:201] 
	I0924 00:01:31.046376   26218 config.go:182] Loaded profile config "ha-959539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 00:01:31.046461   26218 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/config.json ...
	I0924 00:01:31.048054   26218 out.go:177] * Starting "ha-959539-m03" control-plane node in "ha-959539" cluster
	I0924 00:01:31.049402   26218 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 00:01:31.049432   26218 cache.go:56] Caching tarball of preloaded images
	I0924 00:01:31.049548   26218 preload.go:172] Found /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0924 00:01:31.049578   26218 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0924 00:01:31.049684   26218 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/config.json ...
	I0924 00:01:31.049896   26218 start.go:360] acquireMachinesLock for ha-959539-m03: {Name:mkdd0eb053efc5a47fd01c8410d7f603dccb8d0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 00:01:31.049951   26218 start.go:364] duration metric: took 34.777µs to acquireMachinesLock for "ha-959539-m03"
	I0924 00:01:31.049975   26218 start.go:93] Provisioning new machine with config: &{Name:ha-959539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-959539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.71 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 00:01:31.050075   26218 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0924 00:01:31.051498   26218 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0924 00:01:31.051601   26218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:01:31.051641   26218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:01:31.066868   26218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36765
	I0924 00:01:31.067407   26218 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:01:31.067856   26218 main.go:141] libmachine: Using API Version  1
	I0924 00:01:31.067875   26218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:01:31.068226   26218 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:01:31.068427   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetMachineName
	I0924 00:01:31.068578   26218 main.go:141] libmachine: (ha-959539-m03) Calling .DriverName
	I0924 00:01:31.068733   26218 start.go:159] libmachine.API.Create for "ha-959539" (driver="kvm2")
	I0924 00:01:31.068760   26218 client.go:168] LocalClient.Create starting
	I0924 00:01:31.068788   26218 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem
	I0924 00:01:31.068825   26218 main.go:141] libmachine: Decoding PEM data...
	I0924 00:01:31.068839   26218 main.go:141] libmachine: Parsing certificate...
	I0924 00:01:31.068884   26218 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem
	I0924 00:01:31.068903   26218 main.go:141] libmachine: Decoding PEM data...
	I0924 00:01:31.068913   26218 main.go:141] libmachine: Parsing certificate...
	I0924 00:01:31.068925   26218 main.go:141] libmachine: Running pre-create checks...
	I0924 00:01:31.068932   26218 main.go:141] libmachine: (ha-959539-m03) Calling .PreCreateCheck
	I0924 00:01:31.069147   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetConfigRaw
	I0924 00:01:31.069509   26218 main.go:141] libmachine: Creating machine...
	I0924 00:01:31.069521   26218 main.go:141] libmachine: (ha-959539-m03) Calling .Create
	I0924 00:01:31.069666   26218 main.go:141] libmachine: (ha-959539-m03) Creating KVM machine...
	I0924 00:01:31.071131   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found existing default KVM network
	I0924 00:01:31.071307   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found existing private KVM network mk-ha-959539
	I0924 00:01:31.071526   26218 main.go:141] libmachine: (ha-959539-m03) Setting up store path in /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m03 ...
	I0924 00:01:31.071549   26218 main.go:141] libmachine: (ha-959539-m03) Building disk image from file:///home/jenkins/minikube-integration/19696-7623/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0924 00:01:31.071644   26218 main.go:141] libmachine: (ha-959539-m03) DBG | I0924 00:01:31.071506   26982 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19696-7623/.minikube
	I0924 00:01:31.071719   26218 main.go:141] libmachine: (ha-959539-m03) Downloading /home/jenkins/minikube-integration/19696-7623/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19696-7623/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0924 00:01:31.300380   26218 main.go:141] libmachine: (ha-959539-m03) DBG | I0924 00:01:31.300219   26982 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m03/id_rsa...
	I0924 00:01:31.604410   26218 main.go:141] libmachine: (ha-959539-m03) DBG | I0924 00:01:31.604272   26982 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m03/ha-959539-m03.rawdisk...
	I0924 00:01:31.604443   26218 main.go:141] libmachine: (ha-959539-m03) DBG | Writing magic tar header
	I0924 00:01:31.604464   26218 main.go:141] libmachine: (ha-959539-m03) DBG | Writing SSH key tar header
	I0924 00:01:31.604477   26218 main.go:141] libmachine: (ha-959539-m03) DBG | I0924 00:01:31.604403   26982 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m03 ...
	I0924 00:01:31.604563   26218 main.go:141] libmachine: (ha-959539-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m03
	I0924 00:01:31.604595   26218 main.go:141] libmachine: (ha-959539-m03) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m03 (perms=drwx------)
	I0924 00:01:31.604614   26218 main.go:141] libmachine: (ha-959539-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623/.minikube/machines
	I0924 00:01:31.604630   26218 main.go:141] libmachine: (ha-959539-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623/.minikube
	I0924 00:01:31.604641   26218 main.go:141] libmachine: (ha-959539-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623
	I0924 00:01:31.604654   26218 main.go:141] libmachine: (ha-959539-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0924 00:01:31.604668   26218 main.go:141] libmachine: (ha-959539-m03) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623/.minikube/machines (perms=drwxr-xr-x)
	I0924 00:01:31.604679   26218 main.go:141] libmachine: (ha-959539-m03) DBG | Checking permissions on dir: /home/jenkins
	I0924 00:01:31.604689   26218 main.go:141] libmachine: (ha-959539-m03) DBG | Checking permissions on dir: /home
	I0924 00:01:31.604701   26218 main.go:141] libmachine: (ha-959539-m03) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623/.minikube (perms=drwxr-xr-x)
	I0924 00:01:31.604718   26218 main.go:141] libmachine: (ha-959539-m03) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623 (perms=drwxrwxr-x)
	I0924 00:01:31.604730   26218 main.go:141] libmachine: (ha-959539-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0924 00:01:31.604746   26218 main.go:141] libmachine: (ha-959539-m03) DBG | Skipping /home - not owner
	I0924 00:01:31.604758   26218 main.go:141] libmachine: (ha-959539-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0924 00:01:31.604771   26218 main.go:141] libmachine: (ha-959539-m03) Creating domain...
	I0924 00:01:31.605736   26218 main.go:141] libmachine: (ha-959539-m03) define libvirt domain using xml: 
	I0924 00:01:31.605756   26218 main.go:141] libmachine: (ha-959539-m03) <domain type='kvm'>
	I0924 00:01:31.605766   26218 main.go:141] libmachine: (ha-959539-m03)   <name>ha-959539-m03</name>
	I0924 00:01:31.605777   26218 main.go:141] libmachine: (ha-959539-m03)   <memory unit='MiB'>2200</memory>
	I0924 00:01:31.605784   26218 main.go:141] libmachine: (ha-959539-m03)   <vcpu>2</vcpu>
	I0924 00:01:31.605794   26218 main.go:141] libmachine: (ha-959539-m03)   <features>
	I0924 00:01:31.605802   26218 main.go:141] libmachine: (ha-959539-m03)     <acpi/>
	I0924 00:01:31.605808   26218 main.go:141] libmachine: (ha-959539-m03)     <apic/>
	I0924 00:01:31.605816   26218 main.go:141] libmachine: (ha-959539-m03)     <pae/>
	I0924 00:01:31.605822   26218 main.go:141] libmachine: (ha-959539-m03)     
	I0924 00:01:31.605829   26218 main.go:141] libmachine: (ha-959539-m03)   </features>
	I0924 00:01:31.605840   26218 main.go:141] libmachine: (ha-959539-m03)   <cpu mode='host-passthrough'>
	I0924 00:01:31.605848   26218 main.go:141] libmachine: (ha-959539-m03)   
	I0924 00:01:31.605857   26218 main.go:141] libmachine: (ha-959539-m03)   </cpu>
	I0924 00:01:31.605887   26218 main.go:141] libmachine: (ha-959539-m03)   <os>
	I0924 00:01:31.605911   26218 main.go:141] libmachine: (ha-959539-m03)     <type>hvm</type>
	I0924 00:01:31.605921   26218 main.go:141] libmachine: (ha-959539-m03)     <boot dev='cdrom'/>
	I0924 00:01:31.605928   26218 main.go:141] libmachine: (ha-959539-m03)     <boot dev='hd'/>
	I0924 00:01:31.605940   26218 main.go:141] libmachine: (ha-959539-m03)     <bootmenu enable='no'/>
	I0924 00:01:31.605950   26218 main.go:141] libmachine: (ha-959539-m03)   </os>
	I0924 00:01:31.605957   26218 main.go:141] libmachine: (ha-959539-m03)   <devices>
	I0924 00:01:31.605968   26218 main.go:141] libmachine: (ha-959539-m03)     <disk type='file' device='cdrom'>
	I0924 00:01:31.605980   26218 main.go:141] libmachine: (ha-959539-m03)       <source file='/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m03/boot2docker.iso'/>
	I0924 00:01:31.606000   26218 main.go:141] libmachine: (ha-959539-m03)       <target dev='hdc' bus='scsi'/>
	I0924 00:01:31.606012   26218 main.go:141] libmachine: (ha-959539-m03)       <readonly/>
	I0924 00:01:31.606020   26218 main.go:141] libmachine: (ha-959539-m03)     </disk>
	I0924 00:01:31.606029   26218 main.go:141] libmachine: (ha-959539-m03)     <disk type='file' device='disk'>
	I0924 00:01:31.606038   26218 main.go:141] libmachine: (ha-959539-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0924 00:01:31.606049   26218 main.go:141] libmachine: (ha-959539-m03)       <source file='/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m03/ha-959539-m03.rawdisk'/>
	I0924 00:01:31.606056   26218 main.go:141] libmachine: (ha-959539-m03)       <target dev='hda' bus='virtio'/>
	I0924 00:01:31.606063   26218 main.go:141] libmachine: (ha-959539-m03)     </disk>
	I0924 00:01:31.606074   26218 main.go:141] libmachine: (ha-959539-m03)     <interface type='network'>
	I0924 00:01:31.606086   26218 main.go:141] libmachine: (ha-959539-m03)       <source network='mk-ha-959539'/>
	I0924 00:01:31.606092   26218 main.go:141] libmachine: (ha-959539-m03)       <model type='virtio'/>
	I0924 00:01:31.606103   26218 main.go:141] libmachine: (ha-959539-m03)     </interface>
	I0924 00:01:31.606118   26218 main.go:141] libmachine: (ha-959539-m03)     <interface type='network'>
	I0924 00:01:31.606130   26218 main.go:141] libmachine: (ha-959539-m03)       <source network='default'/>
	I0924 00:01:31.606140   26218 main.go:141] libmachine: (ha-959539-m03)       <model type='virtio'/>
	I0924 00:01:31.606179   26218 main.go:141] libmachine: (ha-959539-m03)     </interface>
	I0924 00:01:31.606200   26218 main.go:141] libmachine: (ha-959539-m03)     <serial type='pty'>
	I0924 00:01:31.606212   26218 main.go:141] libmachine: (ha-959539-m03)       <target port='0'/>
	I0924 00:01:31.606222   26218 main.go:141] libmachine: (ha-959539-m03)     </serial>
	I0924 00:01:31.606234   26218 main.go:141] libmachine: (ha-959539-m03)     <console type='pty'>
	I0924 00:01:31.606244   26218 main.go:141] libmachine: (ha-959539-m03)       <target type='serial' port='0'/>
	I0924 00:01:31.606252   26218 main.go:141] libmachine: (ha-959539-m03)     </console>
	I0924 00:01:31.606259   26218 main.go:141] libmachine: (ha-959539-m03)     <rng model='virtio'>
	I0924 00:01:31.606268   26218 main.go:141] libmachine: (ha-959539-m03)       <backend model='random'>/dev/random</backend>
	I0924 00:01:31.606286   26218 main.go:141] libmachine: (ha-959539-m03)     </rng>
	I0924 00:01:31.606292   26218 main.go:141] libmachine: (ha-959539-m03)     
	I0924 00:01:31.606297   26218 main.go:141] libmachine: (ha-959539-m03)     
	I0924 00:01:31.606304   26218 main.go:141] libmachine: (ha-959539-m03)   </devices>
	I0924 00:01:31.606310   26218 main.go:141] libmachine: (ha-959539-m03) </domain>
	I0924 00:01:31.606319   26218 main.go:141] libmachine: (ha-959539-m03) 
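The XML dumped above is the libvirt domain definition for the new m03 VM. The kvm2 driver submits it through the libvirt Go bindings; purely as an illustration of the equivalent step, the same XML could be defined and started with virsh via os/exec (paths and the qemu:///system URI mirror the log, everything else is a sketch):

    import "os/exec"

    // defineAndStart registers the domain XML with libvirt and boots it,
    // analogous to the "define libvirt domain using xml" / "Creating domain" steps above.
    func defineAndStart(xmlPath, name string) error {
        if err := exec.Command("virsh", "--connect", "qemu:///system", "define", xmlPath).Run(); err != nil {
            return err
        }
        return exec.Command("virsh", "--connect", "qemu:///system", "start", name).Run()
    }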
	I0924 00:01:31.613294   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:e5:53:3a in network default
	I0924 00:01:31.613858   26218 main.go:141] libmachine: (ha-959539-m03) Ensuring networks are active...
	I0924 00:01:31.613884   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:31.614594   26218 main.go:141] libmachine: (ha-959539-m03) Ensuring network default is active
	I0924 00:01:31.614852   26218 main.go:141] libmachine: (ha-959539-m03) Ensuring network mk-ha-959539 is active
	I0924 00:01:31.615281   26218 main.go:141] libmachine: (ha-959539-m03) Getting domain xml...
	I0924 00:01:31.616154   26218 main.go:141] libmachine: (ha-959539-m03) Creating domain...
	I0924 00:01:32.869701   26218 main.go:141] libmachine: (ha-959539-m03) Waiting to get IP...
	I0924 00:01:32.870597   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:32.871006   26218 main.go:141] libmachine: (ha-959539-m03) DBG | unable to find current IP address of domain ha-959539-m03 in network mk-ha-959539
	I0924 00:01:32.871035   26218 main.go:141] libmachine: (ha-959539-m03) DBG | I0924 00:01:32.870993   26982 retry.go:31] will retry after 233.012319ms: waiting for machine to come up
	I0924 00:01:33.105550   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:33.105977   26218 main.go:141] libmachine: (ha-959539-m03) DBG | unable to find current IP address of domain ha-959539-m03 in network mk-ha-959539
	I0924 00:01:33.106051   26218 main.go:141] libmachine: (ha-959539-m03) DBG | I0924 00:01:33.105911   26982 retry.go:31] will retry after 379.213431ms: waiting for machine to come up
	I0924 00:01:33.486484   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:33.487004   26218 main.go:141] libmachine: (ha-959539-m03) DBG | unable to find current IP address of domain ha-959539-m03 in network mk-ha-959539
	I0924 00:01:33.487032   26218 main.go:141] libmachine: (ha-959539-m03) DBG | I0924 00:01:33.486952   26982 retry.go:31] will retry after 425.287824ms: waiting for machine to come up
	I0924 00:01:33.913409   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:33.913794   26218 main.go:141] libmachine: (ha-959539-m03) DBG | unable to find current IP address of domain ha-959539-m03 in network mk-ha-959539
	I0924 00:01:33.913822   26218 main.go:141] libmachine: (ha-959539-m03) DBG | I0924 00:01:33.913744   26982 retry.go:31] will retry after 517.327433ms: waiting for machine to come up
	I0924 00:01:34.432365   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:34.432967   26218 main.go:141] libmachine: (ha-959539-m03) DBG | unable to find current IP address of domain ha-959539-m03 in network mk-ha-959539
	I0924 00:01:34.432990   26218 main.go:141] libmachine: (ha-959539-m03) DBG | I0924 00:01:34.432933   26982 retry.go:31] will retry after 602.673221ms: waiting for machine to come up
	I0924 00:01:35.036831   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:35.037345   26218 main.go:141] libmachine: (ha-959539-m03) DBG | unable to find current IP address of domain ha-959539-m03 in network mk-ha-959539
	I0924 00:01:35.037375   26218 main.go:141] libmachine: (ha-959539-m03) DBG | I0924 00:01:35.037323   26982 retry.go:31] will retry after 797.600229ms: waiting for machine to come up
	I0924 00:01:35.836744   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:35.837147   26218 main.go:141] libmachine: (ha-959539-m03) DBG | unable to find current IP address of domain ha-959539-m03 in network mk-ha-959539
	I0924 00:01:35.837167   26218 main.go:141] libmachine: (ha-959539-m03) DBG | I0924 00:01:35.837118   26982 retry.go:31] will retry after 961.577188ms: waiting for machine to come up
	I0924 00:01:36.800289   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:36.800667   26218 main.go:141] libmachine: (ha-959539-m03) DBG | unable to find current IP address of domain ha-959539-m03 in network mk-ha-959539
	I0924 00:01:36.800730   26218 main.go:141] libmachine: (ha-959539-m03) DBG | I0924 00:01:36.800639   26982 retry.go:31] will retry after 936.999629ms: waiting for machine to come up
	I0924 00:01:37.740480   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:37.740978   26218 main.go:141] libmachine: (ha-959539-m03) DBG | unable to find current IP address of domain ha-959539-m03 in network mk-ha-959539
	I0924 00:01:37.741002   26218 main.go:141] libmachine: (ha-959539-m03) DBG | I0924 00:01:37.740949   26982 retry.go:31] will retry after 1.346163433s: waiting for machine to come up
	I0924 00:01:39.089423   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:39.089867   26218 main.go:141] libmachine: (ha-959539-m03) DBG | unable to find current IP address of domain ha-959539-m03 in network mk-ha-959539
	I0924 00:01:39.089892   26218 main.go:141] libmachine: (ha-959539-m03) DBG | I0924 00:01:39.089852   26982 retry.go:31] will retry after 1.874406909s: waiting for machine to come up
	I0924 00:01:40.965400   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:40.965872   26218 main.go:141] libmachine: (ha-959539-m03) DBG | unable to find current IP address of domain ha-959539-m03 in network mk-ha-959539
	I0924 00:01:40.965892   26218 main.go:141] libmachine: (ha-959539-m03) DBG | I0924 00:01:40.965827   26982 retry.go:31] will retry after 2.811212351s: waiting for machine to come up
	I0924 00:01:43.780398   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:43.780984   26218 main.go:141] libmachine: (ha-959539-m03) DBG | unable to find current IP address of domain ha-959539-m03 in network mk-ha-959539
	I0924 00:01:43.781006   26218 main.go:141] libmachine: (ha-959539-m03) DBG | I0924 00:01:43.780942   26982 retry.go:31] will retry after 2.831259444s: waiting for machine to come up
	I0924 00:01:46.613330   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:46.613716   26218 main.go:141] libmachine: (ha-959539-m03) DBG | unable to find current IP address of domain ha-959539-m03 in network mk-ha-959539
	I0924 00:01:46.613743   26218 main.go:141] libmachine: (ha-959539-m03) DBG | I0924 00:01:46.613670   26982 retry.go:31] will retry after 4.008768327s: waiting for machine to come up
	I0924 00:01:50.626829   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:50.627309   26218 main.go:141] libmachine: (ha-959539-m03) DBG | unable to find current IP address of domain ha-959539-m03 in network mk-ha-959539
	I0924 00:01:50.627329   26218 main.go:141] libmachine: (ha-959539-m03) DBG | I0924 00:01:50.627284   26982 retry.go:31] will retry after 5.442842747s: waiting for machine to come up
	I0924 00:01:56.073321   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:56.073934   26218 main.go:141] libmachine: (ha-959539-m03) Found IP for machine: 192.168.39.244
	I0924 00:01:56.073959   26218 main.go:141] libmachine: (ha-959539-m03) Reserving static IP address...
	I0924 00:01:56.073972   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has current primary IP address 192.168.39.244 and MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:56.074620   26218 main.go:141] libmachine: (ha-959539-m03) DBG | unable to find host DHCP lease matching {name: "ha-959539-m03", mac: "52:54:00:b3:b3:10", ip: "192.168.39.244"} in network mk-ha-959539
	I0924 00:01:56.148126   26218 main.go:141] libmachine: (ha-959539-m03) DBG | Getting to WaitForSSH function...
	I0924 00:01:56.148154   26218 main.go:141] libmachine: (ha-959539-m03) Reserved static IP address: 192.168.39.244
	I0924 00:01:56.148166   26218 main.go:141] libmachine: (ha-959539-m03) Waiting for SSH to be available...
	I0924 00:01:56.150613   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:56.150941   26218 main.go:141] libmachine: (ha-959539-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:b3:b3:10", ip: ""} in network mk-ha-959539
	I0924 00:01:56.150968   26218 main.go:141] libmachine: (ha-959539-m03) DBG | unable to find defined IP address of network mk-ha-959539 interface with MAC address 52:54:00:b3:b3:10
	I0924 00:01:56.151093   26218 main.go:141] libmachine: (ha-959539-m03) DBG | Using SSH client type: external
	I0924 00:01:56.151120   26218 main.go:141] libmachine: (ha-959539-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m03/id_rsa (-rw-------)
	I0924 00:01:56.151154   26218 main.go:141] libmachine: (ha-959539-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 00:01:56.151177   26218 main.go:141] libmachine: (ha-959539-m03) DBG | About to run SSH command:
	I0924 00:01:56.151208   26218 main.go:141] libmachine: (ha-959539-m03) DBG | exit 0
	I0924 00:01:56.154778   26218 main.go:141] libmachine: (ha-959539-m03) DBG | SSH cmd err, output: exit status 255: 
	I0924 00:01:56.154798   26218 main.go:141] libmachine: (ha-959539-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0924 00:01:56.154804   26218 main.go:141] libmachine: (ha-959539-m03) DBG | command : exit 0
	I0924 00:01:56.154809   26218 main.go:141] libmachine: (ha-959539-m03) DBG | err     : exit status 255
	I0924 00:01:56.154815   26218 main.go:141] libmachine: (ha-959539-m03) DBG | output  : 
	I0924 00:01:59.156489   26218 main.go:141] libmachine: (ha-959539-m03) DBG | Getting to WaitForSSH function...
	I0924 00:01:59.159051   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:59.159534   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:b3:10", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:01:45 +0000 UTC Type:0 Mac:52:54:00:b3:b3:10 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-959539-m03 Clientid:01:52:54:00:b3:b3:10}
	I0924 00:01:59.159562   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:59.159701   26218 main.go:141] libmachine: (ha-959539-m03) DBG | Using SSH client type: external
	I0924 00:01:59.159729   26218 main.go:141] libmachine: (ha-959539-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m03/id_rsa (-rw-------)
	I0924 00:01:59.159765   26218 main.go:141] libmachine: (ha-959539-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.244 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 00:01:59.159777   26218 main.go:141] libmachine: (ha-959539-m03) DBG | About to run SSH command:
	I0924 00:01:59.159792   26218 main.go:141] libmachine: (ha-959539-m03) DBG | exit 0
	I0924 00:01:59.281025   26218 main.go:141] libmachine: (ha-959539-m03) DBG | SSH cmd err, output: <nil>: 
	I0924 00:01:59.281279   26218 main.go:141] libmachine: (ha-959539-m03) KVM machine creation complete!
	I0924 00:01:59.281741   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetConfigRaw
	I0924 00:01:59.282322   26218 main.go:141] libmachine: (ha-959539-m03) Calling .DriverName
	I0924 00:01:59.282554   26218 main.go:141] libmachine: (ha-959539-m03) Calling .DriverName
	I0924 00:01:59.282757   26218 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0924 00:01:59.282778   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetState
	I0924 00:01:59.284086   26218 main.go:141] libmachine: Detecting operating system of created instance...
	I0924 00:01:59.284107   26218 main.go:141] libmachine: Waiting for SSH to be available...
	I0924 00:01:59.284112   26218 main.go:141] libmachine: Getting to WaitForSSH function...
	I0924 00:01:59.284118   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHHostname
	I0924 00:01:59.286743   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:59.287263   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:b3:10", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:01:45 +0000 UTC Type:0 Mac:52:54:00:b3:b3:10 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-959539-m03 Clientid:01:52:54:00:b3:b3:10}
	I0924 00:01:59.287293   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:59.287431   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHPort
	I0924 00:01:59.287597   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHKeyPath
	I0924 00:01:59.287746   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHKeyPath
	I0924 00:01:59.287899   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHUsername
	I0924 00:01:59.288060   26218 main.go:141] libmachine: Using SSH client type: native
	I0924 00:01:59.288359   26218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0924 00:01:59.288379   26218 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0924 00:01:59.383651   26218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 00:01:59.383678   26218 main.go:141] libmachine: Detecting the provisioner...
	I0924 00:01:59.383688   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHHostname
	I0924 00:01:59.386650   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:59.387045   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:b3:10", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:01:45 +0000 UTC Type:0 Mac:52:54:00:b3:b3:10 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-959539-m03 Clientid:01:52:54:00:b3:b3:10}
	I0924 00:01:59.387065   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:59.387209   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHPort
	I0924 00:01:59.387419   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHKeyPath
	I0924 00:01:59.387618   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHKeyPath
	I0924 00:01:59.387773   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHUsername
	I0924 00:01:59.387925   26218 main.go:141] libmachine: Using SSH client type: native
	I0924 00:01:59.388113   26218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0924 00:01:59.388127   26218 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0924 00:01:59.485025   26218 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0924 00:01:59.485108   26218 main.go:141] libmachine: found compatible host: buildroot
	I0924 00:01:59.485117   26218 main.go:141] libmachine: Provisioning with buildroot...
	I0924 00:01:59.485124   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetMachineName
	I0924 00:01:59.485390   26218 buildroot.go:166] provisioning hostname "ha-959539-m03"
	I0924 00:01:59.485417   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetMachineName
	I0924 00:01:59.485578   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHHostname
	I0924 00:01:59.487705   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:59.488135   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:b3:10", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:01:45 +0000 UTC Type:0 Mac:52:54:00:b3:b3:10 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-959539-m03 Clientid:01:52:54:00:b3:b3:10}
	I0924 00:01:59.488163   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:59.488390   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHPort
	I0924 00:01:59.488541   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHKeyPath
	I0924 00:01:59.488687   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHKeyPath
	I0924 00:01:59.488842   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHUsername
	I0924 00:01:59.489001   26218 main.go:141] libmachine: Using SSH client type: native
	I0924 00:01:59.489173   26218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0924 00:01:59.489184   26218 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-959539-m03 && echo "ha-959539-m03" | sudo tee /etc/hostname
	I0924 00:01:59.598289   26218 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-959539-m03
	
	I0924 00:01:59.598334   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHHostname
	I0924 00:01:59.601336   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:59.601720   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:b3:10", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:01:45 +0000 UTC Type:0 Mac:52:54:00:b3:b3:10 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-959539-m03 Clientid:01:52:54:00:b3:b3:10}
	I0924 00:01:59.601752   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:59.601887   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHPort
	I0924 00:01:59.602080   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHKeyPath
	I0924 00:01:59.602282   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHKeyPath
	I0924 00:01:59.602440   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHUsername
	I0924 00:01:59.602632   26218 main.go:141] libmachine: Using SSH client type: native
	I0924 00:01:59.602835   26218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0924 00:01:59.602851   26218 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-959539-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-959539-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-959539-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 00:01:59.709318   26218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 00:01:59.709354   26218 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19696-7623/.minikube CaCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19696-7623/.minikube}
	I0924 00:01:59.709368   26218 buildroot.go:174] setting up certificates
	I0924 00:01:59.709376   26218 provision.go:84] configureAuth start
	I0924 00:01:59.709384   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetMachineName
	I0924 00:01:59.709684   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetIP
	I0924 00:01:59.712295   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:59.712675   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:b3:10", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:01:45 +0000 UTC Type:0 Mac:52:54:00:b3:b3:10 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-959539-m03 Clientid:01:52:54:00:b3:b3:10}
	I0924 00:01:59.712707   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:59.712820   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHHostname
	I0924 00:01:59.715173   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:59.715598   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:b3:10", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:01:45 +0000 UTC Type:0 Mac:52:54:00:b3:b3:10 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-959539-m03 Clientid:01:52:54:00:b3:b3:10}
	I0924 00:01:59.715627   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:59.715766   26218 provision.go:143] copyHostCerts
	I0924 00:01:59.715804   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0924 00:01:59.715840   26218 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem, removing ...
	I0924 00:01:59.715850   26218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0924 00:01:59.715947   26218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem (1082 bytes)
	I0924 00:01:59.716026   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0924 00:01:59.716046   26218 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem, removing ...
	I0924 00:01:59.716054   26218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0924 00:01:59.716080   26218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem (1123 bytes)
	I0924 00:01:59.716129   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0924 00:01:59.716149   26218 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem, removing ...
	I0924 00:01:59.716156   26218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0924 00:01:59.716181   26218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem (1679 bytes)
	I0924 00:01:59.716234   26218 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem org=jenkins.ha-959539-m03 san=[127.0.0.1 192.168.39.244 ha-959539-m03 localhost minikube]
	I0924 00:02:00.004700   26218 provision.go:177] copyRemoteCerts
	I0924 00:02:00.004758   26218 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 00:02:00.004780   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHHostname
	I0924 00:02:00.008103   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:00.008547   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:b3:10", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:01:45 +0000 UTC Type:0 Mac:52:54:00:b3:b3:10 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-959539-m03 Clientid:01:52:54:00:b3:b3:10}
	I0924 00:02:00.008578   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:00.008786   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHPort
	I0924 00:02:00.008992   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHKeyPath
	I0924 00:02:00.009141   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHUsername
	I0924 00:02:00.009273   26218 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m03/id_rsa Username:docker}
	I0924 00:02:00.090471   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0924 00:02:00.090557   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0924 00:02:00.113842   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0924 00:02:00.113915   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0924 00:02:00.136379   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0924 00:02:00.136447   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0924 00:02:00.158911   26218 provision.go:87] duration metric: took 449.525192ms to configureAuth
	I0924 00:02:00.158938   26218 buildroot.go:189] setting minikube options for container-runtime
	I0924 00:02:00.159116   26218 config.go:182] Loaded profile config "ha-959539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 00:02:00.159181   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHHostname
	I0924 00:02:00.161958   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:00.162260   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:b3:10", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:01:45 +0000 UTC Type:0 Mac:52:54:00:b3:b3:10 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-959539-m03 Clientid:01:52:54:00:b3:b3:10}
	I0924 00:02:00.162300   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:00.162497   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHPort
	I0924 00:02:00.162693   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHKeyPath
	I0924 00:02:00.162991   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHKeyPath
	I0924 00:02:00.163119   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHUsername
	I0924 00:02:00.163316   26218 main.go:141] libmachine: Using SSH client type: native
	I0924 00:02:00.163504   26218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0924 00:02:00.163521   26218 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 00:02:00.384084   26218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 00:02:00.384116   26218 main.go:141] libmachine: Checking connection to Docker...
	I0924 00:02:00.384137   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetURL
	I0924 00:02:00.385753   26218 main.go:141] libmachine: (ha-959539-m03) DBG | Using libvirt version 6000000
	I0924 00:02:00.388406   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:00.388802   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:b3:10", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:01:45 +0000 UTC Type:0 Mac:52:54:00:b3:b3:10 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-959539-m03 Clientid:01:52:54:00:b3:b3:10}
	I0924 00:02:00.388830   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:00.388972   26218 main.go:141] libmachine: Docker is up and running!
	I0924 00:02:00.389000   26218 main.go:141] libmachine: Reticulating splines...
	I0924 00:02:00.389008   26218 client.go:171] duration metric: took 29.320240775s to LocalClient.Create
	I0924 00:02:00.389034   26218 start.go:167] duration metric: took 29.320301121s to libmachine.API.Create "ha-959539"
	I0924 00:02:00.389045   26218 start.go:293] postStartSetup for "ha-959539-m03" (driver="kvm2")
	I0924 00:02:00.389059   26218 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 00:02:00.389086   26218 main.go:141] libmachine: (ha-959539-m03) Calling .DriverName
	I0924 00:02:00.389316   26218 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 00:02:00.389337   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHHostname
	I0924 00:02:00.391543   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:00.391908   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:b3:10", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:01:45 +0000 UTC Type:0 Mac:52:54:00:b3:b3:10 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-959539-m03 Clientid:01:52:54:00:b3:b3:10}
	I0924 00:02:00.391935   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:00.392055   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHPort
	I0924 00:02:00.392242   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHKeyPath
	I0924 00:02:00.392417   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHUsername
	I0924 00:02:00.392594   26218 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m03/id_rsa Username:docker}
	I0924 00:02:00.471592   26218 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 00:02:00.475678   26218 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 00:02:00.475711   26218 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/addons for local assets ...
	I0924 00:02:00.475777   26218 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/files for local assets ...
	I0924 00:02:00.475847   26218 filesync.go:149] local asset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> 147932.pem in /etc/ssl/certs
	I0924 00:02:00.475857   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> /etc/ssl/certs/147932.pem
	I0924 00:02:00.475939   26218 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 00:02:00.485700   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /etc/ssl/certs/147932.pem (1708 bytes)
	I0924 00:02:00.510312   26218 start.go:296] duration metric: took 121.25155ms for postStartSetup
	I0924 00:02:00.510378   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetConfigRaw
	I0924 00:02:00.511011   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetIP
	I0924 00:02:00.513590   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:00.513900   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:b3:10", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:01:45 +0000 UTC Type:0 Mac:52:54:00:b3:b3:10 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-959539-m03 Clientid:01:52:54:00:b3:b3:10}
	I0924 00:02:00.513916   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:00.514236   26218 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/config.json ...
	I0924 00:02:00.514445   26218 start.go:128] duration metric: took 29.464359711s to createHost
	I0924 00:02:00.514478   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHHostname
	I0924 00:02:00.517098   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:00.517491   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:b3:10", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:01:45 +0000 UTC Type:0 Mac:52:54:00:b3:b3:10 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-959539-m03 Clientid:01:52:54:00:b3:b3:10}
	I0924 00:02:00.517528   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:00.517742   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHPort
	I0924 00:02:00.517933   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHKeyPath
	I0924 00:02:00.518100   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHKeyPath
	I0924 00:02:00.518211   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHUsername
	I0924 00:02:00.518412   26218 main.go:141] libmachine: Using SSH client type: native
	I0924 00:02:00.518622   26218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0924 00:02:00.518636   26218 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 00:02:00.621293   26218 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727136120.603612543
	
	I0924 00:02:00.621339   26218 fix.go:216] guest clock: 1727136120.603612543
	I0924 00:02:00.621351   26218 fix.go:229] Guest: 2024-09-24 00:02:00.603612543 +0000 UTC Remote: 2024-09-24 00:02:00.514464327 +0000 UTC m=+153.742409876 (delta=89.148216ms)
	I0924 00:02:00.621377   26218 fix.go:200] guest clock delta is within tolerance: 89.148216ms
	I0924 00:02:00.621387   26218 start.go:83] releasing machines lock for "ha-959539-m03", held for 29.571423777s
	I0924 00:02:00.621417   26218 main.go:141] libmachine: (ha-959539-m03) Calling .DriverName
	I0924 00:02:00.621673   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetIP
	I0924 00:02:00.624743   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:00.625239   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:b3:10", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:01:45 +0000 UTC Type:0 Mac:52:54:00:b3:b3:10 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-959539-m03 Clientid:01:52:54:00:b3:b3:10}
	I0924 00:02:00.625273   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:00.627860   26218 out.go:177] * Found network options:
	I0924 00:02:00.629759   26218 out.go:177]   - NO_PROXY=192.168.39.231,192.168.39.71
	W0924 00:02:00.631173   26218 proxy.go:119] fail to check proxy env: Error ip not in block
	W0924 00:02:00.631197   26218 proxy.go:119] fail to check proxy env: Error ip not in block
	I0924 00:02:00.631218   26218 main.go:141] libmachine: (ha-959539-m03) Calling .DriverName
	I0924 00:02:00.631908   26218 main.go:141] libmachine: (ha-959539-m03) Calling .DriverName
	I0924 00:02:00.632117   26218 main.go:141] libmachine: (ha-959539-m03) Calling .DriverName
	I0924 00:02:00.632197   26218 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 00:02:00.632234   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHHostname
	W0924 00:02:00.632352   26218 proxy.go:119] fail to check proxy env: Error ip not in block
	W0924 00:02:00.632378   26218 proxy.go:119] fail to check proxy env: Error ip not in block
	I0924 00:02:00.632447   26218 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 00:02:00.632470   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHHostname
	I0924 00:02:00.635213   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:00.635463   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:00.635655   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:b3:10", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:01:45 +0000 UTC Type:0 Mac:52:54:00:b3:b3:10 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-959539-m03 Clientid:01:52:54:00:b3:b3:10}
	I0924 00:02:00.635679   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:00.635817   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHPort
	I0924 00:02:00.635945   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:b3:10", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:01:45 +0000 UTC Type:0 Mac:52:54:00:b3:b3:10 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-959539-m03 Clientid:01:52:54:00:b3:b3:10}
	I0924 00:02:00.635972   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:00.635973   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHKeyPath
	I0924 00:02:00.636112   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHPort
	I0924 00:02:00.636177   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHUsername
	I0924 00:02:00.636243   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHKeyPath
	I0924 00:02:00.636375   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHUsername
	I0924 00:02:00.636384   26218 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m03/id_rsa Username:docker}
	I0924 00:02:00.636482   26218 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m03/id_rsa Username:docker}
	I0924 00:02:00.872674   26218 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 00:02:00.879244   26218 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 00:02:00.879303   26218 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 00:02:00.896008   26218 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 00:02:00.896041   26218 start.go:495] detecting cgroup driver to use...
	I0924 00:02:00.896119   26218 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 00:02:00.912126   26218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 00:02:00.928181   26218 docker.go:217] disabling cri-docker service (if available) ...
	I0924 00:02:00.928242   26218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 00:02:00.942640   26218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 00:02:00.957462   26218 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 00:02:01.095902   26218 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 00:02:01.244902   26218 docker.go:233] disabling docker service ...
	I0924 00:02:01.244972   26218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 00:02:01.260549   26218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 00:02:01.273803   26218 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 00:02:01.412634   26218 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 00:02:01.527287   26218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 00:02:01.541205   26218 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 00:02:01.559624   26218 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 00:02:01.559693   26218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:02:01.569832   26218 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 00:02:01.569892   26218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:02:01.580172   26218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:02:01.590239   26218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:02:01.600013   26218 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 00:02:01.610683   26218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:02:01.622051   26218 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:02:01.639348   26218 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:02:01.649043   26218 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 00:02:01.659584   26218 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 00:02:01.659633   26218 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 00:02:01.673533   26218 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 00:02:01.683341   26218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 00:02:01.799476   26218 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 00:02:01.894369   26218 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 00:02:01.894448   26218 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 00:02:01.898980   26218 start.go:563] Will wait 60s for crictl version
	I0924 00:02:01.899028   26218 ssh_runner.go:195] Run: which crictl
	I0924 00:02:01.902610   26218 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 00:02:01.942080   26218 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 00:02:01.942167   26218 ssh_runner.go:195] Run: crio --version
	I0924 00:02:01.973094   26218 ssh_runner.go:195] Run: crio --version
	I0924 00:02:02.006636   26218 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 00:02:02.008088   26218 out.go:177]   - env NO_PROXY=192.168.39.231
	I0924 00:02:02.009670   26218 out.go:177]   - env NO_PROXY=192.168.39.231,192.168.39.71
	I0924 00:02:02.011150   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetIP
	I0924 00:02:02.014303   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:02.014787   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:b3:10", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:01:45 +0000 UTC Type:0 Mac:52:54:00:b3:b3:10 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-959539-m03 Clientid:01:52:54:00:b3:b3:10}
	I0924 00:02:02.014816   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:02.015031   26218 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0924 00:02:02.019245   26218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 00:02:02.031619   26218 mustload.go:65] Loading cluster: ha-959539
	I0924 00:02:02.031867   26218 config.go:182] Loaded profile config "ha-959539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 00:02:02.032216   26218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:02:02.032262   26218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:02:02.047774   26218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41359
	I0924 00:02:02.048245   26218 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:02:02.048817   26218 main.go:141] libmachine: Using API Version  1
	I0924 00:02:02.048840   26218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:02:02.049178   26218 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:02:02.049404   26218 main.go:141] libmachine: (ha-959539) Calling .GetState
	I0924 00:02:02.051028   26218 host.go:66] Checking if "ha-959539" exists ...
	I0924 00:02:02.051346   26218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:02:02.051384   26218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:02:02.067177   26218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43029
	I0924 00:02:02.067626   26218 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:02:02.068120   26218 main.go:141] libmachine: Using API Version  1
	I0924 00:02:02.068147   26218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:02:02.068561   26218 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:02:02.068767   26218 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0924 00:02:02.069023   26218 certs.go:68] Setting up /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539 for IP: 192.168.39.244
	I0924 00:02:02.069035   26218 certs.go:194] generating shared ca certs ...
	I0924 00:02:02.069051   26218 certs.go:226] acquiring lock for ca certs: {Name:mk91de42c259e96c17f08dd58c70c00821bd0735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:02:02.069225   26218 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key
	I0924 00:02:02.069324   26218 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key
	I0924 00:02:02.069337   26218 certs.go:256] generating profile certs ...
	I0924 00:02:02.069432   26218 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/client.key
	I0924 00:02:02.069461   26218 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key.bedc055e
	I0924 00:02:02.069482   26218 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt.bedc055e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.231 192.168.39.71 192.168.39.244 192.168.39.254]
	I0924 00:02:02.200792   26218 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt.bedc055e ...
	I0924 00:02:02.200824   26218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt.bedc055e: {Name:mk0815e5ce107bafe277776d87408434b1fc0844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:02:02.200990   26218 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key.bedc055e ...
	I0924 00:02:02.201002   26218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key.bedc055e: {Name:mk2b87933cd0413159c4371c2a1af112dc0ae1a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:02:02.201076   26218 certs.go:381] copying /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt.bedc055e -> /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt
	I0924 00:02:02.201200   26218 certs.go:385] copying /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key.bedc055e -> /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key
	I0924 00:02:02.201326   26218 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.key
	I0924 00:02:02.201341   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0924 00:02:02.201362   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0924 00:02:02.201373   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0924 00:02:02.201386   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0924 00:02:02.201398   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0924 00:02:02.201412   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0924 00:02:02.201424   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0924 00:02:02.216460   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0924 00:02:02.216561   26218 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem (1338 bytes)
	W0924 00:02:02.216595   26218 certs.go:480] ignoring /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793_empty.pem, impossibly tiny 0 bytes
	I0924 00:02:02.216607   26218 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem (1675 bytes)
	I0924 00:02:02.216644   26218 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem (1082 bytes)
	I0924 00:02:02.216668   26218 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem (1123 bytes)
	I0924 00:02:02.216690   26218 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem (1679 bytes)
	I0924 00:02:02.216728   26218 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem (1708 bytes)
	I0924 00:02:02.216755   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem -> /usr/share/ca-certificates/14793.pem
	I0924 00:02:02.216774   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> /usr/share/ca-certificates/147932.pem
	I0924 00:02:02.216787   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0924 00:02:02.216818   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0924 00:02:02.220023   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:02:02.220522   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0924 00:02:02.220546   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:02:02.220674   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0924 00:02:02.220912   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0924 00:02:02.221115   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0924 00:02:02.221280   26218 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/id_rsa Username:docker}
	I0924 00:02:02.300781   26218 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0924 00:02:02.306919   26218 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0924 00:02:02.318700   26218 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0924 00:02:02.322783   26218 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0924 00:02:02.333789   26218 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0924 00:02:02.337697   26218 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0924 00:02:02.347574   26218 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0924 00:02:02.351556   26218 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0924 00:02:02.362821   26218 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0924 00:02:02.367302   26218 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0924 00:02:02.379143   26218 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0924 00:02:02.383718   26218 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0924 00:02:02.395777   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 00:02:02.422519   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 00:02:02.448222   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 00:02:02.473922   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0924 00:02:02.496975   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0924 00:02:02.519778   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 00:02:02.544839   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 00:02:02.567771   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0924 00:02:02.594776   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem --> /usr/share/ca-certificates/14793.pem (1338 bytes)
	I0924 00:02:02.622998   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /usr/share/ca-certificates/147932.pem (1708 bytes)
	I0924 00:02:02.646945   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 00:02:02.670094   26218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0924 00:02:02.688636   26218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0924 00:02:02.706041   26218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0924 00:02:02.723591   26218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0924 00:02:02.740289   26218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0924 00:02:02.757088   26218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0924 00:02:02.774564   26218 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0924 00:02:02.791730   26218 ssh_runner.go:195] Run: openssl version
	I0924 00:02:02.797731   26218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147932.pem && ln -fs /usr/share/ca-certificates/147932.pem /etc/ssl/certs/147932.pem"
	I0924 00:02:02.810316   26218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147932.pem
	I0924 00:02:02.815033   26218 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 23:55 /usr/share/ca-certificates/147932.pem
	I0924 00:02:02.815102   26218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147932.pem
	I0924 00:02:02.820784   26218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147932.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 00:02:02.831910   26218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 00:02:02.842883   26218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 00:02:02.847291   26218 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0924 00:02:02.847354   26218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 00:02:02.852958   26218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 00:02:02.863626   26218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14793.pem && ln -fs /usr/share/ca-certificates/14793.pem /etc/ssl/certs/14793.pem"
	I0924 00:02:02.874113   26218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14793.pem
	I0924 00:02:02.878537   26218 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 23:55 /usr/share/ca-certificates/14793.pem
	I0924 00:02:02.878606   26218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14793.pem
	I0924 00:02:02.884346   26218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14793.pem /etc/ssl/certs/51391683.0"
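
	The three blocks above repeat the same pattern for each CA bundle: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and link it into /etc/ssl/certs under <hash>.0 so OpenSSL-based clients can find it. Below is a minimal Go sketch of that pattern, shelling out to openssl exactly as the log does; installCA and the chosen input path are illustrative, not minikube's own helpers.

    // installCA links a CA certificate into /etc/ssl/certs under its
    // OpenSSL subject-hash name (<hash>.0), mirroring the log above.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func installCA(pemPath string) error {
        // `openssl x509 -hash -noout` prints the subject hash used as the link name.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", pemPath, err)
        }
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        _ = os.Remove(link) // same effect as the `ln -fs` in the log
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
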
	I0924 00:02:02.896403   26218 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 00:02:02.900556   26218 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0924 00:02:02.900623   26218 kubeadm.go:934] updating node {m03 192.168.39.244 8443 v1.31.1 crio true true} ...
	I0924 00:02:02.900726   26218 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-959539-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.244
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-959539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 00:02:02.900760   26218 kube-vip.go:115] generating kube-vip config ...
	I0924 00:02:02.900809   26218 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0924 00:02:02.915515   26218 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0924 00:02:02.915610   26218 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
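
	The manifest above is rendered by minikube's kube-vip config generator with the HA VIP (192.168.39.254), the API server port (8443) and, because control-plane load-balancing was auto-enabled, the lb_enable/lb_port settings. A cut-down Go sketch of that kind of templating follows; the template text and the kubeVipParams struct are illustrative stand-ins, only the values are taken from the log.

    package main

    import (
        "os"
        "text/template"
    )

    // Illustrative subset of a kube-vip static-pod manifest; the real
    // manifest (see log above) carries many more env vars.
    const kubeVipTmpl = `apiVersion: v1
    kind: Pod
    metadata:
      name: kube-vip
      namespace: kube-system
    spec:
      containers:
      - name: kube-vip
        image: ghcr.io/kube-vip/kube-vip:v0.8.0
        args: ["manager"]
        env:
        - name: address
          value: {{ .VIP }}
        - name: port
          value: "{{ .Port }}"
        - name: cp_enable
          value: "true"
    {{- if .EnableLB }}
        - name: lb_enable
          value: "true"
        - name: lb_port
          value: "{{ .Port }}"
    {{- end }}
      hostNetwork: true
    `

    type kubeVipParams struct {
        VIP      string
        Port     int
        EnableLB bool
    }

    func main() {
        t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
        // Values from the log: VIP 192.168.39.254, port 8443, LB auto-enabled.
        params := kubeVipParams{VIP: "192.168.39.254", Port: 8443, EnableLB: true}
        if err := t.Execute(os.Stdout, params); err != nil {
            panic(err)
        }
    }
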
	I0924 00:02:02.915676   26218 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 00:02:02.926273   26218 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0924 00:02:02.926342   26218 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0924 00:02:02.935889   26218 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0924 00:02:02.935892   26218 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0924 00:02:02.935939   26218 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0924 00:02:02.935957   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0924 00:02:02.935965   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0924 00:02:02.935958   26218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 00:02:02.936030   26218 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0924 00:02:02.936043   26218 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0924 00:02:02.951235   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0924 00:02:02.951306   26218 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0924 00:02:02.951337   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0924 00:02:02.951357   26218 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0924 00:02:02.951363   26218 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0924 00:02:02.951385   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0924 00:02:02.982567   26218 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0924 00:02:02.982613   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
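
	Each binary transfer above is gated on an existence check: `stat -c "%s %y"` on the target, and only a failed stat triggers the copy. A small Go sketch of that check-then-copy step using the plain ssh/scp CLIs rather than minikube's internal ssh_runner; the target address, user and local cache path are placeholders patterned on the log.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // copyIfMissing stats remotePath over SSH and copies localPath there only
    // when the stat fails, mirroring the existence checks in the log above.
    func copyIfMissing(target, localPath, remotePath string) error {
        check := exec.Command("ssh", target, fmt.Sprintf("stat -c '%%s %%y' %s", remotePath))
        if err := check.Run(); err == nil {
            return nil // already present, skip the transfer
        }
        scp := exec.Command("scp", localPath, target+":"+remotePath)
        scp.Stdout, scp.Stderr = os.Stdout, os.Stderr
        return scp.Run()
    }

    func main() {
        // Placeholder host/user and cache path; the log copies kubectl, kubeadm and kubelet.
        err := copyIfMissing("docker@192.168.39.244",
            os.ExpandEnv("$HOME/.minikube/cache/linux/amd64/v1.31.1/kubectl"),
            "/var/lib/minikube/binaries/v1.31.1/kubectl")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
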
	I0924 00:02:03.832975   26218 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0924 00:02:03.844045   26218 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0924 00:02:03.862702   26218 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 00:02:03.880776   26218 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0924 00:02:03.898729   26218 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0924 00:02:03.902596   26218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 00:02:03.914924   26218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 00:02:04.053085   26218 ssh_runner.go:195] Run: sudo systemctl start kubelet
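
	The /etc/hosts edit above keeps exactly one control-plane.minikube.internal entry: strip any line tagged with that name, then append the VIP. A Go sketch of the same rewrite, assuming a hypothetical pinHost helper and a scratch copy of the hosts file rather than the real one.

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // pinHost removes any line ending in "\t<name>" and appends "ip\tname",
    // the same effect as the grep -v / echo pipeline in the log above.
    func pinHost(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // drop the stale entry
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        // "hosts.copy" is a placeholder scratch file, not /etc/hosts itself.
        if err := pinHost("hosts.copy", "192.168.39.254", "control-plane.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
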
	I0924 00:02:04.070074   26218 host.go:66] Checking if "ha-959539" exists ...
	I0924 00:02:04.070579   26218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:02:04.070643   26218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:02:04.087474   26218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40559
	I0924 00:02:04.087999   26218 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:02:04.088599   26218 main.go:141] libmachine: Using API Version  1
	I0924 00:02:04.088620   26218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:02:04.089029   26218 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:02:04.089257   26218 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0924 00:02:04.089416   26218 start.go:317] joinCluster: &{Name:ha-959539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-959539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.71 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.244 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 00:02:04.089542   26218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0924 00:02:04.089559   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0924 00:02:04.092876   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:02:04.093495   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0924 00:02:04.093522   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:02:04.093697   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0924 00:02:04.093959   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0924 00:02:04.094120   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0924 00:02:04.094269   26218 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/id_rsa Username:docker}
	I0924 00:02:04.268135   26218 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.244 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 00:02:04.268198   26218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b4ctl0.w5qwixeo1tvb3095 --discovery-token-ca-cert-hash sha256:2d4dd9f37d73db6513005d0e16c5a35989d3b102eed8cac0bf67b360d3396aa8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-959539-m03 --control-plane --apiserver-advertise-address=192.168.39.244 --apiserver-bind-port=8443"
	I0924 00:02:27.863528   26218 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b4ctl0.w5qwixeo1tvb3095 --discovery-token-ca-cert-hash sha256:2d4dd9f37d73db6513005d0e16c5a35989d3b102eed8cac0bf67b360d3396aa8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-959539-m03 --control-plane --apiserver-advertise-address=192.168.39.244 --apiserver-bind-port=8443": (23.595296768s)
	I0924 00:02:27.863572   26218 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0924 00:02:28.487060   26218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-959539-m03 minikube.k8s.io/updated_at=2024_09_24T00_02_28_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c minikube.k8s.io/name=ha-959539 minikube.k8s.io/primary=false
	I0924 00:02:28.628940   26218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-959539-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0924 00:02:28.748648   26218 start.go:319] duration metric: took 24.659226615s to joinCluster
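
	The join itself is two commands: ask the existing control plane for a fresh `kubeadm token create --print-join-command`, then run the printed command on the new machine with the control-plane flags recorded above. A hedged Go sketch of driving that over SSH; the run helper is hypothetical, while the hosts, node name, advertise address and flags are the ones shown in the log.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // run executes cmd on target over SSH and returns its trimmed combined output.
    func run(target, cmd string) (string, error) {
        out, err := exec.Command("ssh", target, cmd).CombinedOutput()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        // Step 1: print a join command with a new non-expiring token on the primary.
        // Assumes kubeadm emits the join command as a single line.
        join, err := run("docker@192.168.39.231",
            "sudo kubeadm token create --print-join-command --ttl=0")
        if err != nil {
            fmt.Fprintln(os.Stderr, join, err)
            os.Exit(1)
        }
        // Step 2: run it on the new node with the control-plane flags from the log.
        full := join + " --control-plane" +
            " --apiserver-advertise-address=192.168.39.244" +
            " --apiserver-bind-port=8443" +
            " --node-name=ha-959539-m03" +
            " --cri-socket unix:///var/run/crio/crio.sock" +
            " --ignore-preflight-errors=all"
        if out, err := run("docker@192.168.39.244", "sudo "+full); err != nil {
            fmt.Fprintln(os.Stderr, out, err)
            os.Exit(1)
        }
    }
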
	I0924 00:02:28.748728   26218 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.244 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 00:02:28.749108   26218 config.go:182] Loaded profile config "ha-959539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 00:02:28.750104   26218 out.go:177] * Verifying Kubernetes components...
	I0924 00:02:28.751646   26218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 00:02:29.019967   26218 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 00:02:29.061460   26218 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 00:02:29.061682   26218 kapi.go:59] client config for ha-959539: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/client.crt", KeyFile:"/home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/client.key", CAFile:"/home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0924 00:02:29.061736   26218 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.231:8443
	I0924 00:02:29.061979   26218 node_ready.go:35] waiting up to 6m0s for node "ha-959539-m03" to be "Ready" ...
	I0924 00:02:29.062051   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:29.062060   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:29.062068   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:29.062074   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:29.066072   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:29.562533   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:29.562554   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:29.562560   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:29.562570   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:29.567739   26218 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0924 00:02:30.062212   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:30.062237   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:30.062245   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:30.062250   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:30.065711   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:30.562367   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:30.562402   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:30.562414   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:30.562419   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:30.565510   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:31.062523   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:31.062552   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:31.062564   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:31.062571   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:31.066499   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:31.067388   26218 node_ready.go:53] node "ha-959539-m03" has status "Ready":"False"
	I0924 00:02:31.562731   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:31.562756   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:31.562771   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:31.562776   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:31.566512   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:32.062420   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:32.062441   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:32.062449   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:32.062454   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:32.065609   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:32.563014   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:32.563034   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:32.563042   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:32.563047   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:32.566443   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:33.062951   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:33.062980   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:33.062991   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:33.062996   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:33.067213   26218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 00:02:33.067831   26218 node_ready.go:53] node "ha-959539-m03" has status "Ready":"False"
	I0924 00:02:33.562180   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:33.562210   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:33.562222   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:33.562229   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:33.565119   26218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 00:02:34.062360   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:34.062379   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:34.062387   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:34.062394   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:34.065867   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:34.562470   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:34.562494   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:34.562503   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:34.562508   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:34.566075   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:35.063097   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:35.063122   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:35.063133   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:35.063139   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:35.067536   26218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 00:02:35.068167   26218 node_ready.go:53] node "ha-959539-m03" has status "Ready":"False"
	I0924 00:02:35.563171   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:35.563192   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:35.563200   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:35.563204   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:35.566347   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:36.062231   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:36.062252   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:36.062259   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:36.062263   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:36.068635   26218 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0924 00:02:36.562318   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:36.562352   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:36.562360   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:36.562366   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:36.565945   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:37.062441   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:37.062465   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:37.062473   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:37.062477   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:37.065788   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:37.562611   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:37.562633   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:37.562641   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:37.562646   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:37.565850   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:37.566272   26218 node_ready.go:53] node "ha-959539-m03" has status "Ready":"False"
	I0924 00:02:38.062661   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:38.062683   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:38.062691   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:38.062696   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:38.066483   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:38.562638   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:38.562660   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:38.562667   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:38.562671   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:38.566169   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:39.062729   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:39.062750   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:39.062759   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:39.062763   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:39.066557   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:39.562877   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:39.562899   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:39.562907   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:39.562912   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:39.566233   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:39.566763   26218 node_ready.go:53] node "ha-959539-m03" has status "Ready":"False"
	I0924 00:02:40.063206   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:40.063226   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:40.063234   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:40.063239   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:40.066817   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:40.562132   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:40.562155   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:40.562165   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:40.562173   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:40.565811   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:41.062663   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:41.062683   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:41.062692   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:41.062696   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:41.066042   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:41.563040   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:41.563066   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:41.563078   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:41.563084   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:41.566187   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:42.063050   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:42.063071   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:42.063079   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:42.063082   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:42.066449   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:42.067262   26218 node_ready.go:53] node "ha-959539-m03" has status "Ready":"False"
	I0924 00:02:42.563040   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:42.563066   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:42.563077   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:42.563082   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:42.566476   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:43.062431   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:43.062452   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:43.062458   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:43.062461   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:43.065607   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:43.563123   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:43.563144   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:43.563152   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:43.563155   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:43.566312   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:44.062448   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:44.062472   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:44.062480   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:44.062484   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:44.065777   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:44.562484   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:44.562506   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:44.562518   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:44.562527   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:44.565803   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:44.566407   26218 node_ready.go:53] node "ha-959539-m03" has status "Ready":"False"
	I0924 00:02:45.062747   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:45.062780   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:45.062787   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:45.062792   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:45.066101   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:45.562696   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:45.562717   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:45.562726   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:45.562732   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:45.566877   26218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 00:02:45.567306   26218 node_ready.go:49] node "ha-959539-m03" has status "Ready":"True"
	I0924 00:02:45.567324   26218 node_ready.go:38] duration metric: took 16.505330859s for node "ha-959539-m03" to be "Ready" ...
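
	The long run of GETs above is a readiness poll: fetch the Node object roughly every 500ms and stop once its NodeReady condition reports True, within the 6m0s budget. A compact client-go sketch of that loop; the kubeconfig location is a placeholder and only the node name and timings come from the log.

    package main

    import (
        "context"
        "fmt"
        "os"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the NodeReady condition is True.
    func nodeReady(n *corev1.Node) bool {
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG")) // placeholder kubeconfig
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(6 * time.Minute) // same budget as the log
        for time.Now().Before(deadline) {
            n, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-959539-m03", metav1.GetOptions{})
            if err == nil && nodeReady(n) {
                fmt.Println("node is Ready")
                return
            }
            time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
        }
        fmt.Fprintln(os.Stderr, "timed out waiting for node Ready")
        os.Exit(1)
    }
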
	I0924 00:02:45.567334   26218 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 00:02:45.567399   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods
	I0924 00:02:45.567411   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:45.567421   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:45.567435   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:45.576236   26218 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0924 00:02:45.582315   26218 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nkbzw" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:45.582415   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-nkbzw
	I0924 00:02:45.582426   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:45.582437   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:45.582444   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:45.586563   26218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 00:02:45.587529   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:02:45.587551   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:45.587561   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:45.587566   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:45.590549   26218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 00:02:45.591073   26218 pod_ready.go:93] pod "coredns-7c65d6cfc9-nkbzw" in "kube-system" namespace has status "Ready":"True"
	I0924 00:02:45.591094   26218 pod_ready.go:82] duration metric: took 8.751789ms for pod "coredns-7c65d6cfc9-nkbzw" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:45.591106   26218 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ss8lg" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:45.591177   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ss8lg
	I0924 00:02:45.591186   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:45.591196   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:45.591204   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:45.594507   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:45.595092   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:02:45.595107   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:45.595115   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:45.595119   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:45.597906   26218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 00:02:45.598405   26218 pod_ready.go:93] pod "coredns-7c65d6cfc9-ss8lg" in "kube-system" namespace has status "Ready":"True"
	I0924 00:02:45.598421   26218 pod_ready.go:82] duration metric: took 7.307084ms for pod "coredns-7c65d6cfc9-ss8lg" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:45.598432   26218 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-959539" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:45.598497   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/etcd-ha-959539
	I0924 00:02:45.598508   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:45.598517   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:45.598534   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:45.601102   26218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 00:02:45.601629   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:02:45.601643   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:45.601652   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:45.601657   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:45.604411   26218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 00:02:45.604921   26218 pod_ready.go:93] pod "etcd-ha-959539" in "kube-system" namespace has status "Ready":"True"
	I0924 00:02:45.604936   26218 pod_ready.go:82] duration metric: took 6.498124ms for pod "etcd-ha-959539" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:45.604943   26218 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-959539-m02" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:45.604986   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/etcd-ha-959539-m02
	I0924 00:02:45.604994   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:45.605000   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:45.605003   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:45.607711   26218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 00:02:45.608182   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:02:45.608195   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:45.608202   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:45.608205   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:45.611102   26218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 00:02:45.611468   26218 pod_ready.go:93] pod "etcd-ha-959539-m02" in "kube-system" namespace has status "Ready":"True"
	I0924 00:02:45.611482   26218 pod_ready.go:82] duration metric: took 6.534228ms for pod "etcd-ha-959539-m02" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:45.611489   26218 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-959539-m03" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:45.762986   26218 request.go:632] Waited for 151.426917ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/etcd-ha-959539-m03
	I0924 00:02:45.763060   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/etcd-ha-959539-m03
	I0924 00:02:45.763072   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:45.763082   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:45.763093   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:45.768790   26218 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0924 00:02:45.963102   26218 request.go:632] Waited for 193.344337ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:45.963164   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:45.963169   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:45.963175   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:45.963178   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:45.966765   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:45.967332   26218 pod_ready.go:93] pod "etcd-ha-959539-m03" in "kube-system" namespace has status "Ready":"True"
	I0924 00:02:45.967348   26218 pod_ready.go:82] duration metric: took 355.853201ms for pod "etcd-ha-959539-m03" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:45.967370   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-959539" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:46.162735   26218 request.go:632] Waited for 195.29099ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-959539
	I0924 00:02:46.162798   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-959539
	I0924 00:02:46.162806   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:46.162816   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:46.162825   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:46.166290   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:46.363412   26218 request.go:632] Waited for 196.338649ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:02:46.363479   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:02:46.363488   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:46.363500   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:46.363522   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:46.368828   26218 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0924 00:02:46.369452   26218 pod_ready.go:93] pod "kube-apiserver-ha-959539" in "kube-system" namespace has status "Ready":"True"
	I0924 00:02:46.369475   26218 pod_ready.go:82] duration metric: took 402.09395ms for pod "kube-apiserver-ha-959539" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:46.369488   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-959539-m02" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:46.563510   26218 request.go:632] Waited for 193.954572ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-959539-m02
	I0924 00:02:46.563593   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-959539-m02
	I0924 00:02:46.563601   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:46.563612   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:46.563620   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:46.567229   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:46.763581   26218 request.go:632] Waited for 195.391711ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:02:46.763651   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:02:46.763658   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:46.763669   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:46.763676   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:46.766915   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:46.767439   26218 pod_ready.go:93] pod "kube-apiserver-ha-959539-m02" in "kube-system" namespace has status "Ready":"True"
	I0924 00:02:46.767461   26218 pod_ready.go:82] duration metric: took 397.964383ms for pod "kube-apiserver-ha-959539-m02" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:46.767475   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-959539-m03" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:46.963610   26218 request.go:632] Waited for 196.063114ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-959539-m03
	I0924 00:02:46.963694   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-959539-m03
	I0924 00:02:46.963703   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:46.963712   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:46.963719   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:46.967275   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:47.162752   26218 request.go:632] Waited for 194.876064ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:47.162830   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:47.162838   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:47.162844   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:47.162847   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:47.166156   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:47.166699   26218 pod_ready.go:93] pod "kube-apiserver-ha-959539-m03" in "kube-system" namespace has status "Ready":"True"
	I0924 00:02:47.166716   26218 pod_ready.go:82] duration metric: took 399.234813ms for pod "kube-apiserver-ha-959539-m03" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:47.166725   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-959539" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:47.362729   26218 request.go:632] Waited for 195.941337ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-959539
	I0924 00:02:47.362789   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-959539
	I0924 00:02:47.362795   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:47.362802   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:47.362806   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:47.365942   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:47.562904   26218 request.go:632] Waited for 196.303098ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:02:47.562966   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:02:47.562973   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:47.562982   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:47.562987   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:47.566192   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:47.566827   26218 pod_ready.go:93] pod "kube-controller-manager-ha-959539" in "kube-system" namespace has status "Ready":"True"
	I0924 00:02:47.566845   26218 pod_ready.go:82] duration metric: took 400.114045ms for pod "kube-controller-manager-ha-959539" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:47.566855   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-959539-m02" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:47.762958   26218 request.go:632] Waited for 196.048732ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-959539-m02
	I0924 00:02:47.763034   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-959539-m02
	I0924 00:02:47.763042   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:47.763049   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:47.763058   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:47.766336   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:47.963363   26218 request.go:632] Waited for 196.287822ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:02:47.963455   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:02:47.963462   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:47.963470   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:47.963474   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:47.967146   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:47.967827   26218 pod_ready.go:93] pod "kube-controller-manager-ha-959539-m02" in "kube-system" namespace has status "Ready":"True"
	I0924 00:02:47.967850   26218 pod_ready.go:82] duration metric: took 400.989142ms for pod "kube-controller-manager-ha-959539-m02" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:47.967860   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-959539-m03" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:48.162800   26218 request.go:632] Waited for 194.858732ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-959539-m03
	I0924 00:02:48.162862   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-959539-m03
	I0924 00:02:48.162869   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:48.162880   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:48.162886   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:48.166955   26218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 00:02:48.362915   26218 request.go:632] Waited for 195.291486ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:48.363004   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:48.363015   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:48.363023   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:48.363027   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:48.366536   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:48.367263   26218 pod_ready.go:93] pod "kube-controller-manager-ha-959539-m03" in "kube-system" namespace has status "Ready":"True"
	I0924 00:02:48.367282   26218 pod_ready.go:82] duration metric: took 399.415546ms for pod "kube-controller-manager-ha-959539-m03" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:48.367292   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2hlqx" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:48.563765   26218 request.go:632] Waited for 196.416841ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2hlqx
	I0924 00:02:48.563839   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2hlqx
	I0924 00:02:48.563844   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:48.563852   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:48.563858   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:48.567525   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:48.763756   26218 request.go:632] Waited for 195.286657ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:02:48.763808   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:02:48.763813   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:48.763823   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:48.763827   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:48.768008   26218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 00:02:48.768461   26218 pod_ready.go:93] pod "kube-proxy-2hlqx" in "kube-system" namespace has status "Ready":"True"
	I0924 00:02:48.768523   26218 pod_ready.go:82] duration metric: took 401.181266ms for pod "kube-proxy-2hlqx" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:48.768542   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-b82ch" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:48.963586   26218 request.go:632] Waited for 194.968745ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b82ch
	I0924 00:02:48.963672   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b82ch
	I0924 00:02:48.963682   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:48.963698   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:48.963706   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:48.967156   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:49.163098   26218 request.go:632] Waited for 195.427645ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:49.163160   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:49.163165   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:49.163172   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:49.163175   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:49.168664   26218 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0924 00:02:49.169191   26218 pod_ready.go:93] pod "kube-proxy-b82ch" in "kube-system" namespace has status "Ready":"True"
	I0924 00:02:49.169212   26218 pod_ready.go:82] duration metric: took 400.661599ms for pod "kube-proxy-b82ch" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:49.169224   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qzklc" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:49.363274   26218 request.go:632] Waited for 193.975466ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qzklc
	I0924 00:02:49.363332   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qzklc
	I0924 00:02:49.363337   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:49.363345   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:49.363348   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:49.367061   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:49.563180   26218 request.go:632] Waited for 195.372048ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:02:49.563241   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:02:49.563246   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:49.563253   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:49.563260   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:49.566761   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:49.567465   26218 pod_ready.go:93] pod "kube-proxy-qzklc" in "kube-system" namespace has status "Ready":"True"
	I0924 00:02:49.567481   26218 pod_ready.go:82] duration metric: took 398.249897ms for pod "kube-proxy-qzklc" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:49.567490   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-959539" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:49.763615   26218 request.go:632] Waited for 196.0486ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-959539
	I0924 00:02:49.763668   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-959539
	I0924 00:02:49.763673   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:49.763681   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:49.763685   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:49.767108   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:49.963188   26218 request.go:632] Waited for 195.362713ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:02:49.963255   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:02:49.963261   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:49.963268   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:49.963273   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:49.966872   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:49.967707   26218 pod_ready.go:93] pod "kube-scheduler-ha-959539" in "kube-system" namespace has status "Ready":"True"
	I0924 00:02:49.967726   26218 pod_ready.go:82] duration metric: took 400.230299ms for pod "kube-scheduler-ha-959539" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:49.967774   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-959539-m02" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:50.163358   26218 request.go:632] Waited for 195.519311ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-959539-m02
	I0924 00:02:50.163411   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-959539-m02
	I0924 00:02:50.163416   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:50.163424   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:50.163428   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:50.167399   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:50.363362   26218 request.go:632] Waited for 195.429658ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:02:50.363431   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:02:50.363438   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:50.363448   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:50.363453   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:50.366812   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:50.367292   26218 pod_ready.go:93] pod "kube-scheduler-ha-959539-m02" in "kube-system" namespace has status "Ready":"True"
	I0924 00:02:50.367315   26218 pod_ready.go:82] duration metric: took 399.528577ms for pod "kube-scheduler-ha-959539-m02" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:50.367328   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-959539-m03" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:50.563431   26218 request.go:632] Waited for 196.035117ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-959539-m03
	I0924 00:02:50.563517   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-959539-m03
	I0924 00:02:50.563525   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:50.563533   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:50.563536   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:50.567039   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:50.763077   26218 request.go:632] Waited for 195.355137ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:50.763142   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:50.763148   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:50.763155   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:50.763160   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:50.766779   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:50.767385   26218 pod_ready.go:93] pod "kube-scheduler-ha-959539-m03" in "kube-system" namespace has status "Ready":"True"
	I0924 00:02:50.767402   26218 pod_ready.go:82] duration metric: took 400.066903ms for pod "kube-scheduler-ha-959539-m03" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:50.767413   26218 pod_ready.go:39] duration metric: took 5.200066315s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 00:02:50.767425   26218 api_server.go:52] waiting for apiserver process to appear ...
	I0924 00:02:50.767482   26218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 00:02:50.783606   26218 api_server.go:72] duration metric: took 22.034845457s to wait for apiserver process to appear ...
	I0924 00:02:50.783631   26218 api_server.go:88] waiting for apiserver healthz status ...
	I0924 00:02:50.783650   26218 api_server.go:253] Checking apiserver healthz at https://192.168.39.231:8443/healthz ...
	I0924 00:02:50.788103   26218 api_server.go:279] https://192.168.39.231:8443/healthz returned 200:
	ok
	I0924 00:02:50.788220   26218 round_trippers.go:463] GET https://192.168.39.231:8443/version
	I0924 00:02:50.788231   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:50.788241   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:50.788247   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:50.789134   26218 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0924 00:02:50.789199   26218 api_server.go:141] control plane version: v1.31.1
	I0924 00:02:50.789217   26218 api_server.go:131] duration metric: took 5.578933ms to wait for apiserver health ...
	I0924 00:02:50.789227   26218 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 00:02:50.963536   26218 request.go:632] Waited for 174.232731ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods
	I0924 00:02:50.963617   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods
	I0924 00:02:50.963624   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:50.963635   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:50.963649   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:50.969906   26218 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0924 00:02:50.976880   26218 system_pods.go:59] 24 kube-system pods found
	I0924 00:02:50.976914   26218 system_pods.go:61] "coredns-7c65d6cfc9-nkbzw" [79bbcdf6-3ae9-4c2f-9d73-a990a069864f] Running
	I0924 00:02:50.976919   26218 system_pods.go:61] "coredns-7c65d6cfc9-ss8lg" [37bd392b-d364-4a64-8fa0-852bb245aedc] Running
	I0924 00:02:50.976923   26218 system_pods.go:61] "etcd-ha-959539" [ff55eab1-1a4f-4adf-85c4-1ed8fa3ad1ec] Running
	I0924 00:02:50.976928   26218 system_pods.go:61] "etcd-ha-959539-m02" [c2dcc425-5c60-4865-9b78-1f2352fd1729] Running
	I0924 00:02:50.976933   26218 system_pods.go:61] "etcd-ha-959539-m03" [a71adb46-5bbc-43ce-8ef0-2b03bf75da03] Running
	I0924 00:02:50.976938   26218 system_pods.go:61] "kindnet-cbrj7" [ad74ea31-a1ca-4632-b960-45e6de0fc117] Running
	I0924 00:02:50.976943   26218 system_pods.go:61] "kindnet-g4nkw" [32f2f545-b1a1-4f2b-8ee7-7fdb6409bc5f] Running
	I0924 00:02:50.976948   26218 system_pods.go:61] "kindnet-qlqss" [365f0414-b74d-42a8-be37-b0c8e03291ac] Running
	I0924 00:02:50.976953   26218 system_pods.go:61] "kube-apiserver-ha-959539" [2e15b758-6534-4b13-be16-42a2fd437b69] Running
	I0924 00:02:50.976958   26218 system_pods.go:61] "kube-apiserver-ha-959539-m02" [0ea9778e-f241-4c0d-9ea7-7e87bd667e10] Running
	I0924 00:02:50.976968   26218 system_pods.go:61] "kube-apiserver-ha-959539-m03" [7a54eb39-3ff9-4eb8-a5df-4333e1416899] Running
	I0924 00:02:50.976977   26218 system_pods.go:61] "kube-controller-manager-ha-959539" [b7da7091-f063-4f1a-bd0b-9f7136cd64a0] Running
	I0924 00:02:50.976985   26218 system_pods.go:61] "kube-controller-manager-ha-959539-m02" [29421b14-f01c-42dc-8c7d-b80cb32b9b7c] Running
	I0924 00:02:50.976991   26218 system_pods.go:61] "kube-controller-manager-ha-959539-m03" [bc95be18-c320-4981-8155-18432f08883e] Running
	I0924 00:02:50.976999   26218 system_pods.go:61] "kube-proxy-2hlqx" [c8e003fb-d3d0-425f-bc83-55122ed658ce] Running
	I0924 00:02:50.977007   26218 system_pods.go:61] "kube-proxy-b82ch" [5bf376fc-8dbe-4817-874c-506f5dc4d2e7] Running
	I0924 00:02:50.977015   26218 system_pods.go:61] "kube-proxy-qzklc" [19af917f-9661-4577-92ed-8fc44b573c64] Running
	I0924 00:02:50.977020   26218 system_pods.go:61] "kube-scheduler-ha-959539" [25a457b1-578e-4e53-8201-e99c001d80bd] Running
	I0924 00:02:50.977027   26218 system_pods.go:61] "kube-scheduler-ha-959539-m02" [716521cc-aa0c-4507-97e5-126dccc95359] Running
	I0924 00:02:50.977031   26218 system_pods.go:61] "kube-scheduler-ha-959539-m03" [e39eb1d7-90f3-4af9-9356-45ae9c23828d] Running
	I0924 00:02:50.977036   26218 system_pods.go:61] "kube-vip-ha-959539" [f80705df-80fe-48f0-a65c-b4e414523bdf] Running
	I0924 00:02:50.977044   26218 system_pods.go:61] "kube-vip-ha-959539-m02" [6d055131-a622-4398-8f2f-0146b867e8f8] Running
	I0924 00:02:50.977049   26218 system_pods.go:61] "kube-vip-ha-959539-m03" [3c5fd7f2-aec4-42d8-9331-ba59a4d76539] Running
	I0924 00:02:50.977058   26218 system_pods.go:61] "storage-provisioner" [3b7e0f07-8db9-4473-b3d2-c245c19d655b] Running
	I0924 00:02:50.977069   26218 system_pods.go:74] duration metric: took 187.832664ms to wait for pod list to return data ...
	I0924 00:02:50.977080   26218 default_sa.go:34] waiting for default service account to be created ...
	I0924 00:02:51.162900   26218 request.go:632] Waited for 185.733558ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/default/serviceaccounts
	I0924 00:02:51.162976   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/default/serviceaccounts
	I0924 00:02:51.162988   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:51.162995   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:51.163003   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:51.166765   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:51.166900   26218 default_sa.go:45] found service account: "default"
	I0924 00:02:51.166916   26218 default_sa.go:55] duration metric: took 189.8293ms for default service account to be created ...
	I0924 00:02:51.166927   26218 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 00:02:51.363374   26218 request.go:632] Waited for 196.378603ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods
	I0924 00:02:51.363436   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods
	I0924 00:02:51.363443   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:51.363453   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:51.363458   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:51.370348   26218 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0924 00:02:51.376926   26218 system_pods.go:86] 24 kube-system pods found
	I0924 00:02:51.376957   26218 system_pods.go:89] "coredns-7c65d6cfc9-nkbzw" [79bbcdf6-3ae9-4c2f-9d73-a990a069864f] Running
	I0924 00:02:51.376966   26218 system_pods.go:89] "coredns-7c65d6cfc9-ss8lg" [37bd392b-d364-4a64-8fa0-852bb245aedc] Running
	I0924 00:02:51.376972   26218 system_pods.go:89] "etcd-ha-959539" [ff55eab1-1a4f-4adf-85c4-1ed8fa3ad1ec] Running
	I0924 00:02:51.376977   26218 system_pods.go:89] "etcd-ha-959539-m02" [c2dcc425-5c60-4865-9b78-1f2352fd1729] Running
	I0924 00:02:51.376984   26218 system_pods.go:89] "etcd-ha-959539-m03" [a71adb46-5bbc-43ce-8ef0-2b03bf75da03] Running
	I0924 00:02:51.376989   26218 system_pods.go:89] "kindnet-cbrj7" [ad74ea31-a1ca-4632-b960-45e6de0fc117] Running
	I0924 00:02:51.376994   26218 system_pods.go:89] "kindnet-g4nkw" [32f2f545-b1a1-4f2b-8ee7-7fdb6409bc5f] Running
	I0924 00:02:51.377000   26218 system_pods.go:89] "kindnet-qlqss" [365f0414-b74d-42a8-be37-b0c8e03291ac] Running
	I0924 00:02:51.377006   26218 system_pods.go:89] "kube-apiserver-ha-959539" [2e15b758-6534-4b13-be16-42a2fd437b69] Running
	I0924 00:02:51.377012   26218 system_pods.go:89] "kube-apiserver-ha-959539-m02" [0ea9778e-f241-4c0d-9ea7-7e87bd667e10] Running
	I0924 00:02:51.377018   26218 system_pods.go:89] "kube-apiserver-ha-959539-m03" [7a54eb39-3ff9-4eb8-a5df-4333e1416899] Running
	I0924 00:02:51.377026   26218 system_pods.go:89] "kube-controller-manager-ha-959539" [b7da7091-f063-4f1a-bd0b-9f7136cd64a0] Running
	I0924 00:02:51.377036   26218 system_pods.go:89] "kube-controller-manager-ha-959539-m02" [29421b14-f01c-42dc-8c7d-b80cb32b9b7c] Running
	I0924 00:02:51.377042   26218 system_pods.go:89] "kube-controller-manager-ha-959539-m03" [bc95be18-c320-4981-8155-18432f08883e] Running
	I0924 00:02:51.377051   26218 system_pods.go:89] "kube-proxy-2hlqx" [c8e003fb-d3d0-425f-bc83-55122ed658ce] Running
	I0924 00:02:51.377057   26218 system_pods.go:89] "kube-proxy-b82ch" [5bf376fc-8dbe-4817-874c-506f5dc4d2e7] Running
	I0924 00:02:51.377066   26218 system_pods.go:89] "kube-proxy-qzklc" [19af917f-9661-4577-92ed-8fc44b573c64] Running
	I0924 00:02:51.377072   26218 system_pods.go:89] "kube-scheduler-ha-959539" [25a457b1-578e-4e53-8201-e99c001d80bd] Running
	I0924 00:02:51.377080   26218 system_pods.go:89] "kube-scheduler-ha-959539-m02" [716521cc-aa0c-4507-97e5-126dccc95359] Running
	I0924 00:02:51.377086   26218 system_pods.go:89] "kube-scheduler-ha-959539-m03" [e39eb1d7-90f3-4af9-9356-45ae9c23828d] Running
	I0924 00:02:51.377094   26218 system_pods.go:89] "kube-vip-ha-959539" [f80705df-80fe-48f0-a65c-b4e414523bdf] Running
	I0924 00:02:51.377100   26218 system_pods.go:89] "kube-vip-ha-959539-m02" [6d055131-a622-4398-8f2f-0146b867e8f8] Running
	I0924 00:02:51.377105   26218 system_pods.go:89] "kube-vip-ha-959539-m03" [3c5fd7f2-aec4-42d8-9331-ba59a4d76539] Running
	I0924 00:02:51.377111   26218 system_pods.go:89] "storage-provisioner" [3b7e0f07-8db9-4473-b3d2-c245c19d655b] Running
	I0924 00:02:51.377123   26218 system_pods.go:126] duration metric: took 210.186327ms to wait for k8s-apps to be running ...
	I0924 00:02:51.377134   26218 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 00:02:51.377189   26218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 00:02:51.392588   26218 system_svc.go:56] duration metric: took 15.444721ms WaitForService to wait for kubelet
	I0924 00:02:51.392618   26218 kubeadm.go:582] duration metric: took 22.64385975s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 00:02:51.392638   26218 node_conditions.go:102] verifying NodePressure condition ...
	I0924 00:02:51.563072   26218 request.go:632] Waited for 170.361096ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes
	I0924 00:02:51.563121   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes
	I0924 00:02:51.563126   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:51.563134   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:51.563139   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:51.567517   26218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 00:02:51.569246   26218 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 00:02:51.569269   26218 node_conditions.go:123] node cpu capacity is 2
	I0924 00:02:51.569282   26218 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 00:02:51.569287   26218 node_conditions.go:123] node cpu capacity is 2
	I0924 00:02:51.569293   26218 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 00:02:51.569298   26218 node_conditions.go:123] node cpu capacity is 2
	I0924 00:02:51.569305   26218 node_conditions.go:105] duration metric: took 176.660035ms to run NodePressure ...
	I0924 00:02:51.569328   26218 start.go:241] waiting for startup goroutines ...
	I0924 00:02:51.569355   26218 start.go:255] writing updated cluster config ...
	I0924 00:02:51.569656   26218 ssh_runner.go:195] Run: rm -f paused
	I0924 00:02:51.621645   26218 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0924 00:02:51.623613   26218 out.go:177] * Done! kubectl is now configured to use "ha-959539" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 24 00:06:33 ha-959539 crio[665]: time="2024-09-24 00:06:33.091569687Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136393091544056,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e1bed51c-21f6-42fc-beea-956226dcab9e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:06:33 ha-959539 crio[665]: time="2024-09-24 00:06:33.092063677Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3e7f548e-3869-483b-908a-e29664bf83ab name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:06:33 ha-959539 crio[665]: time="2024-09-24 00:06:33.092120642Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3e7f548e-3869-483b-908a-e29664bf83ab name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:06:33 ha-959539 crio[665]: time="2024-09-24 00:06:33.092392853Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ae8646f943f6d158d9cb6123ee395d7f02fe8f4194ea968bf904f9d60ac4c8d1,PodSandboxId:4b5dbf2a2189385e09c02ad65761e1007bbf4b930164894bc8f1b76217964067,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727136176666029632,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7q7xr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ee31bb5-0c3d-4d9e-9c9e-32d3411bc68a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05d43a4d133008f80d44c870878a45ceeb7e0e1adc3b08d1d47ce8e2edb36137,PodSandboxId:a91a16106518aeb7290ee145c6ebba24fbaf0ab1b928eb6005c2982202d15f69,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727136026589850568,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nkbzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79bbcdf6-3ae9-4c2f-9d73-a990a069864f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7a1a19a83d492cf0fda7b3bbf01fb7a12a73aac4c564ad4c2805be1c00817f0,PodSandboxId:1a4ee0160fc1d9dd6258f8fde766345d31e45e3e0d6790d4d9d5bd708cbcb206,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727136026542529982,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ss8lg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
37bd392b-d364-4a64-8fa0-852bb245aedc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eb114bb7775dcb227b0e90d5b566479bcd948dc40610c14af59f316412ffabf,PodSandboxId:2ffb51384d9a50b5162ea3a6190770d5887aab9dcc4b470a8939a98ed67ffa04,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1727136026450686637,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b7e0f07-8db9-4473-b3d2-c245c19d655b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1596300e66cf2ee0f15d5a362238ef4e99cec8e83ea81be4670347659bbd88e2,PodSandboxId:1a380d04710836380fbd07e38a88bd6c32797798fac60cedb945001fcef619bd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17271360
14417430026,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qlqss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 365f0414-b74d-42a8-be37-b0c8e03291ac,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdf912809c47a4ae8dddad14a4f537540e25c97121c16fa62418fc1290a8b94b,PodSandboxId:72ade1a0510455fbb68e236046efac5db7e130775d8731e968c6403583d8f266,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727136014134599532,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qzklc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19af917f-9661-4577-92ed-8fc44b573c64,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b61587cd3ccea52e3762f607ce17d21719c646d22ac10052629a209fe6ddbf3c,PodSandboxId:f6a8ccad216f1ff4f82acffd07977d426ef7ac36b9dad5f0989e477a11e66cf9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727136010027927828,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f69ffc952d0f295da88120340eae744e,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5459f3bc533d3928a2614c8aadaac043e36370d032cae8eb72d11664bbb74f2,PodSandboxId:40d143641822b8cfe35213ab0da141ef26cf5d327320371cdaf07dee367e1c67,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727136003255288728,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b2368dd5a295d09af7714df1de67bdb,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a42356ed739fd4c4bc65cb2d15edfb13fc395f88d73e9c25e9c7f9799ae6b974,PodSandboxId:c7d97a67f80f61d1406488dc953f78d225b73ace23d35142119dcf053114c4ee,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727136003229309223,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5ad980392063f2813e3d6dca84f9d9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af224d12661c48914733f3b63e29d17a8fceaf62af777bb609750fb7c8dbd9fd,PodSandboxId:7328f59cdb9935ae3cc6db004e93f8c91143470c0fbb7d2f75380c3331d66ec6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727136003245707453,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sche
duler-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31ee5d018ce106dd8052557f9d140def,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c911375acec93e238f1022936d6afb98f697168fca75291f15649e13def2288,PodSandboxId:7cdc58cf999c2a31d524cddeb690c57a3ba05b2201b109b586df23e0662a6c48,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727136003136808561,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-959539,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8862e93f261d2a6529b347e7a1404705,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3e7f548e-3869-483b-908a-e29664bf83ab name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:06:33 ha-959539 crio[665]: time="2024-09-24 00:06:33.131400290Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=73f7c8b9-e015-45b1-a171-05da0b8ec694 name=/runtime.v1.RuntimeService/Version
	Sep 24 00:06:33 ha-959539 crio[665]: time="2024-09-24 00:06:33.131496723Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=73f7c8b9-e015-45b1-a171-05da0b8ec694 name=/runtime.v1.RuntimeService/Version
	Sep 24 00:06:33 ha-959539 crio[665]: time="2024-09-24 00:06:33.132767420Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2b165775-4aa8-4551-aa74-c8fe47ef5d72 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:06:33 ha-959539 crio[665]: time="2024-09-24 00:06:33.133180864Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136393133158749,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2b165775-4aa8-4551-aa74-c8fe47ef5d72 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:06:33 ha-959539 crio[665]: time="2024-09-24 00:06:33.134044212Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fd9deff1-7b35-43cd-af59-a0886aa26f51 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:06:33 ha-959539 crio[665]: time="2024-09-24 00:06:33.134101433Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fd9deff1-7b35-43cd-af59-a0886aa26f51 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:06:33 ha-959539 crio[665]: time="2024-09-24 00:06:33.134410251Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ae8646f943f6d158d9cb6123ee395d7f02fe8f4194ea968bf904f9d60ac4c8d1,PodSandboxId:4b5dbf2a2189385e09c02ad65761e1007bbf4b930164894bc8f1b76217964067,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727136176666029632,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7q7xr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ee31bb5-0c3d-4d9e-9c9e-32d3411bc68a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05d43a4d133008f80d44c870878a45ceeb7e0e1adc3b08d1d47ce8e2edb36137,PodSandboxId:a91a16106518aeb7290ee145c6ebba24fbaf0ab1b928eb6005c2982202d15f69,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727136026589850568,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nkbzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79bbcdf6-3ae9-4c2f-9d73-a990a069864f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7a1a19a83d492cf0fda7b3bbf01fb7a12a73aac4c564ad4c2805be1c00817f0,PodSandboxId:1a4ee0160fc1d9dd6258f8fde766345d31e45e3e0d6790d4d9d5bd708cbcb206,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727136026542529982,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ss8lg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
37bd392b-d364-4a64-8fa0-852bb245aedc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eb114bb7775dcb227b0e90d5b566479bcd948dc40610c14af59f316412ffabf,PodSandboxId:2ffb51384d9a50b5162ea3a6190770d5887aab9dcc4b470a8939a98ed67ffa04,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1727136026450686637,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b7e0f07-8db9-4473-b3d2-c245c19d655b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1596300e66cf2ee0f15d5a362238ef4e99cec8e83ea81be4670347659bbd88e2,PodSandboxId:1a380d04710836380fbd07e38a88bd6c32797798fac60cedb945001fcef619bd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17271360
14417430026,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qlqss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 365f0414-b74d-42a8-be37-b0c8e03291ac,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdf912809c47a4ae8dddad14a4f537540e25c97121c16fa62418fc1290a8b94b,PodSandboxId:72ade1a0510455fbb68e236046efac5db7e130775d8731e968c6403583d8f266,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727136014134599532,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qzklc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19af917f-9661-4577-92ed-8fc44b573c64,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b61587cd3ccea52e3762f607ce17d21719c646d22ac10052629a209fe6ddbf3c,PodSandboxId:f6a8ccad216f1ff4f82acffd07977d426ef7ac36b9dad5f0989e477a11e66cf9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727136010027927828,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f69ffc952d0f295da88120340eae744e,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5459f3bc533d3928a2614c8aadaac043e36370d032cae8eb72d11664bbb74f2,PodSandboxId:40d143641822b8cfe35213ab0da141ef26cf5d327320371cdaf07dee367e1c67,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727136003255288728,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b2368dd5a295d09af7714df1de67bdb,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a42356ed739fd4c4bc65cb2d15edfb13fc395f88d73e9c25e9c7f9799ae6b974,PodSandboxId:c7d97a67f80f61d1406488dc953f78d225b73ace23d35142119dcf053114c4ee,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727136003229309223,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5ad980392063f2813e3d6dca84f9d9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af224d12661c48914733f3b63e29d17a8fceaf62af777bb609750fb7c8dbd9fd,PodSandboxId:7328f59cdb9935ae3cc6db004e93f8c91143470c0fbb7d2f75380c3331d66ec6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727136003245707453,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sche
duler-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31ee5d018ce106dd8052557f9d140def,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c911375acec93e238f1022936d6afb98f697168fca75291f15649e13def2288,PodSandboxId:7cdc58cf999c2a31d524cddeb690c57a3ba05b2201b109b586df23e0662a6c48,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727136003136808561,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-959539,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8862e93f261d2a6529b347e7a1404705,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fd9deff1-7b35-43cd-af59-a0886aa26f51 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:06:33 ha-959539 crio[665]: time="2024-09-24 00:06:33.177418712Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1f2571ae-4dee-47ae-9a43-e6a1281698aa name=/runtime.v1.RuntimeService/Version
	Sep 24 00:06:33 ha-959539 crio[665]: time="2024-09-24 00:06:33.177533188Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1f2571ae-4dee-47ae-9a43-e6a1281698aa name=/runtime.v1.RuntimeService/Version
	Sep 24 00:06:33 ha-959539 crio[665]: time="2024-09-24 00:06:33.178775983Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=42456bb5-fae9-4e21-acbf-9c5cd104f6dd name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:06:33 ha-959539 crio[665]: time="2024-09-24 00:06:33.179180132Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136393179159682,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=42456bb5-fae9-4e21-acbf-9c5cd104f6dd name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:06:33 ha-959539 crio[665]: time="2024-09-24 00:06:33.179748261Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5110fa1c-df86-4ae3-b1a4-7c0f0f7f13ac name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:06:33 ha-959539 crio[665]: time="2024-09-24 00:06:33.179801320Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5110fa1c-df86-4ae3-b1a4-7c0f0f7f13ac name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:06:33 ha-959539 crio[665]: time="2024-09-24 00:06:33.180023091Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ae8646f943f6d158d9cb6123ee395d7f02fe8f4194ea968bf904f9d60ac4c8d1,PodSandboxId:4b5dbf2a2189385e09c02ad65761e1007bbf4b930164894bc8f1b76217964067,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727136176666029632,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7q7xr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ee31bb5-0c3d-4d9e-9c9e-32d3411bc68a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05d43a4d133008f80d44c870878a45ceeb7e0e1adc3b08d1d47ce8e2edb36137,PodSandboxId:a91a16106518aeb7290ee145c6ebba24fbaf0ab1b928eb6005c2982202d15f69,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727136026589850568,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nkbzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79bbcdf6-3ae9-4c2f-9d73-a990a069864f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7a1a19a83d492cf0fda7b3bbf01fb7a12a73aac4c564ad4c2805be1c00817f0,PodSandboxId:1a4ee0160fc1d9dd6258f8fde766345d31e45e3e0d6790d4d9d5bd708cbcb206,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727136026542529982,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ss8lg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
37bd392b-d364-4a64-8fa0-852bb245aedc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eb114bb7775dcb227b0e90d5b566479bcd948dc40610c14af59f316412ffabf,PodSandboxId:2ffb51384d9a50b5162ea3a6190770d5887aab9dcc4b470a8939a98ed67ffa04,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1727136026450686637,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b7e0f07-8db9-4473-b3d2-c245c19d655b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1596300e66cf2ee0f15d5a362238ef4e99cec8e83ea81be4670347659bbd88e2,PodSandboxId:1a380d04710836380fbd07e38a88bd6c32797798fac60cedb945001fcef619bd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17271360
14417430026,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qlqss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 365f0414-b74d-42a8-be37-b0c8e03291ac,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdf912809c47a4ae8dddad14a4f537540e25c97121c16fa62418fc1290a8b94b,PodSandboxId:72ade1a0510455fbb68e236046efac5db7e130775d8731e968c6403583d8f266,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727136014134599532,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qzklc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19af917f-9661-4577-92ed-8fc44b573c64,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b61587cd3ccea52e3762f607ce17d21719c646d22ac10052629a209fe6ddbf3c,PodSandboxId:f6a8ccad216f1ff4f82acffd07977d426ef7ac36b9dad5f0989e477a11e66cf9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727136010027927828,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f69ffc952d0f295da88120340eae744e,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5459f3bc533d3928a2614c8aadaac043e36370d032cae8eb72d11664bbb74f2,PodSandboxId:40d143641822b8cfe35213ab0da141ef26cf5d327320371cdaf07dee367e1c67,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727136003255288728,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b2368dd5a295d09af7714df1de67bdb,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a42356ed739fd4c4bc65cb2d15edfb13fc395f88d73e9c25e9c7f9799ae6b974,PodSandboxId:c7d97a67f80f61d1406488dc953f78d225b73ace23d35142119dcf053114c4ee,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727136003229309223,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5ad980392063f2813e3d6dca84f9d9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af224d12661c48914733f3b63e29d17a8fceaf62af777bb609750fb7c8dbd9fd,PodSandboxId:7328f59cdb9935ae3cc6db004e93f8c91143470c0fbb7d2f75380c3331d66ec6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727136003245707453,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sche
duler-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31ee5d018ce106dd8052557f9d140def,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c911375acec93e238f1022936d6afb98f697168fca75291f15649e13def2288,PodSandboxId:7cdc58cf999c2a31d524cddeb690c57a3ba05b2201b109b586df23e0662a6c48,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727136003136808561,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-959539,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8862e93f261d2a6529b347e7a1404705,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5110fa1c-df86-4ae3-b1a4-7c0f0f7f13ac name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:06:33 ha-959539 crio[665]: time="2024-09-24 00:06:33.216867810Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bccdf9c0-5de5-4952-8587-e32a7b73bda7 name=/runtime.v1.RuntimeService/Version
	Sep 24 00:06:33 ha-959539 crio[665]: time="2024-09-24 00:06:33.216939834Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bccdf9c0-5de5-4952-8587-e32a7b73bda7 name=/runtime.v1.RuntimeService/Version
	Sep 24 00:06:33 ha-959539 crio[665]: time="2024-09-24 00:06:33.218089050Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=70a0cd0e-ae51-4309-b380-f1dea08680f1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:06:33 ha-959539 crio[665]: time="2024-09-24 00:06:33.218611711Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136393218586232,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=70a0cd0e-ae51-4309-b380-f1dea08680f1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:06:33 ha-959539 crio[665]: time="2024-09-24 00:06:33.219163388Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e3904088-d899-48d5-a1bc-43205f88ec1a name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:06:33 ha-959539 crio[665]: time="2024-09-24 00:06:33.219219458Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e3904088-d899-48d5-a1bc-43205f88ec1a name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:06:33 ha-959539 crio[665]: time="2024-09-24 00:06:33.220147513Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ae8646f943f6d158d9cb6123ee395d7f02fe8f4194ea968bf904f9d60ac4c8d1,PodSandboxId:4b5dbf2a2189385e09c02ad65761e1007bbf4b930164894bc8f1b76217964067,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727136176666029632,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7q7xr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ee31bb5-0c3d-4d9e-9c9e-32d3411bc68a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05d43a4d133008f80d44c870878a45ceeb7e0e1adc3b08d1d47ce8e2edb36137,PodSandboxId:a91a16106518aeb7290ee145c6ebba24fbaf0ab1b928eb6005c2982202d15f69,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727136026589850568,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nkbzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79bbcdf6-3ae9-4c2f-9d73-a990a069864f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7a1a19a83d492cf0fda7b3bbf01fb7a12a73aac4c564ad4c2805be1c00817f0,PodSandboxId:1a4ee0160fc1d9dd6258f8fde766345d31e45e3e0d6790d4d9d5bd708cbcb206,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727136026542529982,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ss8lg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
37bd392b-d364-4a64-8fa0-852bb245aedc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eb114bb7775dcb227b0e90d5b566479bcd948dc40610c14af59f316412ffabf,PodSandboxId:2ffb51384d9a50b5162ea3a6190770d5887aab9dcc4b470a8939a98ed67ffa04,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1727136026450686637,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b7e0f07-8db9-4473-b3d2-c245c19d655b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1596300e66cf2ee0f15d5a362238ef4e99cec8e83ea81be4670347659bbd88e2,PodSandboxId:1a380d04710836380fbd07e38a88bd6c32797798fac60cedb945001fcef619bd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17271360
14417430026,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qlqss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 365f0414-b74d-42a8-be37-b0c8e03291ac,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdf912809c47a4ae8dddad14a4f537540e25c97121c16fa62418fc1290a8b94b,PodSandboxId:72ade1a0510455fbb68e236046efac5db7e130775d8731e968c6403583d8f266,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727136014134599532,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qzklc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19af917f-9661-4577-92ed-8fc44b573c64,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b61587cd3ccea52e3762f607ce17d21719c646d22ac10052629a209fe6ddbf3c,PodSandboxId:f6a8ccad216f1ff4f82acffd07977d426ef7ac36b9dad5f0989e477a11e66cf9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727136010027927828,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f69ffc952d0f295da88120340eae744e,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5459f3bc533d3928a2614c8aadaac043e36370d032cae8eb72d11664bbb74f2,PodSandboxId:40d143641822b8cfe35213ab0da141ef26cf5d327320371cdaf07dee367e1c67,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727136003255288728,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b2368dd5a295d09af7714df1de67bdb,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a42356ed739fd4c4bc65cb2d15edfb13fc395f88d73e9c25e9c7f9799ae6b974,PodSandboxId:c7d97a67f80f61d1406488dc953f78d225b73ace23d35142119dcf053114c4ee,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727136003229309223,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5ad980392063f2813e3d6dca84f9d9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af224d12661c48914733f3b63e29d17a8fceaf62af777bb609750fb7c8dbd9fd,PodSandboxId:7328f59cdb9935ae3cc6db004e93f8c91143470c0fbb7d2f75380c3331d66ec6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727136003245707453,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sche
duler-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31ee5d018ce106dd8052557f9d140def,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c911375acec93e238f1022936d6afb98f697168fca75291f15649e13def2288,PodSandboxId:7cdc58cf999c2a31d524cddeb690c57a3ba05b2201b109b586df23e0662a6c48,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727136003136808561,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-959539,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8862e93f261d2a6529b347e7a1404705,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e3904088-d899-48d5-a1bc-43205f88ec1a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ae8646f943f6d       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   4b5dbf2a21893       busybox-7dff88458-7q7xr
	05d43a4d13300       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   a91a16106518a       coredns-7c65d6cfc9-nkbzw
	e7a1a19a83d49       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   1a4ee0160fc1d       coredns-7c65d6cfc9-ss8lg
	2eb114bb7775d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   2ffb51384d9a5       storage-provisioner
	1596300e66cf2       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   1a380d0471083       kindnet-qlqss
	cdf912809c47a       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   72ade1a051045       kube-proxy-qzklc
	b61587cd3ccea       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   f6a8ccad216f1       kube-vip-ha-959539
	d5459f3bc533d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   40d143641822b       etcd-ha-959539
	af224d12661c4       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   7328f59cdb993       kube-scheduler-ha-959539
	a42356ed739fd       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   c7d97a67f80f6       kube-controller-manager-ha-959539
	8c911375acec9       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   7cdc58cf999c2       kube-apiserver-ha-959539
	
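The container status table above mirrors what the CRI runtime reports over its socket at the moment the logs were collected. As a minimal, illustrative check (not a command from the captured run; the socket path is assumed from CRI-O's default, which matches the cri-socket annotation in the node descriptions further down), a similar listing can be pulled from inside the node:

  # List all containers known to CRI-O on the primary node (illustrative)
  minikube ssh -p ha-959539 -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a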
	
	==> coredns [05d43a4d133008f80d44c870878a45ceeb7e0e1adc3b08d1d47ce8e2edb36137] <==
	[INFO] 10.244.0.4:50134 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.005141674s
	[INFO] 10.244.1.2:43867 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000223991s
	[INFO] 10.244.1.2:35996 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000101615s
	[INFO] 10.244.2.2:54425 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000224645s
	[INFO] 10.244.2.2:58169 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.00170508s
	[INFO] 10.244.0.4:55776 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107033s
	[INFO] 10.244.0.4:58501 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.017716872s
	[INFO] 10.244.0.4:37973 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0002021s
	[INFO] 10.244.0.4:43904 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000156858s
	[INFO] 10.244.0.4:48352 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000163626s
	[INFO] 10.244.1.2:52896 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132298s
	[INFO] 10.244.1.2:45449 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000227639s
	[INFO] 10.244.1.2:47616 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00017286s
	[INFO] 10.244.1.2:33521 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000108761s
	[INFO] 10.244.1.2:43587 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012987s
	[INFO] 10.244.2.2:52394 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001362s
	[INFO] 10.244.2.2:43819 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000119859s
	[INFO] 10.244.2.2:35291 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000097457s
	[INFO] 10.244.2.2:56966 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000168721s
	[INFO] 10.244.0.4:52779 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102739s
	[INFO] 10.244.2.2:59382 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000262295s
	[INFO] 10.244.2.2:44447 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000133384s
	[INFO] 10.244.2.2:52951 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000170462s
	[INFO] 10.244.2.2:46956 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000215226s
	[INFO] 10.244.2.2:53703 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000108727s
	
	
	==> coredns [e7a1a19a83d492cf0fda7b3bbf01fb7a12a73aac4c564ad4c2805be1c00817f0] <==
	[INFO] 10.244.1.2:36104 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002245521s
	[INFO] 10.244.1.2:41962 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001624615s
	[INFO] 10.244.1.2:36352 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000142132s
	[INFO] 10.244.2.2:54238 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001909893s
	[INFO] 10.244.2.2:38238 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000165226s
	[INFO] 10.244.2.2:40250 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00173003s
	[INFO] 10.244.2.2:53405 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000126728s
	[INFO] 10.244.0.4:46344 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000157852s
	[INFO] 10.244.0.4:57359 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000065958s
	[INFO] 10.244.0.4:43743 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000119977s
	[INFO] 10.244.1.2:32867 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000192169s
	[INFO] 10.244.1.2:43403 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000167697s
	[INFO] 10.244.1.2:57243 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000095722s
	[INFO] 10.244.1.2:48326 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000119715s
	[INFO] 10.244.2.2:49664 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000122596s
	[INFO] 10.244.2.2:40943 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000106169s
	[INFO] 10.244.0.4:36066 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121758s
	[INFO] 10.244.0.4:51023 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000156225s
	[INFO] 10.244.0.4:56715 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000125631s
	[INFO] 10.244.0.4:47944 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000103261s
	[INFO] 10.244.1.2:49407 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000148466s
	[INFO] 10.244.1.2:54979 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000116145s
	[INFO] 10.244.1.2:47442 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000097064s
	[INFO] 10.244.1.2:38143 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000188037s
	[INFO] 10.244.2.2:40107 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000086602s
	
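Both CoreDNS replicas log each query as record type, name, protocol, response code (NOERROR/NXDOMAIN) and latency, which is how the in-cluster DNS lookups exercised by these tests show up. As a hedged sketch of how such entries are typically produced (not part of this run; the pod name is hypothetical), an equivalent lookup can be issued from a throwaway pod:

  # Resolve the API service through cluster DNS; yields NOERROR entries like those above (pod name illustrative)
  kubectl --context ha-959539 run dns-probe --rm -it --restart=Never \
    --image=gcr.io/k8s-minikube/busybox -- nslookup kubernetes.default.svc.cluster.local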
	
	==> describe nodes <==
	Name:               ha-959539
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-959539
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c
	                    minikube.k8s.io/name=ha-959539
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_24T00_00_13_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 00:00:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-959539
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 00:06:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 00:03:16 +0000   Tue, 24 Sep 2024 00:00:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 00:03:16 +0000   Tue, 24 Sep 2024 00:00:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 00:03:16 +0000   Tue, 24 Sep 2024 00:00:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 00:03:16 +0000   Tue, 24 Sep 2024 00:00:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.231
	  Hostname:    ha-959539
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0a4b9ce5eed94a13bdbc682549e1dd1e
	  System UUID:                0a4b9ce5-eed9-4a13-bdbc-682549e1dd1e
	  Boot ID:                    679e0a2b-8772-4f6d-9e47-ba8190727387
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7q7xr              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 coredns-7c65d6cfc9-nkbzw             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m20s
	  kube-system                 coredns-7c65d6cfc9-ss8lg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m20s
	  kube-system                 etcd-ha-959539                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m21s
	  kube-system                 kindnet-qlqss                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m20s
	  kube-system                 kube-apiserver-ha-959539             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-controller-manager-ha-959539    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-proxy-qzklc                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 kube-scheduler-ha-959539             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-vip-ha-959539                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m18s  kube-proxy       
	  Normal  RegisteredNode           6m21s  node-controller  Node ha-959539 event: Registered Node ha-959539 in Controller
	  Normal  Starting                 6m21s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m21s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m21s  kubelet          Node ha-959539 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m21s  kubelet          Node ha-959539 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m21s  kubelet          Node ha-959539 status is now: NodeHasSufficientPID
	  Normal  NodeReady                6m8s   kubelet          Node ha-959539 status is now: NodeReady
	  Normal  RegisteredNode           5m20s  node-controller  Node ha-959539 event: Registered Node ha-959539 in Controller
	  Normal  RegisteredNode           4m     node-controller  Node ha-959539 event: Registered Node ha-959539 in Controller
	
	
	Name:               ha-959539-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-959539-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c
	                    minikube.k8s.io/name=ha-959539
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_24T00_01_07_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 00:01:05 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-959539-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 00:04:07 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 24 Sep 2024 00:03:07 +0000   Tue, 24 Sep 2024 00:04:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 24 Sep 2024 00:03:07 +0000   Tue, 24 Sep 2024 00:04:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 24 Sep 2024 00:03:07 +0000   Tue, 24 Sep 2024 00:04:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 24 Sep 2024 00:03:07 +0000   Tue, 24 Sep 2024 00:04:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.71
	  Hostname:    ha-959539-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0f78cfc70aad42d195f1884fe3a82e21
	  System UUID:                0f78cfc7-0aad-42d1-95f1-884fe3a82e21
	  Boot ID:                    247da00b-9587-4de7-aa45-9671f65dd14e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-m5qhr                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 etcd-ha-959539-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m28s
	  kube-system                 kindnet-cbrj7                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m28s
	  kube-system                 kube-apiserver-ha-959539-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 kube-controller-manager-ha-959539-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 kube-proxy-2hlqx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 kube-scheduler-ha-959539-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 kube-vip-ha-959539-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m24s                  kube-proxy       
	  Normal  Starting                 5m29s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m28s (x8 over 5m29s)  kubelet          Node ha-959539-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m28s (x8 over 5m29s)  kubelet          Node ha-959539-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m28s (x7 over 5m29s)  kubelet          Node ha-959539-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m26s                  node-controller  Node ha-959539-m02 event: Registered Node ha-959539-m02 in Controller
	  Normal  RegisteredNode           5m20s                  node-controller  Node ha-959539-m02 event: Registered Node ha-959539-m02 in Controller
	  Normal  RegisteredNode           4m                     node-controller  Node ha-959539-m02 event: Registered Node ha-959539-m02 in Controller
	  Normal  NodeNotReady             105s                   node-controller  Node ha-959539-m02 status is now: NodeNotReady
	
	
	Name:               ha-959539-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-959539-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c
	                    minikube.k8s.io/name=ha-959539
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_24T00_02_28_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 00:02:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-959539-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 00:06:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 00:03:26 +0000   Tue, 24 Sep 2024 00:02:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 00:03:26 +0000   Tue, 24 Sep 2024 00:02:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 00:03:26 +0000   Tue, 24 Sep 2024 00:02:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 00:03:26 +0000   Tue, 24 Sep 2024 00:02:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.244
	  Hostname:    ha-959539-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7e393f2c1cce4055aaf3b67371deff0b
	  System UUID:                7e393f2c-1cce-4055-aaf3-b67371deff0b
	  Boot ID:                    d3fa2681-c8c7-4049-92ed-f71eeaa56616
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-w9v6l                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 etcd-ha-959539-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m6s
	  kube-system                 kindnet-g4nkw                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m8s
	  kube-system                 kube-apiserver-ha-959539-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-controller-manager-ha-959539-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 kube-proxy-b82ch                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-scheduler-ha-959539-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 kube-vip-ha-959539-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m4s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  4m8s (x8 over 4m8s)  kubelet          Node ha-959539-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m8s (x8 over 4m8s)  kubelet          Node ha-959539-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m8s (x7 over 4m8s)  kubelet          Node ha-959539-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m6s                 node-controller  Node ha-959539-m03 event: Registered Node ha-959539-m03 in Controller
	  Normal  RegisteredNode           4m5s                 node-controller  Node ha-959539-m03 event: Registered Node ha-959539-m03 in Controller
	  Normal  RegisteredNode           4m                   node-controller  Node ha-959539-m03 event: Registered Node ha-959539-m03 in Controller
	
	
	Name:               ha-959539-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-959539-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c
	                    minikube.k8s.io/name=ha-959539
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_24T00_03_32_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 00:03:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-959539-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 00:06:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 00:04:02 +0000   Tue, 24 Sep 2024 00:03:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 00:04:02 +0000   Tue, 24 Sep 2024 00:03:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 00:04:02 +0000   Tue, 24 Sep 2024 00:03:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 00:04:02 +0000   Tue, 24 Sep 2024 00:03:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.183
	  Hostname:    ha-959539-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 55d6e549bf6d4455bd4db681e2cc17b8
	  System UUID:                55d6e549-bf6d-4455-bd4d-b681e2cc17b8
	  Boot ID:                    0f7b628e-f628-48c1-aab1-6401b3cfb87c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-54xw8       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m2s
	  kube-system                 kube-proxy-8h8qr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  3m2s (x2 over 3m2s)  kubelet          Node ha-959539-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m2s (x2 over 3m2s)  kubelet          Node ha-959539-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m2s (x2 over 3m2s)  kubelet          Node ha-959539-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m                   node-controller  Node ha-959539-m04 event: Registered Node ha-959539-m04 in Controller
	  Normal  RegisteredNode           3m                   node-controller  Node ha-959539-m04 event: Registered Node ha-959539-m04 in Controller
	  Normal  RegisteredNode           3m                   node-controller  Node ha-959539-m04 event: Registered Node ha-959539-m04 in Controller
	  Normal  NodeReady                2m41s                kubelet          Node ha-959539-m04 status is now: NodeReady
	
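The four node descriptions above correspond to kubectl describe nodes output collected at failure time; the notable signal is that ha-959539-m02 carries node.kubernetes.io/unreachable taints and a NodeNotReady event while the other three nodes report Ready. A quick way to reproduce this summary against the same cluster (commands are illustrative, not from the captured run):

  # Condensed view of node readiness and roles
  kubectl --context ha-959539 get nodes -o wide
  # Full conditions, taints and events for the stopped secondary control plane
  kubectl --context ha-959539 describe node ha-959539-m02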
	
	==> dmesg <==
	[Sep23 23:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051430] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037836] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.729802] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.844348] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.545165] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.336873] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.055717] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062835] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.175047] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.141488] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.281309] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +3.886660] systemd-fstab-generator[750]: Ignoring "noauto" option for root device
	[Sep24 00:00] systemd-fstab-generator[888]: Ignoring "noauto" option for root device
	[  +0.061155] kauditd_printk_skb: 158 callbacks suppressed
	[  +8.064379] kauditd_printk_skb: 74 callbacks suppressed
	[  +2.136832] systemd-fstab-generator[1303]: Ignoring "noauto" option for root device
	[  +2.892614] kauditd_printk_skb: 43 callbacks suppressed
	[ +11.264409] kauditd_printk_skb: 15 callbacks suppressed
	[Sep24 00:01] kauditd_printk_skb: 26 callbacks suppressed
	
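The dmesg excerpt above mostly records ordinary guest boot noise (the nomodeset warning, systemd-fstab-generator messages, suppressed kauditd callbacks) rather than anything test-specific. If a fresher kernel log is needed while the node is still up, it can be fetched directly (illustrative command, not part of this run):

  # Dump the kernel ring buffer from the primary node
  minikube ssh -p ha-959539 -- sudo dmesg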
	
	==> etcd [d5459f3bc533d3928a2614c8aadaac043e36370d032cae8eb72d11664bbb74f2] <==
	{"level":"warn","ts":"2024-09-24T00:06:33.446063Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:33.484867Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:33.500907Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:33.511091Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:33.516592Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:33.531291Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:33.540163Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:33.546478Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:33.548545Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:33.553113Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:33.556892Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:33.564406Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:33.571165Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:33.577655Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:33.581973Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:33.585609Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:33.592694Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:33.599101Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:33.605723Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:33.613504Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:33.616719Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:33.620180Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:33.628267Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:33.636301Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:33.646524Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 00:06:33 up 7 min,  0 users,  load average: 0.54, 0.26, 0.11
	Linux ha-959539 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [1596300e66cf2ee0f15d5a362238ef4e99cec8e83ea81be4670347659bbd88e2] <==
	I0924 00:05:55.421232       1 main.go:299] handling current node
	I0924 00:06:05.421885       1 main.go:295] Handling node with IPs: map[192.168.39.244:{}]
	I0924 00:06:05.422086       1 main.go:322] Node ha-959539-m03 has CIDR [10.244.2.0/24] 
	I0924 00:06:05.422284       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0924 00:06:05.422309       1 main.go:322] Node ha-959539-m04 has CIDR [10.244.3.0/24] 
	I0924 00:06:05.422434       1 main.go:295] Handling node with IPs: map[192.168.39.231:{}]
	I0924 00:06:05.422460       1 main.go:299] handling current node
	I0924 00:06:05.422505       1 main.go:295] Handling node with IPs: map[192.168.39.71:{}]
	I0924 00:06:05.422531       1 main.go:322] Node ha-959539-m02 has CIDR [10.244.1.0/24] 
	I0924 00:06:15.413375       1 main.go:295] Handling node with IPs: map[192.168.39.231:{}]
	I0924 00:06:15.413431       1 main.go:299] handling current node
	I0924 00:06:15.413451       1 main.go:295] Handling node with IPs: map[192.168.39.71:{}]
	I0924 00:06:15.413457       1 main.go:322] Node ha-959539-m02 has CIDR [10.244.1.0/24] 
	I0924 00:06:15.413644       1 main.go:295] Handling node with IPs: map[192.168.39.244:{}]
	I0924 00:06:15.413665       1 main.go:322] Node ha-959539-m03 has CIDR [10.244.2.0/24] 
	I0924 00:06:15.413709       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0924 00:06:15.413714       1 main.go:322] Node ha-959539-m04 has CIDR [10.244.3.0/24] 
	I0924 00:06:25.420493       1 main.go:295] Handling node with IPs: map[192.168.39.231:{}]
	I0924 00:06:25.420595       1 main.go:299] handling current node
	I0924 00:06:25.420622       1 main.go:295] Handling node with IPs: map[192.168.39.71:{}]
	I0924 00:06:25.420640       1 main.go:322] Node ha-959539-m02 has CIDR [10.244.1.0/24] 
	I0924 00:06:25.420821       1 main.go:295] Handling node with IPs: map[192.168.39.244:{}]
	I0924 00:06:25.420897       1 main.go:322] Node ha-959539-m03 has CIDR [10.244.2.0/24] 
	I0924 00:06:25.420983       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0924 00:06:25.421005       1 main.go:322] Node ha-959539-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [8c911375acec93e238f1022936d6afb98f697168fca75291f15649e13def2288] <==
	I0924 00:00:07.916652       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0924 00:00:12.613775       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0924 00:00:12.673306       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0924 00:00:12.714278       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0924 00:00:13.518109       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0924 00:00:13.589977       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0924 00:02:25.922866       1 wrap.go:53] "Timeout or abort while handling" logger="UnhandledError" method="POST" URI="/api/v1/namespaces/kube-system/events" auditID="9c890d06-5a2f-40bc-b52e-84153e1ff033"
	E0924 00:02:25.923053       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="6.218µs" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E0924 00:02:25.923547       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 800.044µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0924 00:02:57.928651       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42468: use of closed network connection
	E0924 00:02:58.108585       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42478: use of closed network connection
	E0924 00:02:58.286933       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42500: use of closed network connection
	E0924 00:02:58.488672       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42526: use of closed network connection
	E0924 00:02:58.667114       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42542: use of closed network connection
	E0924 00:02:58.850942       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42560: use of closed network connection
	E0924 00:02:59.040828       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42576: use of closed network connection
	E0924 00:02:59.220980       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42590: use of closed network connection
	E0924 00:02:59.394600       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42608: use of closed network connection
	E0924 00:02:59.676143       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42636: use of closed network connection
	E0924 00:02:59.860764       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42646: use of closed network connection
	E0924 00:03:00.047956       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42676: use of closed network connection
	E0924 00:03:00.214607       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42700: use of closed network connection
	E0924 00:03:00.390729       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42708: use of closed network connection
	E0924 00:03:00.581800       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42734: use of closed network connection
	W0924 00:04:17.715664       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.231 192.168.39.244]
	
	
	==> kube-controller-manager [a42356ed739fd4c4bc65cb2d15edfb13fc395f88d73e9c25e9c7f9799ae6b974] <==
	I0924 00:03:31.919493       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-959539-m04" podCIDRs=["10.244.3.0/24"]
	I0924 00:03:31.919545       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	I0924 00:03:31.919581       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	I0924 00:03:31.939956       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	I0924 00:03:32.140223       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	I0924 00:03:32.547615       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	I0924 00:03:33.004678       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-959539-m04"
	I0924 00:03:33.023454       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	I0924 00:03:33.163542       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	I0924 00:03:33.196770       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	I0924 00:03:33.276017       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	I0924 00:03:33.293134       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	I0924 00:03:42.271059       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	I0924 00:03:52.595797       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-959539-m04"
	I0924 00:03:52.595900       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	I0924 00:03:52.614607       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	I0924 00:03:53.023412       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	I0924 00:04:02.710901       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	I0924 00:04:48.048138       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-959539-m04"
	I0924 00:04:48.048400       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m02"
	I0924 00:04:48.078576       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m02"
	I0924 00:04:48.166696       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="57.971716ms"
	I0924 00:04:48.166889       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="102.521µs"
	I0924 00:04:48.406838       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m02"
	I0924 00:04:53.246642       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m02"
	
	
	==> kube-proxy [cdf912809c47a4ae8dddad14a4f537540e25c97121c16fa62418fc1290a8b94b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0924 00:00:14.873543       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0924 00:00:14.915849       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.231"]
	E0924 00:00:14.916021       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0924 00:00:14.966031       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0924 00:00:14.966075       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0924 00:00:14.966099       1 server_linux.go:169] "Using iptables Proxier"
	I0924 00:00:14.979823       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0924 00:00:14.980813       1 server.go:483] "Version info" version="v1.31.1"
	I0924 00:00:14.980842       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 00:00:14.989078       1 config.go:199] "Starting service config controller"
	I0924 00:00:14.990228       1 config.go:105] "Starting endpoint slice config controller"
	I0924 00:00:14.990251       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0924 00:00:14.993409       1 config.go:328] "Starting node config controller"
	I0924 00:00:14.993460       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0924 00:00:14.993657       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0924 00:00:15.090975       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0924 00:00:15.094378       1 shared_informer.go:320] Caches are synced for node config
	I0924 00:00:15.094379       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [af224d12661c48914733f3b63e29d17a8fceaf62af777bb609750fb7c8dbd9fd] <==
	E0924 00:00:07.294311       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 00:00:07.525201       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0924 00:00:07.525260       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0924 00:00:10.263814       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0924 00:02:25.214912       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-g4nkw\": pod kindnet-g4nkw is already assigned to node \"ha-959539-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-g4nkw" node="ha-959539-m03"
	E0924 00:02:25.215083       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-g4nkw\": pod kindnet-g4nkw is already assigned to node \"ha-959539-m03\"" pod="kube-system/kindnet-g4nkw"
	E0924 00:02:25.219021       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-b82ch\": pod kube-proxy-b82ch is already assigned to node \"ha-959539-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-b82ch" node="ha-959539-m03"
	E0924 00:02:25.222512       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 5bf376fc-8dbe-4817-874c-506f5dc4d2e7(kube-system/kube-proxy-b82ch) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-b82ch"
	E0924 00:02:25.222635       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-b82ch\": pod kube-proxy-b82ch is already assigned to node \"ha-959539-m03\"" pod="kube-system/kube-proxy-b82ch"
	I0924 00:02:25.222722       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-b82ch" node="ha-959539-m03"
	E0924 00:02:26.361885       1 schedule_one.go:953] "Scheduler cache AssumePod failed" err="pod 32f2f545-b1a1-4f2b-8ee7-7fdb6409bc5f(kube-system/kindnet-g4nkw) is in the cache, so can't be assumed" pod="kube-system/kindnet-g4nkw"
	E0924 00:02:26.362043       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="pod 32f2f545-b1a1-4f2b-8ee7-7fdb6409bc5f(kube-system/kindnet-g4nkw) is in the cache, so can't be assumed" pod="kube-system/kindnet-g4nkw"
	I0924 00:02:26.362147       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-g4nkw" node="ha-959539-m03"
	E0924 00:02:52.586244       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-m5qhr\": pod busybox-7dff88458-m5qhr is already assigned to node \"ha-959539-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-m5qhr" node="ha-959539-m02"
	E0924 00:02:52.586487       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-m5qhr\": pod busybox-7dff88458-m5qhr is already assigned to node \"ha-959539-m02\"" pod="default/busybox-7dff88458-m5qhr"
	E0924 00:02:52.609367       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-7q7xr\": pod busybox-7dff88458-7q7xr is already assigned to node \"ha-959539\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-7q7xr" node="ha-959539"
	E0924 00:02:52.609752       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 7ee31bb5-0c3d-4d9e-9c9e-32d3411bc68a(default/busybox-7dff88458-7q7xr) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-7q7xr"
	E0924 00:02:52.609813       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-7q7xr\": pod busybox-7dff88458-7q7xr is already assigned to node \"ha-959539\"" pod="default/busybox-7dff88458-7q7xr"
	I0924 00:02:52.609856       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-7q7xr" node="ha-959539"
	E0924 00:03:31.974702       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-h87p2\": pod kube-proxy-h87p2 is already assigned to node \"ha-959539-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-h87p2" node="ha-959539-m04"
	E0924 00:03:31.975081       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 9594238c-336e-479f-8424-bf5663475f7d(kube-system/kube-proxy-h87p2) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-h87p2"
	E0924 00:03:31.975198       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-h87p2\": pod kube-proxy-h87p2 is already assigned to node \"ha-959539-m04\"" pod="kube-system/kube-proxy-h87p2"
	I0924 00:03:31.975297       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-h87p2" node="ha-959539-m04"
	E0924 00:03:32.025106       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-zfglg\": pod kindnet-zfglg is already assigned to node \"ha-959539-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-zfglg" node="ha-959539-m04"
	E0924 00:03:32.025246       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-zfglg\": pod kindnet-zfglg is already assigned to node \"ha-959539-m04\"" pod="kube-system/kindnet-zfglg"
	
	
	==> kubelet <==
	Sep 24 00:05:12 ha-959539 kubelet[1310]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 24 00:05:12 ha-959539 kubelet[1310]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 24 00:05:12 ha-959539 kubelet[1310]: E0924 00:05:12.631688    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136312631299697,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:05:12 ha-959539 kubelet[1310]: E0924 00:05:12.631721    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136312631299697,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:05:22 ha-959539 kubelet[1310]: E0924 00:05:22.633953    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136322633526599,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:05:22 ha-959539 kubelet[1310]: E0924 00:05:22.634395    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136322633526599,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:05:32 ha-959539 kubelet[1310]: E0924 00:05:32.636027    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136332635686531,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:05:32 ha-959539 kubelet[1310]: E0924 00:05:32.636067    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136332635686531,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:05:42 ha-959539 kubelet[1310]: E0924 00:05:42.638244    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136342637928063,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:05:42 ha-959539 kubelet[1310]: E0924 00:05:42.638707    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136342637928063,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:05:52 ha-959539 kubelet[1310]: E0924 00:05:52.640591    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136352640129305,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:05:52 ha-959539 kubelet[1310]: E0924 00:05:52.640630    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136352640129305,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:06:02 ha-959539 kubelet[1310]: E0924 00:06:02.642027    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136362641594633,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:06:02 ha-959539 kubelet[1310]: E0924 00:06:02.642364    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136362641594633,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:06:12 ha-959539 kubelet[1310]: E0924 00:06:12.540506    1310 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 24 00:06:12 ha-959539 kubelet[1310]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 24 00:06:12 ha-959539 kubelet[1310]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 24 00:06:12 ha-959539 kubelet[1310]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 24 00:06:12 ha-959539 kubelet[1310]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 24 00:06:12 ha-959539 kubelet[1310]: E0924 00:06:12.644146    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136372643846607,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:06:12 ha-959539 kubelet[1310]: E0924 00:06:12.644181    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136372643846607,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:06:22 ha-959539 kubelet[1310]: E0924 00:06:22.646770    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136382645975347,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:06:22 ha-959539 kubelet[1310]: E0924 00:06:22.647251    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136382645975347,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:06:32 ha-959539 kubelet[1310]: E0924 00:06:32.649495    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136392649118233,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:06:32 ha-959539 kubelet[1310]: E0924 00:06:32.649564    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136392649118233,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-959539 -n ha-959539
helpers_test.go:261: (dbg) Run:  kubectl --context ha-959539 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.52s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.40544089s)
ha_test.go:413: expected profile "ha-959539" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-959539\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-959539\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-959539\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.231\",\"Port\":8443,\"Kube
rnetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.71\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.244\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.183\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\
"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\
":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-959539 -n ha-959539
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-959539 logs -n 25: (1.325600659s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-959539 cp ha-959539-m03:/home/docker/cp-test.txt                              | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4152452105/001/cp-test_ha-959539-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n                                                                 | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-959539 cp ha-959539-m03:/home/docker/cp-test.txt                              | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539:/home/docker/cp-test_ha-959539-m03_ha-959539.txt                       |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n                                                                 | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n ha-959539 sudo cat                                              | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | /home/docker/cp-test_ha-959539-m03_ha-959539.txt                                 |           |         |         |                     |                     |
	| cp      | ha-959539 cp ha-959539-m03:/home/docker/cp-test.txt                              | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m02:/home/docker/cp-test_ha-959539-m03_ha-959539-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n                                                                 | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n ha-959539-m02 sudo cat                                          | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | /home/docker/cp-test_ha-959539-m03_ha-959539-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-959539 cp ha-959539-m03:/home/docker/cp-test.txt                              | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m04:/home/docker/cp-test_ha-959539-m03_ha-959539-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n                                                                 | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n ha-959539-m04 sudo cat                                          | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | /home/docker/cp-test_ha-959539-m03_ha-959539-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-959539 cp testdata/cp-test.txt                                                | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n                                                                 | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-959539 cp ha-959539-m04:/home/docker/cp-test.txt                              | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4152452105/001/cp-test_ha-959539-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n                                                                 | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-959539 cp ha-959539-m04:/home/docker/cp-test.txt                              | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539:/home/docker/cp-test_ha-959539-m04_ha-959539.txt                       |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n                                                                 | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n ha-959539 sudo cat                                              | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | /home/docker/cp-test_ha-959539-m04_ha-959539.txt                                 |           |         |         |                     |                     |
	| cp      | ha-959539 cp ha-959539-m04:/home/docker/cp-test.txt                              | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m02:/home/docker/cp-test_ha-959539-m04_ha-959539-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n                                                                 | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n ha-959539-m02 sudo cat                                          | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | /home/docker/cp-test_ha-959539-m04_ha-959539-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-959539 cp ha-959539-m04:/home/docker/cp-test.txt                              | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m03:/home/docker/cp-test_ha-959539-m04_ha-959539-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n                                                                 | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n ha-959539-m03 sudo cat                                          | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | /home/docker/cp-test_ha-959539-m04_ha-959539-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-959539 node stop m02 -v=7                                                     | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 23:59:26
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 23:59:26.807239   26218 out.go:345] Setting OutFile to fd 1 ...
	I0923 23:59:26.807515   26218 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 23:59:26.807525   26218 out.go:358] Setting ErrFile to fd 2...
	I0923 23:59:26.807529   26218 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 23:59:26.807708   26218 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7623/.minikube/bin
	I0923 23:59:26.808255   26218 out.go:352] Setting JSON to false
	I0923 23:59:26.809081   26218 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2511,"bootTime":1727133456,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 23:59:26.809190   26218 start.go:139] virtualization: kvm guest
	I0923 23:59:26.811490   26218 out.go:177] * [ha-959539] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 23:59:26.813253   26218 notify.go:220] Checking for updates...
	I0923 23:59:26.813308   26218 out.go:177]   - MINIKUBE_LOCATION=19696
	I0923 23:59:26.814742   26218 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 23:59:26.816098   26218 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0923 23:59:26.817558   26218 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7623/.minikube
	I0923 23:59:26.818772   26218 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 23:59:26.819994   26218 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 23:59:26.821406   26218 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 23:59:26.856627   26218 out.go:177] * Using the kvm2 driver based on user configuration
	I0923 23:59:26.857800   26218 start.go:297] selected driver: kvm2
	I0923 23:59:26.857813   26218 start.go:901] validating driver "kvm2" against <nil>
	I0923 23:59:26.857824   26218 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 23:59:26.858493   26218 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 23:59:26.858582   26218 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19696-7623/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0923 23:59:26.873962   26218 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0923 23:59:26.874005   26218 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 23:59:26.874238   26218 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 23:59:26.874272   26218 cni.go:84] Creating CNI manager for ""
	I0923 23:59:26.874317   26218 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0923 23:59:26.874326   26218 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0923 23:59:26.874369   26218 start.go:340] cluster config:
	{Name:ha-959539 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-959539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 23:59:26.874490   26218 iso.go:125] acquiring lock: {Name:mk8a983e49e920eac32e0f79c34143c3d478115e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 23:59:26.876392   26218 out.go:177] * Starting "ha-959539" primary control-plane node in "ha-959539" cluster
	I0923 23:59:26.877566   26218 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 23:59:26.877605   26218 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0923 23:59:26.877627   26218 cache.go:56] Caching tarball of preloaded images
	I0923 23:59:26.877724   26218 preload.go:172] Found /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0923 23:59:26.877737   26218 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0923 23:59:26.878058   26218 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/config.json ...
	I0923 23:59:26.878079   26218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/config.json: {Name:mkb5e645fc53383c85997a2cb75a196eaec42645 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
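
	The profile is persisted as JSON at the config.json path shown above and holds the cluster config dumped a few lines earlier. A heavily trimmed sketch of writing such a profile; the struct is a hand-picked subset of the fields in the dump, not minikube's real schema:

	package main

	import (
		"encoding/json"
		"log"
		"os"
	)

	// Trimmed-down profile shape for illustration only; the real config.json has far more fields.
	type KubernetesConfig struct {
		KubernetesVersion string
		ClusterName       string
		ContainerRuntime  string
		NetworkPlugin     string
	}

	type ClusterConfig struct {
		Name             string
		Driver           string
		Memory           int
		CPUs             int
		DiskSize         int
		KubernetesConfig KubernetesConfig
	}

	func main() {
		cfg := ClusterConfig{
			Name:     "ha-959539",
			Driver:   "kvm2",
			Memory:   2200,
			CPUs:     2,
			DiskSize: 20000,
			KubernetesConfig: KubernetesConfig{
				KubernetesVersion: "v1.31.1",
				ClusterName:       "ha-959539",
				ContainerRuntime:  "crio",
				NetworkPlugin:     "cni",
			},
		}
		data, err := json.MarshalIndent(cfg, "", "  ")
		if err != nil {
			log.Fatal(err)
		}
		// Writing to the working directory here to stay side-effect free; the log saves under .minikube/profiles/ha-959539/.
		if err := os.WriteFile("config.json", data, 0o644); err != nil {
			log.Fatal(err)
		}
		log.Printf("wrote %d bytes of profile config", len(data))
	}
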
	I0923 23:59:26.878228   26218 start.go:360] acquireMachinesLock for ha-959539: {Name:mkdd0eb053efc5a47fd01c8410d7f603dccb8d0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 23:59:26.878263   26218 start.go:364] duration metric: took 19.539µs to acquireMachinesLock for "ha-959539"
	I0923 23:59:26.878286   26218 start.go:93] Provisioning new machine with config: &{Name:ha-959539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-959539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 23:59:26.878346   26218 start.go:125] createHost starting for "" (driver="kvm2")
	I0923 23:59:26.879811   26218 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 23:59:26.879957   26218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:59:26.879996   26218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:59:26.894584   26218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39565
	I0923 23:59:26.895047   26218 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:59:26.895660   26218 main.go:141] libmachine: Using API Version  1
	I0923 23:59:26.895681   26218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:59:26.896020   26218 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:59:26.896226   26218 main.go:141] libmachine: (ha-959539) Calling .GetMachineName
	I0923 23:59:26.896388   26218 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0923 23:59:26.896534   26218 start.go:159] libmachine.API.Create for "ha-959539" (driver="kvm2")
	I0923 23:59:26.896578   26218 client.go:168] LocalClient.Create starting
	I0923 23:59:26.896605   26218 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem
	I0923 23:59:26.896637   26218 main.go:141] libmachine: Decoding PEM data...
	I0923 23:59:26.896658   26218 main.go:141] libmachine: Parsing certificate...
	I0923 23:59:26.896703   26218 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem
	I0923 23:59:26.896727   26218 main.go:141] libmachine: Decoding PEM data...
	I0923 23:59:26.896739   26218 main.go:141] libmachine: Parsing certificate...
	I0923 23:59:26.896757   26218 main.go:141] libmachine: Running pre-create checks...
	I0923 23:59:26.896765   26218 main.go:141] libmachine: (ha-959539) Calling .PreCreateCheck
	I0923 23:59:26.897146   26218 main.go:141] libmachine: (ha-959539) Calling .GetConfigRaw
	I0923 23:59:26.897553   26218 main.go:141] libmachine: Creating machine...
	I0923 23:59:26.897565   26218 main.go:141] libmachine: (ha-959539) Calling .Create
	I0923 23:59:26.897712   26218 main.go:141] libmachine: (ha-959539) Creating KVM machine...
	I0923 23:59:26.899261   26218 main.go:141] libmachine: (ha-959539) DBG | found existing default KVM network
	I0923 23:59:26.899973   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:26.899836   26241 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002111e0}
	I0923 23:59:26.900022   26218 main.go:141] libmachine: (ha-959539) DBG | created network xml: 
	I0923 23:59:26.900042   26218 main.go:141] libmachine: (ha-959539) DBG | <network>
	I0923 23:59:26.900051   26218 main.go:141] libmachine: (ha-959539) DBG |   <name>mk-ha-959539</name>
	I0923 23:59:26.900066   26218 main.go:141] libmachine: (ha-959539) DBG |   <dns enable='no'/>
	I0923 23:59:26.900077   26218 main.go:141] libmachine: (ha-959539) DBG |   
	I0923 23:59:26.900085   26218 main.go:141] libmachine: (ha-959539) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0923 23:59:26.900097   26218 main.go:141] libmachine: (ha-959539) DBG |     <dhcp>
	I0923 23:59:26.900105   26218 main.go:141] libmachine: (ha-959539) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0923 23:59:26.900116   26218 main.go:141] libmachine: (ha-959539) DBG |     </dhcp>
	I0923 23:59:26.900122   26218 main.go:141] libmachine: (ha-959539) DBG |   </ip>
	I0923 23:59:26.900132   26218 main.go:141] libmachine: (ha-959539) DBG |   
	I0923 23:59:26.900140   26218 main.go:141] libmachine: (ha-959539) DBG | </network>
	I0923 23:59:26.900211   26218 main.go:141] libmachine: (ha-959539) DBG | 
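
	The XML above is the private libvirt network the kvm2 driver creates for the cluster (a bridge with a DHCP range of 192.168.39.2-253). A rough sketch of defining and starting the same network by hand with virsh, driven from Go purely for illustration; the file name net.xml is hypothetical and the driver itself talks to libvirt directly rather than shelling out:

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// net.xml is assumed to contain the <network> definition printed in the log above.
		for _, args := range [][]string{
			{"net-define", "net.xml"},         // register the persistent network
			{"net-start", "mk-ha-959539"},     // bring it up (creates the bridge and dnsmasq range)
			{"net-autostart", "mk-ha-959539"}, // start it automatically with libvirtd
		} {
			out, err := exec.Command("virsh", append([]string{"-c", "qemu:///system"}, args...)...).CombinedOutput()
			if err != nil {
				log.Fatalf("virsh %v: %v\n%s", args, err, out)
			}
		}
	}
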
	I0923 23:59:26.905213   26218 main.go:141] libmachine: (ha-959539) DBG | trying to create private KVM network mk-ha-959539 192.168.39.0/24...
	I0923 23:59:26.977916   26218 main.go:141] libmachine: (ha-959539) Setting up store path in /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539 ...
	I0923 23:59:26.977955   26218 main.go:141] libmachine: (ha-959539) Building disk image from file:///home/jenkins/minikube-integration/19696-7623/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0923 23:59:26.977972   26218 main.go:141] libmachine: (ha-959539) DBG | private KVM network mk-ha-959539 192.168.39.0/24 created
	I0923 23:59:26.977988   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:26.977847   26241 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19696-7623/.minikube
	I0923 23:59:26.978009   26218 main.go:141] libmachine: (ha-959539) Downloading /home/jenkins/minikube-integration/19696-7623/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19696-7623/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0923 23:59:27.232339   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:27.232194   26241 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/id_rsa...
	I0923 23:59:27.673404   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:27.673251   26241 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/ha-959539.rawdisk...
	I0923 23:59:27.673433   26218 main.go:141] libmachine: (ha-959539) DBG | Writing magic tar header
	I0923 23:59:27.673445   26218 main.go:141] libmachine: (ha-959539) DBG | Writing SSH key tar header
	I0923 23:59:27.673465   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:27.673358   26241 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539 ...
	I0923 23:59:27.673485   26218 main.go:141] libmachine: (ha-959539) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539 (perms=drwx------)
	I0923 23:59:27.673503   26218 main.go:141] libmachine: (ha-959539) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539
	I0923 23:59:27.673514   26218 main.go:141] libmachine: (ha-959539) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623/.minikube/machines (perms=drwxr-xr-x)
	I0923 23:59:27.673524   26218 main.go:141] libmachine: (ha-959539) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623/.minikube (perms=drwxr-xr-x)
	I0923 23:59:27.673532   26218 main.go:141] libmachine: (ha-959539) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623 (perms=drwxrwxr-x)
	I0923 23:59:27.673541   26218 main.go:141] libmachine: (ha-959539) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0923 23:59:27.673551   26218 main.go:141] libmachine: (ha-959539) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623/.minikube/machines
	I0923 23:59:27.673563   26218 main.go:141] libmachine: (ha-959539) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623/.minikube
	I0923 23:59:27.673577   26218 main.go:141] libmachine: (ha-959539) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623
	I0923 23:59:27.673589   26218 main.go:141] libmachine: (ha-959539) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0923 23:59:27.673598   26218 main.go:141] libmachine: (ha-959539) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0923 23:59:27.673607   26218 main.go:141] libmachine: (ha-959539) Creating domain...
	I0923 23:59:27.673616   26218 main.go:141] libmachine: (ha-959539) DBG | Checking permissions on dir: /home/jenkins
	I0923 23:59:27.673623   26218 main.go:141] libmachine: (ha-959539) DBG | Checking permissions on dir: /home
	I0923 23:59:27.673640   26218 main.go:141] libmachine: (ha-959539) DBG | Skipping /home - not owner
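
	The "Setting executable bit" / "Checking permissions" lines above walk from the machine directory up toward /, adding the search bit on every directory the current user owns and stopping at the first one it does not own (/home here). A small sketch of that walk under those assumptions; it is not minikube's code:

	package main

	import (
		"log"
		"os"
		"path/filepath"
		"syscall"
	)

	// ensureTraversable adds the executable (search) bit up the directory chain,
	// stopping at the first directory the current user does not own.
	func ensureTraversable(dir string) {
		for ; dir != "/" && dir != "."; dir = filepath.Dir(dir) {
			info, err := os.Stat(dir)
			if err != nil {
				log.Printf("stat %s: %v", dir, err)
				return
			}
			st, ok := info.Sys().(*syscall.Stat_t)
			if !ok || int(st.Uid) != os.Getuid() {
				log.Printf("Skipping %s - not owner", dir)
				return
			}
			if err := os.Chmod(dir, info.Mode().Perm()|0o111); err != nil {
				log.Printf("chmod %s: %v", dir, err)
				return
			}
		}
	}

	func main() {
		ensureTraversable("/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539")
	}
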
	I0923 23:59:27.674680   26218 main.go:141] libmachine: (ha-959539) define libvirt domain using xml: 
	I0923 23:59:27.674695   26218 main.go:141] libmachine: (ha-959539) <domain type='kvm'>
	I0923 23:59:27.674701   26218 main.go:141] libmachine: (ha-959539)   <name>ha-959539</name>
	I0923 23:59:27.674705   26218 main.go:141] libmachine: (ha-959539)   <memory unit='MiB'>2200</memory>
	I0923 23:59:27.674740   26218 main.go:141] libmachine: (ha-959539)   <vcpu>2</vcpu>
	I0923 23:59:27.674764   26218 main.go:141] libmachine: (ha-959539)   <features>
	I0923 23:59:27.674777   26218 main.go:141] libmachine: (ha-959539)     <acpi/>
	I0923 23:59:27.674788   26218 main.go:141] libmachine: (ha-959539)     <apic/>
	I0923 23:59:27.674801   26218 main.go:141] libmachine: (ha-959539)     <pae/>
	I0923 23:59:27.674828   26218 main.go:141] libmachine: (ha-959539)     
	I0923 23:59:27.674851   26218 main.go:141] libmachine: (ha-959539)   </features>
	I0923 23:59:27.674870   26218 main.go:141] libmachine: (ha-959539)   <cpu mode='host-passthrough'>
	I0923 23:59:27.674879   26218 main.go:141] libmachine: (ha-959539)   
	I0923 23:59:27.674889   26218 main.go:141] libmachine: (ha-959539)   </cpu>
	I0923 23:59:27.674905   26218 main.go:141] libmachine: (ha-959539)   <os>
	I0923 23:59:27.674917   26218 main.go:141] libmachine: (ha-959539)     <type>hvm</type>
	I0923 23:59:27.674943   26218 main.go:141] libmachine: (ha-959539)     <boot dev='cdrom'/>
	I0923 23:59:27.674960   26218 main.go:141] libmachine: (ha-959539)     <boot dev='hd'/>
	I0923 23:59:27.674974   26218 main.go:141] libmachine: (ha-959539)     <bootmenu enable='no'/>
	I0923 23:59:27.674985   26218 main.go:141] libmachine: (ha-959539)   </os>
	I0923 23:59:27.674997   26218 main.go:141] libmachine: (ha-959539)   <devices>
	I0923 23:59:27.675009   26218 main.go:141] libmachine: (ha-959539)     <disk type='file' device='cdrom'>
	I0923 23:59:27.675024   26218 main.go:141] libmachine: (ha-959539)       <source file='/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/boot2docker.iso'/>
	I0923 23:59:27.675037   26218 main.go:141] libmachine: (ha-959539)       <target dev='hdc' bus='scsi'/>
	I0923 23:59:27.675049   26218 main.go:141] libmachine: (ha-959539)       <readonly/>
	I0923 23:59:27.675060   26218 main.go:141] libmachine: (ha-959539)     </disk>
	I0923 23:59:27.675075   26218 main.go:141] libmachine: (ha-959539)     <disk type='file' device='disk'>
	I0923 23:59:27.675088   26218 main.go:141] libmachine: (ha-959539)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0923 23:59:27.675111   26218 main.go:141] libmachine: (ha-959539)       <source file='/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/ha-959539.rawdisk'/>
	I0923 23:59:27.675127   26218 main.go:141] libmachine: (ha-959539)       <target dev='hda' bus='virtio'/>
	I0923 23:59:27.675141   26218 main.go:141] libmachine: (ha-959539)     </disk>
	I0923 23:59:27.675152   26218 main.go:141] libmachine: (ha-959539)     <interface type='network'>
	I0923 23:59:27.675165   26218 main.go:141] libmachine: (ha-959539)       <source network='mk-ha-959539'/>
	I0923 23:59:27.675175   26218 main.go:141] libmachine: (ha-959539)       <model type='virtio'/>
	I0923 23:59:27.675185   26218 main.go:141] libmachine: (ha-959539)     </interface>
	I0923 23:59:27.675192   26218 main.go:141] libmachine: (ha-959539)     <interface type='network'>
	I0923 23:59:27.675201   26218 main.go:141] libmachine: (ha-959539)       <source network='default'/>
	I0923 23:59:27.675206   26218 main.go:141] libmachine: (ha-959539)       <model type='virtio'/>
	I0923 23:59:27.675210   26218 main.go:141] libmachine: (ha-959539)     </interface>
	I0923 23:59:27.675217   26218 main.go:141] libmachine: (ha-959539)     <serial type='pty'>
	I0923 23:59:27.675222   26218 main.go:141] libmachine: (ha-959539)       <target port='0'/>
	I0923 23:59:27.675228   26218 main.go:141] libmachine: (ha-959539)     </serial>
	I0923 23:59:27.675247   26218 main.go:141] libmachine: (ha-959539)     <console type='pty'>
	I0923 23:59:27.675254   26218 main.go:141] libmachine: (ha-959539)       <target type='serial' port='0'/>
	I0923 23:59:27.675259   26218 main.go:141] libmachine: (ha-959539)     </console>
	I0923 23:59:27.675262   26218 main.go:141] libmachine: (ha-959539)     <rng model='virtio'>
	I0923 23:59:27.675273   26218 main.go:141] libmachine: (ha-959539)       <backend model='random'>/dev/random</backend>
	I0923 23:59:27.675279   26218 main.go:141] libmachine: (ha-959539)     </rng>
	I0923 23:59:27.675284   26218 main.go:141] libmachine: (ha-959539)     
	I0923 23:59:27.675289   26218 main.go:141] libmachine: (ha-959539)     
	I0923 23:59:27.675306   26218 main.go:141] libmachine: (ha-959539)   </devices>
	I0923 23:59:27.675324   26218 main.go:141] libmachine: (ha-959539) </domain>
	I0923 23:59:27.675341   26218 main.go:141] libmachine: (ha-959539) 
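
	The domain XML printed above is then registered with libvirt and booted. A minimal sketch using the libvirt Go bindings (libvirt.org/go/libvirt, which require cgo and the libvirt development headers); reading the XML from a local file named ha-959539.xml is an assumption made for the example:

	package main

	import (
		"log"
		"os"

		libvirt "libvirt.org/go/libvirt"
	)

	func main() {
		xml, err := os.ReadFile("ha-959539.xml") // assumed to hold the <domain> XML from the log
		if err != nil {
			log.Fatal(err)
		}
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		dom, err := conn.DomainDefineXML(string(xml)) // persistent definition ("define libvirt domain using xml")
		if err != nil {
			log.Fatal(err)
		}
		defer dom.Free()

		if err := dom.Create(); err != nil { // boots the domain ("Creating domain...")
			log.Fatal(err)
		}
		log.Println("domain ha-959539 started; waiting for a DHCP lease to learn its IP")
	}
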
	I0923 23:59:27.679682   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:f8:7e:29 in network default
	I0923 23:59:27.680257   26218 main.go:141] libmachine: (ha-959539) Ensuring networks are active...
	I0923 23:59:27.680301   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:27.680992   26218 main.go:141] libmachine: (ha-959539) Ensuring network default is active
	I0923 23:59:27.681339   26218 main.go:141] libmachine: (ha-959539) Ensuring network mk-ha-959539 is active
	I0923 23:59:27.681827   26218 main.go:141] libmachine: (ha-959539) Getting domain xml...
	I0923 23:59:27.682529   26218 main.go:141] libmachine: (ha-959539) Creating domain...
	I0923 23:59:28.880638   26218 main.go:141] libmachine: (ha-959539) Waiting to get IP...
	I0923 23:59:28.881412   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:28.881793   26218 main.go:141] libmachine: (ha-959539) DBG | unable to find current IP address of domain ha-959539 in network mk-ha-959539
	I0923 23:59:28.881827   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:28.881764   26241 retry.go:31] will retry after 258.264646ms: waiting for machine to come up
	I0923 23:59:29.141441   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:29.141781   26218 main.go:141] libmachine: (ha-959539) DBG | unable to find current IP address of domain ha-959539 in network mk-ha-959539
	I0923 23:59:29.141818   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:29.141725   26241 retry.go:31] will retry after 275.827745ms: waiting for machine to come up
	I0923 23:59:29.419197   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:29.419582   26218 main.go:141] libmachine: (ha-959539) DBG | unable to find current IP address of domain ha-959539 in network mk-ha-959539
	I0923 23:59:29.419610   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:29.419535   26241 retry.go:31] will retry after 461.76652ms: waiting for machine to come up
	I0923 23:59:29.883216   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:29.883789   26218 main.go:141] libmachine: (ha-959539) DBG | unable to find current IP address of domain ha-959539 in network mk-ha-959539
	I0923 23:59:29.883811   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:29.883726   26241 retry.go:31] will retry after 445.570936ms: waiting for machine to come up
	I0923 23:59:30.331342   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:30.331760   26218 main.go:141] libmachine: (ha-959539) DBG | unable to find current IP address of domain ha-959539 in network mk-ha-959539
	I0923 23:59:30.331789   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:30.331719   26241 retry.go:31] will retry after 749.255419ms: waiting for machine to come up
	I0923 23:59:31.082478   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:31.082950   26218 main.go:141] libmachine: (ha-959539) DBG | unable to find current IP address of domain ha-959539 in network mk-ha-959539
	I0923 23:59:31.082971   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:31.082889   26241 retry.go:31] will retry after 773.348958ms: waiting for machine to come up
	I0923 23:59:31.857788   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:31.858274   26218 main.go:141] libmachine: (ha-959539) DBG | unable to find current IP address of domain ha-959539 in network mk-ha-959539
	I0923 23:59:31.858300   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:31.858204   26241 retry.go:31] will retry after 752.285326ms: waiting for machine to come up
	I0923 23:59:32.611583   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:32.612075   26218 main.go:141] libmachine: (ha-959539) DBG | unable to find current IP address of domain ha-959539 in network mk-ha-959539
	I0923 23:59:32.612098   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:32.612034   26241 retry.go:31] will retry after 1.137504115s: waiting for machine to come up
	I0923 23:59:33.751665   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:33.751976   26218 main.go:141] libmachine: (ha-959539) DBG | unable to find current IP address of domain ha-959539 in network mk-ha-959539
	I0923 23:59:33.752009   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:33.751932   26241 retry.go:31] will retry after 1.241947238s: waiting for machine to come up
	I0923 23:59:34.995017   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:34.995386   26218 main.go:141] libmachine: (ha-959539) DBG | unable to find current IP address of domain ha-959539 in network mk-ha-959539
	I0923 23:59:34.995400   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:34.995360   26241 retry.go:31] will retry after 1.449064591s: waiting for machine to come up
	I0923 23:59:36.446933   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:36.447337   26218 main.go:141] libmachine: (ha-959539) DBG | unable to find current IP address of domain ha-959539 in network mk-ha-959539
	I0923 23:59:36.447388   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:36.447302   26241 retry.go:31] will retry after 2.693587186s: waiting for machine to come up
	I0923 23:59:39.144265   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:39.144685   26218 main.go:141] libmachine: (ha-959539) DBG | unable to find current IP address of domain ha-959539 in network mk-ha-959539
	I0923 23:59:39.144701   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:39.144641   26241 retry.go:31] will retry after 2.637044367s: waiting for machine to come up
	I0923 23:59:41.785491   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:41.785902   26218 main.go:141] libmachine: (ha-959539) DBG | unable to find current IP address of domain ha-959539 in network mk-ha-959539
	I0923 23:59:41.785918   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:41.785859   26241 retry.go:31] will retry after 4.357362487s: waiting for machine to come up
	I0923 23:59:46.147970   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:46.148484   26218 main.go:141] libmachine: (ha-959539) DBG | unable to find current IP address of domain ha-959539 in network mk-ha-959539
	I0923 23:59:46.148509   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:46.148440   26241 retry.go:31] will retry after 4.358423196s: waiting for machine to come up
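
	The repeated "unable to find current IP address ... will retry after ..." lines come from polling the network's DHCP leases with a growing, jittered delay until the new MAC address acquires an address. A generic sketch of that retry pattern; the backoff constants and helper names are illustrative, not minikube's retry package:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	var errNoIP = errors.New("unable to find current IP address")

	// waitForIP polls lookup() with a growing, jittered delay until it returns an IP
	// or the overall deadline passes.
	func waitForIP(lookup func() (string, error), deadline time.Duration) (string, error) {
		start := time.Now()
		delay := 250 * time.Millisecond
		for time.Since(start) < deadline {
			if ip, err := lookup(); err == nil {
				return ip, nil
			}
			wait := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
			time.Sleep(wait)
			if delay < 4*time.Second {
				delay *= 2
			}
		}
		return "", errNoIP
	}

	func main() {
		// Stub lookup that "finds" the address after a few attempts, for illustration only.
		attempts := 0
		ip, err := waitForIP(func() (string, error) {
			if attempts++; attempts < 4 {
				return "", errNoIP
			}
			return "192.168.39.231", nil
		}, time.Minute)
		fmt.Println(ip, err)
	}
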
	I0923 23:59:50.510236   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:50.510860   26218 main.go:141] libmachine: (ha-959539) Found IP for machine: 192.168.39.231
	I0923 23:59:50.510881   26218 main.go:141] libmachine: (ha-959539) Reserving static IP address...
	I0923 23:59:50.510893   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has current primary IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:50.511347   26218 main.go:141] libmachine: (ha-959539) DBG | unable to find host DHCP lease matching {name: "ha-959539", mac: "52:54:00:99:17:69", ip: "192.168.39.231"} in network mk-ha-959539
	I0923 23:59:50.583983   26218 main.go:141] libmachine: (ha-959539) DBG | Getting to WaitForSSH function...
	I0923 23:59:50.584012   26218 main.go:141] libmachine: (ha-959539) Reserved static IP address: 192.168.39.231
	I0923 23:59:50.584024   26218 main.go:141] libmachine: (ha-959539) Waiting for SSH to be available...
	I0923 23:59:50.587176   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:50.587581   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:minikube Clientid:01:52:54:00:99:17:69}
	I0923 23:59:50.587613   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:50.587727   26218 main.go:141] libmachine: (ha-959539) DBG | Using SSH client type: external
	I0923 23:59:50.587740   26218 main.go:141] libmachine: (ha-959539) DBG | Using SSH private key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/id_rsa (-rw-------)
	I0923 23:59:50.587808   26218 main.go:141] libmachine: (ha-959539) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.231 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0923 23:59:50.587835   26218 main.go:141] libmachine: (ha-959539) DBG | About to run SSH command:
	I0923 23:59:50.587849   26218 main.go:141] libmachine: (ha-959539) DBG | exit 0
	I0923 23:59:50.716142   26218 main.go:141] libmachine: (ha-959539) DBG | SSH cmd err, output: <nil>: 
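
	With an address reserved, the driver probes the guest by shelling out to the system ssh binary with the options shown above and running "exit 0" until it succeeds. A minimal sketch of that probe; the helper name is made up and the host and key path are copied from the log:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	// probeSSH runs `exit 0` over ssh with host-key checking disabled, mirroring the
	// external SSH client command in the log, and reports whether the guest answers.
	func probeSSH(host, keyPath string) error {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "PasswordAuthentication=no",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"-p", "22",
			"docker@" + host,
			"exit 0",
		}
		if out, err := exec.Command("ssh", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("ssh probe failed: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		if err := probeSSH("192.168.39.231",
			"/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/id_rsa"); err != nil {
			log.Fatal(err)
		}
		log.Println("SSH is available")
	}
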
	I0923 23:59:50.716469   26218 main.go:141] libmachine: (ha-959539) KVM machine creation complete!
	I0923 23:59:50.716772   26218 main.go:141] libmachine: (ha-959539) Calling .GetConfigRaw
	I0923 23:59:50.717437   26218 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0923 23:59:50.717627   26218 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0923 23:59:50.717783   26218 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0923 23:59:50.717794   26218 main.go:141] libmachine: (ha-959539) Calling .GetState
	I0923 23:59:50.719003   26218 main.go:141] libmachine: Detecting operating system of created instance...
	I0923 23:59:50.719017   26218 main.go:141] libmachine: Waiting for SSH to be available...
	I0923 23:59:50.719040   26218 main.go:141] libmachine: Getting to WaitForSSH function...
	I0923 23:59:50.719051   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0923 23:59:50.721609   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:50.721907   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0923 23:59:50.721928   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:50.722195   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0923 23:59:50.722412   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0923 23:59:50.722565   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0923 23:59:50.722658   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0923 23:59:50.722805   26218 main.go:141] libmachine: Using SSH client type: native
	I0923 23:59:50.723011   26218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0923 23:59:50.723021   26218 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0923 23:59:50.835498   26218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 23:59:50.835520   26218 main.go:141] libmachine: Detecting the provisioner...
	I0923 23:59:50.835527   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0923 23:59:50.838284   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:50.838621   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0923 23:59:50.838642   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:50.838906   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0923 23:59:50.839085   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0923 23:59:50.839257   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0923 23:59:50.839424   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0923 23:59:50.839565   26218 main.go:141] libmachine: Using SSH client type: native
	I0923 23:59:50.839743   26218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0923 23:59:50.839754   26218 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0923 23:59:50.953371   26218 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0923 23:59:50.953486   26218 main.go:141] libmachine: found compatible host: buildroot
	I0923 23:59:50.953499   26218 main.go:141] libmachine: Provisioning with buildroot...
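
	Provisioner detection is simply "cat /etc/os-release" followed by matching the ID/NAME fields, which resolve to Buildroot here. A small sketch of that parse (the map-based helper is an assumption):

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	// parseOSRelease turns the KEY=value lines of /etc/os-release into a map,
	// trimming the optional quotes around values.
	func parseOSRelease(contents string) map[string]string {
		out := map[string]string{}
		sc := bufio.NewScanner(strings.NewReader(contents))
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if line == "" || strings.HasPrefix(line, "#") {
				continue
			}
			if k, v, ok := strings.Cut(line, "="); ok {
				out[k] = strings.Trim(v, `"`)
			}
		}
		return out
	}

	func main() {
		sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
		osr := parseOSRelease(sample)
		if osr["ID"] == "buildroot" {
			fmt.Println("found compatible host: buildroot")
		}
	}
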
	I0923 23:59:50.953509   26218 main.go:141] libmachine: (ha-959539) Calling .GetMachineName
	I0923 23:59:50.953724   26218 buildroot.go:166] provisioning hostname "ha-959539"
	I0923 23:59:50.953757   26218 main.go:141] libmachine: (ha-959539) Calling .GetMachineName
	I0923 23:59:50.953954   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0923 23:59:50.956724   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:50.957082   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0923 23:59:50.957105   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:50.957309   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0923 23:59:50.957497   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0923 23:59:50.957638   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0923 23:59:50.957763   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0923 23:59:50.957932   26218 main.go:141] libmachine: Using SSH client type: native
	I0923 23:59:50.958118   26218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0923 23:59:50.958139   26218 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-959539 && echo "ha-959539" | sudo tee /etc/hostname
	I0923 23:59:51.087322   26218 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-959539
	
	I0923 23:59:51.087357   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0923 23:59:51.090134   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:51.090488   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0923 23:59:51.090514   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:51.090720   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0923 23:59:51.090906   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0923 23:59:51.091125   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0923 23:59:51.091383   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0923 23:59:51.091616   26218 main.go:141] libmachine: Using SSH client type: native
	I0923 23:59:51.091783   26218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0923 23:59:51.091798   26218 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-959539' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-959539/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-959539' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 23:59:51.216710   26218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 23:59:51.216741   26218 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19696-7623/.minikube CaCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19696-7623/.minikube}
	I0923 23:59:51.216763   26218 buildroot.go:174] setting up certificates
	I0923 23:59:51.216772   26218 provision.go:84] configureAuth start
	I0923 23:59:51.216781   26218 main.go:141] libmachine: (ha-959539) Calling .GetMachineName
	I0923 23:59:51.217050   26218 main.go:141] libmachine: (ha-959539) Calling .GetIP
	I0923 23:59:51.219973   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:51.220311   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0923 23:59:51.220350   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:51.220472   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0923 23:59:51.223154   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:51.223541   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0923 23:59:51.223574   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:51.223732   26218 provision.go:143] copyHostCerts
	I0923 23:59:51.223760   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0923 23:59:51.223790   26218 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem, removing ...
	I0923 23:59:51.223807   26218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0923 23:59:51.223875   26218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem (1082 bytes)
	I0923 23:59:51.223951   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0923 23:59:51.223969   26218 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem, removing ...
	I0923 23:59:51.223976   26218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0923 23:59:51.223999   26218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem (1123 bytes)
	I0923 23:59:51.224038   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0923 23:59:51.224055   26218 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem, removing ...
	I0923 23:59:51.224060   26218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0923 23:59:51.224079   26218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem (1679 bytes)
	I0923 23:59:51.224140   26218 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem org=jenkins.ha-959539 san=[127.0.0.1 192.168.39.231 ha-959539 localhost minikube]
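
	The server certificate above is signed by the profile's CA with the organization and SANs listed in the log line. A compact sketch of the equivalent with crypto/x509; the key sizes, validity periods, and in-memory CA are simplifications so the example stays self-contained:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// CA key pair generated here for a self-contained example; minikube loads ca.pem/ca-key.pem instead.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate with the organization and SANs from the log line above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-959539"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.231")},
			DNSNames:     []string{"ha-959539", "localhost", "minikube"},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}
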
	I0923 23:59:51.458115   26218 provision.go:177] copyRemoteCerts
	I0923 23:59:51.458172   26218 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 23:59:51.458199   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0923 23:59:51.461001   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:51.461333   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0923 23:59:51.461358   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:51.461510   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0923 23:59:51.461701   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0923 23:59:51.461849   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0923 23:59:51.461970   26218 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/id_rsa Username:docker}
	I0923 23:59:51.550490   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0923 23:59:51.550562   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0923 23:59:51.574382   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0923 23:59:51.574471   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0923 23:59:51.597413   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0923 23:59:51.597507   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0923 23:59:51.620181   26218 provision.go:87] duration metric: took 403.395464ms to configureAuth
	I0923 23:59:51.620213   26218 buildroot.go:189] setting minikube options for container-runtime
	I0923 23:59:51.620452   26218 config.go:182] Loaded profile config "ha-959539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 23:59:51.620525   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0923 23:59:51.623330   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:51.623655   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0923 23:59:51.623683   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:51.623826   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0923 23:59:51.624031   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0923 23:59:51.624209   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0923 23:59:51.624360   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0923 23:59:51.624502   26218 main.go:141] libmachine: Using SSH client type: native
	I0923 23:59:51.624659   26218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0923 23:59:51.624677   26218 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0923 23:59:51.851847   26218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0923 23:59:51.851876   26218 main.go:141] libmachine: Checking connection to Docker...
	I0923 23:59:51.851883   26218 main.go:141] libmachine: (ha-959539) Calling .GetURL
	I0923 23:59:51.853119   26218 main.go:141] libmachine: (ha-959539) DBG | Using libvirt version 6000000
	I0923 23:59:51.855099   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:51.855420   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0923 23:59:51.855446   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:51.855586   26218 main.go:141] libmachine: Docker is up and running!
	I0923 23:59:51.855598   26218 main.go:141] libmachine: Reticulating splines...
	I0923 23:59:51.855605   26218 client.go:171] duration metric: took 24.959018357s to LocalClient.Create
	I0923 23:59:51.855625   26218 start.go:167] duration metric: took 24.959098074s to libmachine.API.Create "ha-959539"
	I0923 23:59:51.855634   26218 start.go:293] postStartSetup for "ha-959539" (driver="kvm2")
	I0923 23:59:51.855643   26218 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 23:59:51.855656   26218 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0923 23:59:51.855887   26218 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 23:59:51.855913   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0923 23:59:51.858133   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:51.858438   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0923 23:59:51.858461   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:51.858627   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0923 23:59:51.858801   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0923 23:59:51.858953   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0923 23:59:51.859096   26218 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/id_rsa Username:docker}
	I0923 23:59:51.946855   26218 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 23:59:51.950980   26218 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 23:59:51.951009   26218 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/addons for local assets ...
	I0923 23:59:51.951065   26218 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/files for local assets ...
	I0923 23:59:51.951158   26218 filesync.go:149] local asset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> 147932.pem in /etc/ssl/certs
	I0923 23:59:51.951168   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> /etc/ssl/certs/147932.pem
	I0923 23:59:51.951319   26218 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 23:59:51.960703   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /etc/ssl/certs/147932.pem (1708 bytes)
	I0923 23:59:51.984127   26218 start.go:296] duration metric: took 128.479072ms for postStartSetup
	I0923 23:59:51.984203   26218 main.go:141] libmachine: (ha-959539) Calling .GetConfigRaw
	I0923 23:59:51.984890   26218 main.go:141] libmachine: (ha-959539) Calling .GetIP
	I0923 23:59:51.987429   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:51.987719   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0923 23:59:51.987746   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:51.987964   26218 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/config.json ...
	I0923 23:59:51.988154   26218 start.go:128] duration metric: took 25.109799181s to createHost
	I0923 23:59:51.988175   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0923 23:59:51.990588   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:51.990906   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0923 23:59:51.990929   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:51.991056   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0923 23:59:51.991238   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0923 23:59:51.991353   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0923 23:59:51.991456   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0923 23:59:51.991563   26218 main.go:141] libmachine: Using SSH client type: native
	I0923 23:59:51.991778   26218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0923 23:59:51.991794   26218 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 23:59:52.105105   26218 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727135992.084651186
	
	I0923 23:59:52.105126   26218 fix.go:216] guest clock: 1727135992.084651186
	I0923 23:59:52.105133   26218 fix.go:229] Guest: 2024-09-23 23:59:52.084651186 +0000 UTC Remote: 2024-09-23 23:59:51.988165076 +0000 UTC m=+25.216110625 (delta=96.48611ms)
	I0923 23:59:52.105151   26218 fix.go:200] guest clock delta is within tolerance: 96.48611ms
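
	The fix.go lines compare the guest's "date +%s.%N" output against the host clock and only resynchronize when the drift exceeds a tolerance. A minimal sketch of that comparison; the one-second threshold is an assumed value for illustration:

	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	// guestClockDelta parses the guest's `date +%s.%N` output and returns how far
	// it drifts from the given host reference time.
	func guestClockDelta(output string, host time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(output, 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return guest.Sub(host), nil
	}

	func main() {
		host := time.Unix(0, 1727135991988165076) // "Remote" timestamp from the log
		delta, err := guestClockDelta("1727135992.084651186", host)
		if err != nil {
			panic(err)
		}
		const tolerance = time.Second // assumed threshold
		fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta.Abs() < tolerance)
	}
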
	I0923 23:59:52.105156   26218 start.go:83] releasing machines lock for "ha-959539", held for 25.226882318s
	I0923 23:59:52.105171   26218 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0923 23:59:52.105409   26218 main.go:141] libmachine: (ha-959539) Calling .GetIP
	I0923 23:59:52.108347   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:52.108704   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0923 23:59:52.108728   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:52.108925   26218 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0923 23:59:52.109448   26218 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0923 23:59:52.109621   26218 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0923 23:59:52.109725   26218 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 23:59:52.109775   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0923 23:59:52.109834   26218 ssh_runner.go:195] Run: cat /version.json
	I0923 23:59:52.109859   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0923 23:59:52.112538   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:52.112714   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:52.112781   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0923 23:59:52.112818   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:52.112933   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0923 23:59:52.113055   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0923 23:59:52.113086   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:52.113164   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0923 23:59:52.113281   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0923 23:59:52.113341   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0923 23:59:52.113438   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0923 23:59:52.113503   26218 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/id_rsa Username:docker}
	I0923 23:59:52.113559   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0923 23:59:52.113735   26218 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/id_rsa Username:docker}
	I0923 23:59:52.193560   26218 ssh_runner.go:195] Run: systemctl --version
	I0923 23:59:52.235438   26218 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0923 23:59:52.389606   26218 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 23:59:52.396083   26218 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 23:59:52.396147   26218 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 23:59:52.413066   26218 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 23:59:52.413095   26218 start.go:495] detecting cgroup driver to use...
	I0923 23:59:52.413158   26218 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 23:59:52.429335   26218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 23:59:52.443813   26218 docker.go:217] disabling cri-docker service (if available) ...
	I0923 23:59:52.443866   26218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 23:59:52.457675   26218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 23:59:52.471149   26218 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 23:59:52.585355   26218 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 23:59:52.737118   26218 docker.go:233] disabling docker service ...
	I0923 23:59:52.737174   26218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 23:59:52.752411   26218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 23:59:52.765194   26218 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 23:59:52.901170   26218 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 23:59:53.018250   26218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 23:59:53.031932   26218 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 23:59:53.049015   26218 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0923 23:59:53.049085   26218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 23:59:53.058948   26218 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0923 23:59:53.059015   26218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 23:59:53.069147   26218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 23:59:53.079197   26218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 23:59:53.089022   26218 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 23:59:53.100410   26218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 23:59:53.111370   26218 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 23:59:53.128755   26218 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
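
The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image, switch the cgroup manager to cgroupfs, drop conmon into the pod cgroup, and open unprivileged ports via a default sysctl. Assuming the stock drop-in layout shipped on the minikube ISO, the resulting file contains roughly the keys below; this is an illustrative reconstruction, not a capture of the actual file:

    # /etc/crio/crio.conf.d/02-crio.conf (reconstructed from the commands above)
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
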
	I0923 23:59:53.138944   26218 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 23:59:53.149267   26218 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 23:59:53.149363   26218 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 23:59:53.163279   26218 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 23:59:53.173965   26218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 23:59:53.305956   26218 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0923 23:59:53.410170   26218 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0923 23:59:53.410232   26218 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0923 23:59:53.415034   26218 start.go:563] Will wait 60s for crictl version
	I0923 23:59:53.415112   26218 ssh_runner.go:195] Run: which crictl
	I0923 23:59:53.418927   26218 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 23:59:53.464205   26218 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0923 23:59:53.464285   26218 ssh_runner.go:195] Run: crio --version
	I0923 23:59:53.494495   26218 ssh_runner.go:195] Run: crio --version
	I0923 23:59:53.523488   26218 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0923 23:59:53.524781   26218 main.go:141] libmachine: (ha-959539) Calling .GetIP
	I0923 23:59:53.527608   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:53.527945   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0923 23:59:53.527972   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:53.528223   26218 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0923 23:59:53.532189   26218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
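
The bash one-liner above keeps the host.minikube.internal mapping idempotent: it filters any existing entry out of /etc/hosts and appends a fresh one. A rough Go equivalent of that filter-and-append pattern, using a hypothetical ensureHostsEntry helper rather than minikube's own code:

    package main

    import (
    	"os"
    	"strings"
    )

    // ensureHostsEntry drops any existing line for hostname from the hosts file
    // and appends "ip<TAB>hostname", mirroring the grep -v / echo / cp pipeline
    // in the log above.
    func ensureHostsEntry(path, ip, hostname string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+hostname) {
    			continue // stale mapping, drop it
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+hostname)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	// The log above targets /etc/hosts with 192.168.39.1 -> host.minikube.internal;
    	// a scratch file is used here so the example can run unprivileged.
    	_ = ensureHostsEntry("hosts.test", "192.168.39.1", "host.minikube.internal")
    }
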
	I0923 23:59:53.544235   26218 kubeadm.go:883] updating cluster {Name:ha-959539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-959539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 23:59:53.544347   26218 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 23:59:53.544395   26218 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 23:59:53.574815   26218 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0923 23:59:53.574879   26218 ssh_runner.go:195] Run: which lz4
	I0923 23:59:53.578616   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0923 23:59:53.578693   26218 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0923 23:59:53.582683   26218 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0923 23:59:53.582711   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0923 23:59:54.823072   26218 crio.go:462] duration metric: took 1.244398494s to copy over tarball
	I0923 23:59:54.823158   26218 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0923 23:59:56.834165   26218 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.010972437s)
	I0923 23:59:56.834200   26218 crio.go:469] duration metric: took 2.011094658s to extract the tarball
	I0923 23:59:56.834211   26218 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0923 23:59:56.870476   26218 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 23:59:56.915807   26218 crio.go:514] all images are preloaded for cri-o runtime.
	I0923 23:59:56.915830   26218 cache_images.go:84] Images are preloaded, skipping loading
	I0923 23:59:56.915839   26218 kubeadm.go:934] updating node { 192.168.39.231 8443 v1.31.1 crio true true} ...
	I0923 23:59:56.915955   26218 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-959539 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.231
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-959539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 23:59:56.916032   26218 ssh_runner.go:195] Run: crio config
	I0923 23:59:56.959047   26218 cni.go:84] Creating CNI manager for ""
	I0923 23:59:56.959065   26218 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0923 23:59:56.959075   26218 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 23:59:56.959102   26218 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.231 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-959539 NodeName:ha-959539 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.231"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.231 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 23:59:56.959278   26218 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.231
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-959539"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.231
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.231"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
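	
The kubeadm config printed above (later copied to /var/tmp/minikube/kubeadm.yaml.new) is a four-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. A small sketch, assuming the file is available locally and using gopkg.in/yaml.v3, that enumerates those documents:

    package main

    import (
    	"errors"
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    // Prints the apiVersion/kind of every document in a multi-document kubeadm
    // config; the path is illustrative (on the node it is /var/tmp/minikube/kubeadm.yaml).
    func main() {
    	f, err := os.Open("kubeadm.yaml")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for {
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); err != nil {
    			if errors.Is(err, io.EOF) {
    				break
    			}
    			panic(err)
    		}
    		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
    	}
    }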
	
	I0923 23:59:56.959306   26218 kube-vip.go:115] generating kube-vip config ...
	I0923 23:59:56.959355   26218 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0923 23:59:56.975413   26218 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0923 23:59:56.975538   26218 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
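
The kube-vip static pod above advertises 192.168.39.254 on eth0 and load-balances port 8443, which is why the cluster's controlPlaneEndpoint (control-plane.minikube.internal:8443) resolves to that VIP. A quick check that the VIP answers, sketched against the address and port taken from this log:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// VIP and port come from the kube-vip manifest above; a plain TCP dial is
    	// enough to confirm that something is listening behind the VIP.
    	conn, err := net.DialTimeout("tcp", "192.168.39.254:8443", 3*time.Second)
    	if err != nil {
    		fmt.Println("VIP not reachable:", err)
    		return
    	}
    	defer conn.Close()
    	fmt.Println("VIP reachable:", conn.RemoteAddr())
    }
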
	I0923 23:59:56.975609   26218 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 23:59:56.985748   26218 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 23:59:56.985816   26218 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0923 23:59:56.994858   26218 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0923 23:59:57.011080   26218 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 23:59:57.026929   26218 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0923 23:59:57.042586   26218 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0923 23:59:57.058931   26218 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0923 23:59:57.062598   26218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 23:59:57.074372   26218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 23:59:57.199368   26218 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 23:59:57.215790   26218 certs.go:68] Setting up /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539 for IP: 192.168.39.231
	I0923 23:59:57.215808   26218 certs.go:194] generating shared ca certs ...
	I0923 23:59:57.215839   26218 certs.go:226] acquiring lock for ca certs: {Name:mk91de42c259e96c17f08dd58c70c00821bd0735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:59:57.215971   26218 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key
	I0923 23:59:57.216007   26218 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key
	I0923 23:59:57.216016   26218 certs.go:256] generating profile certs ...
	I0923 23:59:57.216061   26218 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/client.key
	I0923 23:59:57.216073   26218 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/client.crt with IP's: []
	I0923 23:59:57.346653   26218 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/client.crt ...
	I0923 23:59:57.346676   26218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/client.crt: {Name:mkab4515ea7168cda846b9bfb46262aeaac2bc0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:59:57.346833   26218 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/client.key ...
	I0923 23:59:57.346843   26218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/client.key: {Name:mke7708261b70539d80260dff7c5f1bd958774aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:59:57.346914   26218 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key.34659c7b
	I0923 23:59:57.346929   26218 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt.34659c7b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.231 192.168.39.254]
	I0923 23:59:57.635327   26218 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt.34659c7b ...
	I0923 23:59:57.635354   26218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt.34659c7b: {Name:mk5117d1a9a492c25c6b0e468e2bf78a6f60d1d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:59:57.635505   26218 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key.34659c7b ...
	I0923 23:59:57.635516   26218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key.34659c7b: {Name:mk3539984a0fdd5eeb79a51663bcd250a224ff95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:59:57.635580   26218 certs.go:381] copying /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt.34659c7b -> /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt
	I0923 23:59:57.635646   26218 certs.go:385] copying /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key.34659c7b -> /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key
	I0923 23:59:57.635698   26218 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.key
	I0923 23:59:57.635711   26218 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.crt with IP's: []
	I0923 23:59:57.894945   26218 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.crt ...
	I0923 23:59:57.894975   26218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.crt: {Name:mkc0621f207c72302b780ca13cb5032341f4b069 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:59:57.895138   26218 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.key ...
	I0923 23:59:57.895150   26218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.key: {Name:mkf18d3b3341960faadac2faed03cef051112574 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
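
The apiserver profile certificate above is issued with the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.231 192.168.39.254], i.e. the service ClusterIP, loopback, the node address and the HA VIP. A compact crypto/x509 sketch of issuing a SAN-bearing certificate from a CA; the names, key size and validity are illustrative, and this is not minikube's crypto.go:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA standing in for .minikube/ca.{crt,key}; error handling is
    	// elided to keep the sketch short.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Leaf certificate carrying the IP SANs listed in the log above.
    	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	leafTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
    			net.ParseIP("192.168.39.231"), net.ParseIP("192.168.39.254"),
    		},
    	}
    	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
    	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
    }
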
	I0923 23:59:57.895217   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0923 23:59:57.895235   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0923 23:59:57.895245   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 23:59:57.895265   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0923 23:59:57.895277   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0923 23:59:57.895287   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0923 23:59:57.895299   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0923 23:59:57.895310   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0923 23:59:57.895353   26218 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem (1338 bytes)
	W0923 23:59:57.895393   26218 certs.go:480] ignoring /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793_empty.pem, impossibly tiny 0 bytes
	I0923 23:59:57.895403   26218 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem (1675 bytes)
	I0923 23:59:57.895425   26218 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem (1082 bytes)
	I0923 23:59:57.895449   26218 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem (1123 bytes)
	I0923 23:59:57.895469   26218 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem (1679 bytes)
	I0923 23:59:57.895505   26218 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem (1708 bytes)
	I0923 23:59:57.895531   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem -> /usr/share/ca-certificates/14793.pem
	I0923 23:59:57.895542   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> /usr/share/ca-certificates/147932.pem
	I0923 23:59:57.895555   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0923 23:59:57.896068   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 23:59:57.920516   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 23:59:57.944180   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 23:59:57.973439   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0923 23:59:58.001892   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0923 23:59:58.026752   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0923 23:59:58.049022   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 23:59:58.071861   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0923 23:59:58.094850   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem --> /usr/share/ca-certificates/14793.pem (1338 bytes)
	I0923 23:59:58.120029   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /usr/share/ca-certificates/147932.pem (1708 bytes)
	I0923 23:59:58.144719   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 23:59:58.174622   26218 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 23:59:58.192664   26218 ssh_runner.go:195] Run: openssl version
	I0923 23:59:58.198435   26218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14793.pem && ln -fs /usr/share/ca-certificates/14793.pem /etc/ssl/certs/14793.pem"
	I0923 23:59:58.208675   26218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14793.pem
	I0923 23:59:58.212997   26218 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 23:55 /usr/share/ca-certificates/14793.pem
	I0923 23:59:58.213048   26218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14793.pem
	I0923 23:59:58.218554   26218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14793.pem /etc/ssl/certs/51391683.0"
	I0923 23:59:58.228984   26218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147932.pem && ln -fs /usr/share/ca-certificates/147932.pem /etc/ssl/certs/147932.pem"
	I0923 23:59:58.239539   26218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147932.pem
	I0923 23:59:58.244140   26218 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 23:55 /usr/share/ca-certificates/147932.pem
	I0923 23:59:58.244200   26218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147932.pem
	I0923 23:59:58.249770   26218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147932.pem /etc/ssl/certs/3ec20f2e.0"
	I0923 23:59:58.260444   26218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 23:59:58.271376   26218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 23:59:58.276012   26218 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0923 23:59:58.276066   26218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 23:59:58.281610   26218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
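
The hash-and-symlink steps above are how the system trust directory works: openssl x509 -hash -noout prints the certificate's subject hash (b5213941 for minikubeCA.pem here), and a symlink named <hash>.0 under /etc/ssl/certs lets TLS libraries locate the CA by subject. A sketch that reproduces the same two steps via os/exec; the paths are illustrative:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // trustCert computes the OpenSSL subject hash of certPath and links it into
    // certsDir as "<hash>.0", mirroring the ssh_runner commands in the log.
    func trustCert(certPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", certPath, err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. b5213941 for minikubeCA.pem above
    	link := filepath.Join(certsDir, hash+".0")
    	_ = os.Remove(link) // replace a stale link if present
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
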
	I0923 23:59:58.291931   26218 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 23:59:58.295609   26218 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 23:59:58.295656   26218 kubeadm.go:392] StartCluster: {Name:ha-959539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-959539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 23:59:58.295736   26218 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0923 23:59:58.295803   26218 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0923 23:59:58.331462   26218 cri.go:89] found id: ""
	I0923 23:59:58.331531   26218 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 23:59:58.341582   26218 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 23:59:58.351079   26218 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 23:59:58.360870   26218 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 23:59:58.360891   26218 kubeadm.go:157] found existing configuration files:
	
	I0923 23:59:58.360931   26218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 23:59:58.370007   26218 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 23:59:58.370064   26218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 23:59:58.379658   26218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 23:59:58.388923   26218 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 23:59:58.388982   26218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 23:59:58.398781   26218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 23:59:58.407722   26218 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 23:59:58.407786   26218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 23:59:58.417271   26218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 23:59:58.426264   26218 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 23:59:58.426322   26218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 23:59:58.435999   26218 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0923 23:59:58.546770   26218 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0923 23:59:58.546896   26218 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 23:59:58.658868   26218 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 23:59:58.659029   26218 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 23:59:58.659118   26218 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 23:59:58.667816   26218 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 23:59:58.762200   26218 out.go:235]   - Generating certificates and keys ...
	I0923 23:59:58.762295   26218 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 23:59:58.762371   26218 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 23:59:58.762428   26218 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0923 23:59:58.931425   26218 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0923 23:59:59.169435   26218 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0923 23:59:59.368885   26218 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0923 23:59:59.910983   26218 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0923 23:59:59.911147   26218 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-959539 localhost] and IPs [192.168.39.231 127.0.0.1 ::1]
	I0924 00:00:00.027247   26218 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0924 00:00:00.027385   26218 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-959539 localhost] and IPs [192.168.39.231 127.0.0.1 ::1]
	I0924 00:00:00.408901   26218 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0924 00:00:00.695628   26218 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0924 00:00:01.084765   26218 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0924 00:00:01.084831   26218 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 00:00:01.198400   26218 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 00:00:01.455815   26218 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0924 00:00:01.707214   26218 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 00:00:01.761069   26218 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 00:00:01.868085   26218 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 00:00:01.868536   26218 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 00:00:01.872192   26218 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 00:00:01.874381   26218 out.go:235]   - Booting up control plane ...
	I0924 00:00:01.874504   26218 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 00:00:01.874578   26218 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 00:00:01.874634   26218 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 00:00:01.890454   26218 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 00:00:01.897634   26218 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 00:00:01.897699   26218 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 00:00:02.038440   26218 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0924 00:00:02.038603   26218 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0924 00:00:02.541646   26218 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 503.471901ms
	I0924 00:00:02.541770   26218 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0924 00:00:11.738795   26218 kubeadm.go:310] [api-check] The API server is healthy after 9.198818169s
	I0924 00:00:11.752392   26218 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0924 00:00:11.768902   26218 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0924 00:00:11.811138   26218 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0924 00:00:11.811397   26218 kubeadm.go:310] [mark-control-plane] Marking the node ha-959539 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0924 00:00:11.828918   26218 kubeadm.go:310] [bootstrap-token] Using token: a2tynl.1ohol4x4auhbv6gq
	I0924 00:00:11.830685   26218 out.go:235]   - Configuring RBAC rules ...
	I0924 00:00:11.830831   26218 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0924 00:00:11.844590   26218 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0924 00:00:11.854514   26218 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0924 00:00:11.858483   26218 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0924 00:00:11.862691   26218 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0924 00:00:11.866723   26218 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0924 00:00:12.143692   26218 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0924 00:00:12.683818   26218 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0924 00:00:13.148491   26218 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0924 00:00:13.149475   26218 kubeadm.go:310] 
	I0924 00:00:13.149539   26218 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0924 00:00:13.149548   26218 kubeadm.go:310] 
	I0924 00:00:13.149650   26218 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0924 00:00:13.149658   26218 kubeadm.go:310] 
	I0924 00:00:13.149681   26218 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0924 00:00:13.149743   26218 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0924 00:00:13.149832   26218 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0924 00:00:13.149862   26218 kubeadm.go:310] 
	I0924 00:00:13.149949   26218 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0924 00:00:13.149959   26218 kubeadm.go:310] 
	I0924 00:00:13.150027   26218 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0924 00:00:13.150036   26218 kubeadm.go:310] 
	I0924 00:00:13.150112   26218 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0924 00:00:13.150219   26218 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0924 00:00:13.150313   26218 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0924 00:00:13.150324   26218 kubeadm.go:310] 
	I0924 00:00:13.150430   26218 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0924 00:00:13.150539   26218 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0924 00:00:13.150551   26218 kubeadm.go:310] 
	I0924 00:00:13.150661   26218 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token a2tynl.1ohol4x4auhbv6gq \
	I0924 00:00:13.150808   26218 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2d4dd9f37d73db6513005d0e16c5a35989d3b102eed8cac0bf67b360d3396aa8 \
	I0924 00:00:13.150846   26218 kubeadm.go:310] 	--control-plane 
	I0924 00:00:13.150856   26218 kubeadm.go:310] 
	I0924 00:00:13.150970   26218 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0924 00:00:13.150989   26218 kubeadm.go:310] 
	I0924 00:00:13.151100   26218 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token a2tynl.1ohol4x4auhbv6gq \
	I0924 00:00:13.151239   26218 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2d4dd9f37d73db6513005d0e16c5a35989d3b102eed8cac0bf67b360d3396aa8 
	I0924 00:00:13.152162   26218 kubeadm.go:310] W0923 23:59:58.529397     836 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 00:00:13.152583   26218 kubeadm.go:310] W0923 23:59:58.530304     836 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 00:00:13.152731   26218 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
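
The [kubelet-check] and [api-check] phases in the kubeadm output above are plain HTTP health polls with the 4m0s ceilings shown in the log. A minimal poll in the same style; the endpoint, interval and TLS handling below are assumptions rather than kubeadm's exact values:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitHealthy polls url until it returns 200 or the deadline expires.
    func waitHealthy(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		// The apiserver serves TLS with a cluster CA we may not trust locally;
    		// skipping verification is acceptable for a liveness-style probe sketch.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("%s not healthy after %s", url, timeout)
    }

    func main() {
    	fmt.Println(waitHealthy("https://192.168.39.254:8443/healthz", 4*time.Minute))
    }
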
	I0924 00:00:13.152765   26218 cni.go:84] Creating CNI manager for ""
	I0924 00:00:13.152776   26218 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0924 00:00:13.154438   26218 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0924 00:00:13.155646   26218 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0924 00:00:13.161171   26218 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0924 00:00:13.161193   26218 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0924 00:00:13.184460   26218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0924 00:00:13.668553   26218 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 00:00:13.668646   26218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 00:00:13.668716   26218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-959539 minikube.k8s.io/updated_at=2024_09_24T00_00_13_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c minikube.k8s.io/name=ha-959539 minikube.k8s.io/primary=true
	I0924 00:00:13.906100   26218 ops.go:34] apiserver oom_adj: -16
	I0924 00:00:13.906236   26218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 00:00:14.026723   26218 kubeadm.go:1113] duration metric: took 358.135167ms to wait for elevateKubeSystemPrivileges
	I0924 00:00:14.026757   26218 kubeadm.go:394] duration metric: took 15.731103406s to StartCluster
	I0924 00:00:14.026778   26218 settings.go:142] acquiring lock: {Name:mk2498060bfcc1414033a486e76a223de88ed325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:00:14.026862   26218 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 00:00:14.027452   26218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/kubeconfig: {Name:mk3b29105ccc2e600682c419fcfd45ce5d22b5fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:00:14.027658   26218 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 00:00:14.027668   26218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0924 00:00:14.027688   26218 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0924 00:00:14.027758   26218 addons.go:69] Setting storage-provisioner=true in profile "ha-959539"
	I0924 00:00:14.027782   26218 addons.go:234] Setting addon storage-provisioner=true in "ha-959539"
	I0924 00:00:14.027808   26218 host.go:66] Checking if "ha-959539" exists ...
	I0924 00:00:14.027677   26218 start.go:241] waiting for startup goroutines ...
	I0924 00:00:14.027850   26218 addons.go:69] Setting default-storageclass=true in profile "ha-959539"
	I0924 00:00:14.027872   26218 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-959539"
	I0924 00:00:14.027940   26218 config.go:182] Loaded profile config "ha-959539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 00:00:14.028248   26218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:00:14.028262   26218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:00:14.028289   26218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:00:14.028388   26218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:00:14.043826   26218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42945
	I0924 00:00:14.043826   26218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40157
	I0924 00:00:14.044412   26218 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:00:14.044444   26218 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:00:14.044897   26218 main.go:141] libmachine: Using API Version  1
	I0924 00:00:14.044921   26218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:00:14.045026   26218 main.go:141] libmachine: Using API Version  1
	I0924 00:00:14.045048   26218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:00:14.045272   26218 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:00:14.045342   26218 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:00:14.045440   26218 main.go:141] libmachine: (ha-959539) Calling .GetState
	I0924 00:00:14.045899   26218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:00:14.045941   26218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:00:14.047486   26218 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 00:00:14.047712   26218 kapi.go:59] client config for ha-959539: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/client.crt", KeyFile:"/home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/client.key", CAFile:"/home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0924 00:00:14.048174   26218 cert_rotation.go:140] Starting client certificate rotation controller
	I0924 00:00:14.048284   26218 addons.go:234] Setting addon default-storageclass=true in "ha-959539"
	I0924 00:00:14.048319   26218 host.go:66] Checking if "ha-959539" exists ...
	I0924 00:00:14.048595   26218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:00:14.048634   26218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:00:14.062043   26218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33531
	I0924 00:00:14.062493   26218 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:00:14.063046   26218 main.go:141] libmachine: Using API Version  1
	I0924 00:00:14.063070   26218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:00:14.063429   26218 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:00:14.063717   26218 main.go:141] libmachine: (ha-959539) Calling .GetState
	I0924 00:00:14.064022   26218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36859
	I0924 00:00:14.064526   26218 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:00:14.064977   26218 main.go:141] libmachine: Using API Version  1
	I0924 00:00:14.065001   26218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:00:14.065303   26218 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:00:14.065800   26218 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0924 00:00:14.065914   26218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:00:14.065960   26218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:00:14.067886   26218 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 00:00:14.069203   26218 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 00:00:14.069223   26218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 00:00:14.069245   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0924 00:00:14.072558   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:00:14.072961   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0924 00:00:14.072982   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:00:14.073163   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0924 00:00:14.073338   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0924 00:00:14.073491   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0924 00:00:14.073620   26218 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/id_rsa Username:docker}
	I0924 00:00:14.082767   26218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42827
	I0924 00:00:14.083265   26218 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:00:14.083864   26218 main.go:141] libmachine: Using API Version  1
	I0924 00:00:14.083889   26218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:00:14.084221   26218 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:00:14.084481   26218 main.go:141] libmachine: (ha-959539) Calling .GetState
	I0924 00:00:14.086186   26218 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0924 00:00:14.086413   26218 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 00:00:14.086430   26218 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 00:00:14.086447   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0924 00:00:14.089541   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:00:14.089980   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0924 00:00:14.090010   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:00:14.090151   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0924 00:00:14.090333   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0924 00:00:14.090551   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0924 00:00:14.090735   26218 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/id_rsa Username:docker}
	I0924 00:00:14.208938   26218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0924 00:00:14.243343   26218 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 00:00:14.328202   26218 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 00:00:14.719009   26218 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
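For readability, the block that the sed pipeline above splices into the CoreDNS Corefile is reconstructed here from that command (not captured from the cluster); the same pipeline also inserts a log directive ahead of the existing errors line:

	        hosts {
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }
	        log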
	I0924 00:00:15.026630   26218 main.go:141] libmachine: Making call to close driver server
	I0924 00:00:15.026666   26218 main.go:141] libmachine: (ha-959539) Calling .Close
	I0924 00:00:15.026684   26218 main.go:141] libmachine: Making call to close driver server
	I0924 00:00:15.026706   26218 main.go:141] libmachine: (ha-959539) Calling .Close
	I0924 00:00:15.026978   26218 main.go:141] libmachine: Successfully made call to close driver server
	I0924 00:00:15.027033   26218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 00:00:15.027049   26218 main.go:141] libmachine: Making call to close driver server
	I0924 00:00:15.027059   26218 main.go:141] libmachine: (ha-959539) Calling .Close
	I0924 00:00:15.027104   26218 main.go:141] libmachine: (ha-959539) DBG | Closing plugin on server side
	I0924 00:00:15.027152   26218 main.go:141] libmachine: Successfully made call to close driver server
	I0924 00:00:15.027174   26218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 00:00:15.027183   26218 main.go:141] libmachine: Making call to close driver server
	I0924 00:00:15.027191   26218 main.go:141] libmachine: (ha-959539) Calling .Close
	I0924 00:00:15.027272   26218 main.go:141] libmachine: Successfully made call to close driver server
	I0924 00:00:15.027294   26218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 00:00:15.027390   26218 main.go:141] libmachine: Successfully made call to close driver server
	I0924 00:00:15.027404   26218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 00:00:15.027434   26218 main.go:141] libmachine: (ha-959539) DBG | Closing plugin on server side
	I0924 00:00:15.027454   26218 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0924 00:00:15.027470   26218 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0924 00:00:15.027568   26218 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0924 00:00:15.027574   26218 round_trippers.go:469] Request Headers:
	I0924 00:00:15.027581   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:00:15.027585   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:00:15.042627   26218 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0924 00:00:15.043249   26218 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0924 00:00:15.043266   26218 round_trippers.go:469] Request Headers:
	I0924 00:00:15.043284   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:00:15.043295   26218 round_trippers.go:473]     Content-Type: application/json
	I0924 00:00:15.043300   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:00:15.047076   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
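The GET and PUT round-trips above are the default-storageclass check: the StorageClass list is read and the "standard" class is updated so it carries the default annotation. A minimal client-go sketch of reading that state (an illustrative assumption, not minikube's addon code):

	package main
	
	import (
		"context"
		"fmt"
		"log"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Kubeconfig path as written earlier in this log; adjust as needed.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19696-7623/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		scs, err := cs.StorageV1().StorageClasses().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, sc := range scs.Items {
			// The well-known annotation that marks a class as the default.
			def := sc.Annotations["storageclass.kubernetes.io/is-default-class"]
			fmt.Printf("storageclass %s default=%q\n", sc.Name, def)
		}
	}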
	I0924 00:00:15.047250   26218 main.go:141] libmachine: Making call to close driver server
	I0924 00:00:15.047265   26218 main.go:141] libmachine: (ha-959539) Calling .Close
	I0924 00:00:15.047499   26218 main.go:141] libmachine: Successfully made call to close driver server
	I0924 00:00:15.047522   26218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 00:00:15.049462   26218 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0924 00:00:15.050768   26218 addons.go:510] duration metric: took 1.023080124s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0924 00:00:15.050804   26218 start.go:246] waiting for cluster config update ...
	I0924 00:00:15.050819   26218 start.go:255] writing updated cluster config ...
	I0924 00:00:15.052488   26218 out.go:201] 
	I0924 00:00:15.054069   26218 config.go:182] Loaded profile config "ha-959539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 00:00:15.054138   26218 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/config.json ...
	I0924 00:00:15.056020   26218 out.go:177] * Starting "ha-959539-m02" control-plane node in "ha-959539" cluster
	I0924 00:00:15.057275   26218 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 00:00:15.057294   26218 cache.go:56] Caching tarball of preloaded images
	I0924 00:00:15.057386   26218 preload.go:172] Found /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0924 00:00:15.057396   26218 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0924 00:00:15.057456   26218 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/config.json ...
	I0924 00:00:15.057614   26218 start.go:360] acquireMachinesLock for ha-959539-m02: {Name:mkdd0eb053efc5a47fd01c8410d7f603dccb8d0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 00:00:15.057654   26218 start.go:364] duration metric: took 22.109µs to acquireMachinesLock for "ha-959539-m02"
	I0924 00:00:15.057669   26218 start.go:93] Provisioning new machine with config: &{Name:ha-959539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-959539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 00:00:15.057726   26218 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0924 00:00:15.059302   26218 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0924 00:00:15.059377   26218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:00:15.059408   26218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:00:15.074812   26218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45463
	I0924 00:00:15.075196   26218 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:00:15.075683   26218 main.go:141] libmachine: Using API Version  1
	I0924 00:00:15.075703   26218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:00:15.076029   26218 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:00:15.076222   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetMachineName
	I0924 00:00:15.076403   26218 main.go:141] libmachine: (ha-959539-m02) Calling .DriverName
	I0924 00:00:15.076562   26218 start.go:159] libmachine.API.Create for "ha-959539" (driver="kvm2")
	I0924 00:00:15.076593   26218 client.go:168] LocalClient.Create starting
	I0924 00:00:15.076633   26218 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem
	I0924 00:00:15.076673   26218 main.go:141] libmachine: Decoding PEM data...
	I0924 00:00:15.076695   26218 main.go:141] libmachine: Parsing certificate...
	I0924 00:00:15.076755   26218 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem
	I0924 00:00:15.076782   26218 main.go:141] libmachine: Decoding PEM data...
	I0924 00:00:15.076796   26218 main.go:141] libmachine: Parsing certificate...
	I0924 00:00:15.076816   26218 main.go:141] libmachine: Running pre-create checks...
	I0924 00:00:15.076827   26218 main.go:141] libmachine: (ha-959539-m02) Calling .PreCreateCheck
	I0924 00:00:15.076957   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetConfigRaw
	I0924 00:00:15.077329   26218 main.go:141] libmachine: Creating machine...
	I0924 00:00:15.077346   26218 main.go:141] libmachine: (ha-959539-m02) Calling .Create
	I0924 00:00:15.077491   26218 main.go:141] libmachine: (ha-959539-m02) Creating KVM machine...
	I0924 00:00:15.078735   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found existing default KVM network
	I0924 00:00:15.078908   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found existing private KVM network mk-ha-959539
	I0924 00:00:15.079005   26218 main.go:141] libmachine: (ha-959539-m02) Setting up store path in /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m02 ...
	I0924 00:00:15.079050   26218 main.go:141] libmachine: (ha-959539-m02) Building disk image from file:///home/jenkins/minikube-integration/19696-7623/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0924 00:00:15.079067   26218 main.go:141] libmachine: (ha-959539-m02) DBG | I0924 00:00:15.078949   26566 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19696-7623/.minikube
	I0924 00:00:15.079117   26218 main.go:141] libmachine: (ha-959539-m02) Downloading /home/jenkins/minikube-integration/19696-7623/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19696-7623/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0924 00:00:15.323293   26218 main.go:141] libmachine: (ha-959539-m02) DBG | I0924 00:00:15.323139   26566 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m02/id_rsa...
	I0924 00:00:15.574063   26218 main.go:141] libmachine: (ha-959539-m02) DBG | I0924 00:00:15.573935   26566 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m02/ha-959539-m02.rawdisk...
	I0924 00:00:15.574096   26218 main.go:141] libmachine: (ha-959539-m02) DBG | Writing magic tar header
	I0924 00:00:15.574106   26218 main.go:141] libmachine: (ha-959539-m02) DBG | Writing SSH key tar header
	I0924 00:00:15.574114   26218 main.go:141] libmachine: (ha-959539-m02) DBG | I0924 00:00:15.574047   26566 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m02 ...
	I0924 00:00:15.574234   26218 main.go:141] libmachine: (ha-959539-m02) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m02 (perms=drwx------)
	I0924 00:00:15.574263   26218 main.go:141] libmachine: (ha-959539-m02) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623/.minikube/machines (perms=drwxr-xr-x)
	I0924 00:00:15.574274   26218 main.go:141] libmachine: (ha-959539-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m02
	I0924 00:00:15.574301   26218 main.go:141] libmachine: (ha-959539-m02) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623/.minikube (perms=drwxr-xr-x)
	I0924 00:00:15.574318   26218 main.go:141] libmachine: (ha-959539-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623/.minikube/machines
	I0924 00:00:15.574331   26218 main.go:141] libmachine: (ha-959539-m02) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623 (perms=drwxrwxr-x)
	I0924 00:00:15.574341   26218 main.go:141] libmachine: (ha-959539-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0924 00:00:15.574351   26218 main.go:141] libmachine: (ha-959539-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0924 00:00:15.574358   26218 main.go:141] libmachine: (ha-959539-m02) Creating domain...
	I0924 00:00:15.574368   26218 main.go:141] libmachine: (ha-959539-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623/.minikube
	I0924 00:00:15.574373   26218 main.go:141] libmachine: (ha-959539-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623
	I0924 00:00:15.574383   26218 main.go:141] libmachine: (ha-959539-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0924 00:00:15.574388   26218 main.go:141] libmachine: (ha-959539-m02) DBG | Checking permissions on dir: /home/jenkins
	I0924 00:00:15.574397   26218 main.go:141] libmachine: (ha-959539-m02) DBG | Checking permissions on dir: /home
	I0924 00:00:15.574402   26218 main.go:141] libmachine: (ha-959539-m02) DBG | Skipping /home - not owner
	I0924 00:00:15.575397   26218 main.go:141] libmachine: (ha-959539-m02) define libvirt domain using xml: 
	I0924 00:00:15.575418   26218 main.go:141] libmachine: (ha-959539-m02) <domain type='kvm'>
	I0924 00:00:15.575426   26218 main.go:141] libmachine: (ha-959539-m02)   <name>ha-959539-m02</name>
	I0924 00:00:15.575433   26218 main.go:141] libmachine: (ha-959539-m02)   <memory unit='MiB'>2200</memory>
	I0924 00:00:15.575441   26218 main.go:141] libmachine: (ha-959539-m02)   <vcpu>2</vcpu>
	I0924 00:00:15.575446   26218 main.go:141] libmachine: (ha-959539-m02)   <features>
	I0924 00:00:15.575454   26218 main.go:141] libmachine: (ha-959539-m02)     <acpi/>
	I0924 00:00:15.575461   26218 main.go:141] libmachine: (ha-959539-m02)     <apic/>
	I0924 00:00:15.575476   26218 main.go:141] libmachine: (ha-959539-m02)     <pae/>
	I0924 00:00:15.575486   26218 main.go:141] libmachine: (ha-959539-m02)     
	I0924 00:00:15.575497   26218 main.go:141] libmachine: (ha-959539-m02)   </features>
	I0924 00:00:15.575507   26218 main.go:141] libmachine: (ha-959539-m02)   <cpu mode='host-passthrough'>
	I0924 00:00:15.575514   26218 main.go:141] libmachine: (ha-959539-m02)   
	I0924 00:00:15.575526   26218 main.go:141] libmachine: (ha-959539-m02)   </cpu>
	I0924 00:00:15.575536   26218 main.go:141] libmachine: (ha-959539-m02)   <os>
	I0924 00:00:15.575543   26218 main.go:141] libmachine: (ha-959539-m02)     <type>hvm</type>
	I0924 00:00:15.575556   26218 main.go:141] libmachine: (ha-959539-m02)     <boot dev='cdrom'/>
	I0924 00:00:15.575573   26218 main.go:141] libmachine: (ha-959539-m02)     <boot dev='hd'/>
	I0924 00:00:15.575585   26218 main.go:141] libmachine: (ha-959539-m02)     <bootmenu enable='no'/>
	I0924 00:00:15.575595   26218 main.go:141] libmachine: (ha-959539-m02)   </os>
	I0924 00:00:15.575608   26218 main.go:141] libmachine: (ha-959539-m02)   <devices>
	I0924 00:00:15.575620   26218 main.go:141] libmachine: (ha-959539-m02)     <disk type='file' device='cdrom'>
	I0924 00:00:15.575642   26218 main.go:141] libmachine: (ha-959539-m02)       <source file='/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m02/boot2docker.iso'/>
	I0924 00:00:15.575655   26218 main.go:141] libmachine: (ha-959539-m02)       <target dev='hdc' bus='scsi'/>
	I0924 00:00:15.575665   26218 main.go:141] libmachine: (ha-959539-m02)       <readonly/>
	I0924 00:00:15.575675   26218 main.go:141] libmachine: (ha-959539-m02)     </disk>
	I0924 00:00:15.575691   26218 main.go:141] libmachine: (ha-959539-m02)     <disk type='file' device='disk'>
	I0924 00:00:15.575706   26218 main.go:141] libmachine: (ha-959539-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0924 00:00:15.575717   26218 main.go:141] libmachine: (ha-959539-m02)       <source file='/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m02/ha-959539-m02.rawdisk'/>
	I0924 00:00:15.575725   26218 main.go:141] libmachine: (ha-959539-m02)       <target dev='hda' bus='virtio'/>
	I0924 00:00:15.575732   26218 main.go:141] libmachine: (ha-959539-m02)     </disk>
	I0924 00:00:15.575744   26218 main.go:141] libmachine: (ha-959539-m02)     <interface type='network'>
	I0924 00:00:15.575752   26218 main.go:141] libmachine: (ha-959539-m02)       <source network='mk-ha-959539'/>
	I0924 00:00:15.575780   26218 main.go:141] libmachine: (ha-959539-m02)       <model type='virtio'/>
	I0924 00:00:15.575803   26218 main.go:141] libmachine: (ha-959539-m02)     </interface>
	I0924 00:00:15.575828   26218 main.go:141] libmachine: (ha-959539-m02)     <interface type='network'>
	I0924 00:00:15.575848   26218 main.go:141] libmachine: (ha-959539-m02)       <source network='default'/>
	I0924 00:00:15.575861   26218 main.go:141] libmachine: (ha-959539-m02)       <model type='virtio'/>
	I0924 00:00:15.575871   26218 main.go:141] libmachine: (ha-959539-m02)     </interface>
	I0924 00:00:15.575880   26218 main.go:141] libmachine: (ha-959539-m02)     <serial type='pty'>
	I0924 00:00:15.575890   26218 main.go:141] libmachine: (ha-959539-m02)       <target port='0'/>
	I0924 00:00:15.575898   26218 main.go:141] libmachine: (ha-959539-m02)     </serial>
	I0924 00:00:15.575907   26218 main.go:141] libmachine: (ha-959539-m02)     <console type='pty'>
	I0924 00:00:15.575916   26218 main.go:141] libmachine: (ha-959539-m02)       <target type='serial' port='0'/>
	I0924 00:00:15.575929   26218 main.go:141] libmachine: (ha-959539-m02)     </console>
	I0924 00:00:15.575941   26218 main.go:141] libmachine: (ha-959539-m02)     <rng model='virtio'>
	I0924 00:00:15.575953   26218 main.go:141] libmachine: (ha-959539-m02)       <backend model='random'>/dev/random</backend>
	I0924 00:00:15.575961   26218 main.go:141] libmachine: (ha-959539-m02)     </rng>
	I0924 00:00:15.575970   26218 main.go:141] libmachine: (ha-959539-m02)     
	I0924 00:00:15.575977   26218 main.go:141] libmachine: (ha-959539-m02)     
	I0924 00:00:15.575986   26218 main.go:141] libmachine: (ha-959539-m02)   </devices>
	I0924 00:00:15.575994   26218 main.go:141] libmachine: (ha-959539-m02) </domain>
	I0924 00:00:15.576006   26218 main.go:141] libmachine: (ha-959539-m02) 
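The XML logged above is the domain definition the kvm2 driver hands to libvirt before booting the m02 node. A minimal sketch of defining and starting such a domain with the libvirt Go bindings (illustrative only, not the driver's actual code):

	package main
	
	import (
		"log"
		"os"
	
		libvirt "libvirt.org/go/libvirt"
	)
	
	func main() {
		// Assumption: the domain XML assembled above has been saved to a file.
		xml, err := os.ReadFile("ha-959539-m02.xml")
		if err != nil {
			log.Fatal(err)
		}
		// qemu:///system matches the KVMQemuURI in the profile config.
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()
	
		// Define the persistent domain from the XML ("define libvirt domain using xml").
		dom, err := conn.DomainDefineXML(string(xml))
		if err != nil {
			log.Fatal(err)
		}
		defer dom.Free()
	
		// Create() boots the defined domain ("Creating domain..." in the log).
		if err := dom.Create(); err != nil {
			log.Fatal(err)
		}
	}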
	I0924 00:00:15.585706   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:4f:cb:25 in network default
	I0924 00:00:15.586358   26218 main.go:141] libmachine: (ha-959539-m02) Ensuring networks are active...
	I0924 00:00:15.586382   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:15.588682   26218 main.go:141] libmachine: (ha-959539-m02) Ensuring network default is active
	I0924 00:00:15.589090   26218 main.go:141] libmachine: (ha-959539-m02) Ensuring network mk-ha-959539 is active
	I0924 00:00:15.589485   26218 main.go:141] libmachine: (ha-959539-m02) Getting domain xml...
	I0924 00:00:15.590356   26218 main.go:141] libmachine: (ha-959539-m02) Creating domain...
	I0924 00:00:16.876850   26218 main.go:141] libmachine: (ha-959539-m02) Waiting to get IP...
	I0924 00:00:16.877600   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:16.878025   26218 main.go:141] libmachine: (ha-959539-m02) DBG | unable to find current IP address of domain ha-959539-m02 in network mk-ha-959539
	I0924 00:00:16.878048   26218 main.go:141] libmachine: (ha-959539-m02) DBG | I0924 00:00:16.878002   26566 retry.go:31] will retry after 206.511357ms: waiting for machine to come up
	I0924 00:00:17.086726   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:17.087176   26218 main.go:141] libmachine: (ha-959539-m02) DBG | unable to find current IP address of domain ha-959539-m02 in network mk-ha-959539
	I0924 00:00:17.087210   26218 main.go:141] libmachine: (ha-959539-m02) DBG | I0924 00:00:17.087160   26566 retry.go:31] will retry after 339.485484ms: waiting for machine to come up
	I0924 00:00:17.428879   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:17.429496   26218 main.go:141] libmachine: (ha-959539-m02) DBG | unable to find current IP address of domain ha-959539-m02 in network mk-ha-959539
	I0924 00:00:17.429530   26218 main.go:141] libmachine: (ha-959539-m02) DBG | I0924 00:00:17.429442   26566 retry.go:31] will retry after 355.763587ms: waiting for machine to come up
	I0924 00:00:17.787147   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:17.787637   26218 main.go:141] libmachine: (ha-959539-m02) DBG | unable to find current IP address of domain ha-959539-m02 in network mk-ha-959539
	I0924 00:00:17.787665   26218 main.go:141] libmachine: (ha-959539-m02) DBG | I0924 00:00:17.787594   26566 retry.go:31] will retry after 608.491101ms: waiting for machine to come up
	I0924 00:00:18.397336   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:18.397814   26218 main.go:141] libmachine: (ha-959539-m02) DBG | unable to find current IP address of domain ha-959539-m02 in network mk-ha-959539
	I0924 00:00:18.397840   26218 main.go:141] libmachine: (ha-959539-m02) DBG | I0924 00:00:18.397785   26566 retry.go:31] will retry after 502.478814ms: waiting for machine to come up
	I0924 00:00:18.901642   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:18.902265   26218 main.go:141] libmachine: (ha-959539-m02) DBG | unable to find current IP address of domain ha-959539-m02 in network mk-ha-959539
	I0924 00:00:18.902291   26218 main.go:141] libmachine: (ha-959539-m02) DBG | I0924 00:00:18.902211   26566 retry.go:31] will retry after 818.203447ms: waiting for machine to come up
	I0924 00:00:19.722162   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:19.722608   26218 main.go:141] libmachine: (ha-959539-m02) DBG | unable to find current IP address of domain ha-959539-m02 in network mk-ha-959539
	I0924 00:00:19.722629   26218 main.go:141] libmachine: (ha-959539-m02) DBG | I0924 00:00:19.722558   26566 retry.go:31] will retry after 929.046384ms: waiting for machine to come up
	I0924 00:00:20.653489   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:20.653984   26218 main.go:141] libmachine: (ha-959539-m02) DBG | unable to find current IP address of domain ha-959539-m02 in network mk-ha-959539
	I0924 00:00:20.654008   26218 main.go:141] libmachine: (ha-959539-m02) DBG | I0924 00:00:20.653948   26566 retry.go:31] will retry after 1.409190678s: waiting for machine to come up
	I0924 00:00:22.065332   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:22.065896   26218 main.go:141] libmachine: (ha-959539-m02) DBG | unable to find current IP address of domain ha-959539-m02 in network mk-ha-959539
	I0924 00:00:22.065920   26218 main.go:141] libmachine: (ha-959539-m02) DBG | I0924 00:00:22.065833   26566 retry.go:31] will retry after 1.614499189s: waiting for machine to come up
	I0924 00:00:23.681862   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:23.682319   26218 main.go:141] libmachine: (ha-959539-m02) DBG | unable to find current IP address of domain ha-959539-m02 in network mk-ha-959539
	I0924 00:00:23.682363   26218 main.go:141] libmachine: (ha-959539-m02) DBG | I0924 00:00:23.682234   26566 retry.go:31] will retry after 1.460062243s: waiting for machine to come up
	I0924 00:00:25.144293   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:25.144745   26218 main.go:141] libmachine: (ha-959539-m02) DBG | unable to find current IP address of domain ha-959539-m02 in network mk-ha-959539
	I0924 00:00:25.144767   26218 main.go:141] libmachine: (ha-959539-m02) DBG | I0924 00:00:25.144697   26566 retry.go:31] will retry after 1.777929722s: waiting for machine to come up
	I0924 00:00:26.924735   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:26.925200   26218 main.go:141] libmachine: (ha-959539-m02) DBG | unable to find current IP address of domain ha-959539-m02 in network mk-ha-959539
	I0924 00:00:26.925237   26218 main.go:141] libmachine: (ha-959539-m02) DBG | I0924 00:00:26.925162   26566 retry.go:31] will retry after 3.141763872s: waiting for machine to come up
	I0924 00:00:30.069494   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:30.070014   26218 main.go:141] libmachine: (ha-959539-m02) DBG | unable to find current IP address of domain ha-959539-m02 in network mk-ha-959539
	I0924 00:00:30.070036   26218 main.go:141] libmachine: (ha-959539-m02) DBG | I0924 00:00:30.069955   26566 retry.go:31] will retry after 3.647403595s: waiting for machine to come up
	I0924 00:00:33.721303   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:33.721786   26218 main.go:141] libmachine: (ha-959539-m02) DBG | unable to find current IP address of domain ha-959539-m02 in network mk-ha-959539
	I0924 00:00:33.721804   26218 main.go:141] libmachine: (ha-959539-m02) DBG | I0924 00:00:33.721753   26566 retry.go:31] will retry after 4.027076232s: waiting for machine to come up
	I0924 00:00:37.752592   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:37.753064   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has current primary IP address 192.168.39.71 and MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:37.753095   26218 main.go:141] libmachine: (ha-959539-m02) Found IP for machine: 192.168.39.71
	I0924 00:00:37.753104   26218 main.go:141] libmachine: (ha-959539-m02) Reserving static IP address...
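The repeated "will retry after ..." lines above come from a polling loop that waits for the new domain to report an IP address, sleeping for a randomized, growing interval between attempts. A minimal sketch of that pattern (a hypothetical helper, not minikube's retry package):

	package main
	
	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)
	
	// waitFor polls cond with a jittered, growing delay until it reports success
	// or the timeout expires, mirroring the "will retry after ..." log lines.
	func waitFor(cond func() (bool, error), timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			ok, err := cond()
			if err != nil {
				return err
			}
			if ok {
				return nil
			}
			// Add jitter and grow the base delay before the next attempt.
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			delay = delay * 3 / 2
		}
		return errors.New("timed out waiting for the machine to get an IP")
	}
	
	func main() {
		attempts := 0
		err := waitFor(func() (bool, error) {
			attempts++
			return attempts >= 5, nil // stand-in for "does the domain have an IP yet?"
		}, 2*time.Minute)
		fmt.Println("done:", err)
	}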
	I0924 00:00:37.753574   26218 main.go:141] libmachine: (ha-959539-m02) DBG | unable to find host DHCP lease matching {name: "ha-959539-m02", mac: "52:54:00:7e:17:08", ip: "192.168.39.71"} in network mk-ha-959539
	I0924 00:00:37.827442   26218 main.go:141] libmachine: (ha-959539-m02) DBG | Getting to WaitForSSH function...
	I0924 00:00:37.827474   26218 main.go:141] libmachine: (ha-959539-m02) Reserved static IP address: 192.168.39.71
	I0924 00:00:37.827486   26218 main.go:141] libmachine: (ha-959539-m02) Waiting for SSH to be available...
	I0924 00:00:37.830110   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:37.830505   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:17:08", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:00:29 +0000 UTC Type:0 Mac:52:54:00:7e:17:08 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:minikube Clientid:01:52:54:00:7e:17:08}
	I0924 00:00:37.830530   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:37.830672   26218 main.go:141] libmachine: (ha-959539-m02) DBG | Using SSH client type: external
	I0924 00:00:37.830710   26218 main.go:141] libmachine: (ha-959539-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m02/id_rsa (-rw-------)
	I0924 00:00:37.830778   26218 main.go:141] libmachine: (ha-959539-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.71 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 00:00:37.830803   26218 main.go:141] libmachine: (ha-959539-m02) DBG | About to run SSH command:
	I0924 00:00:37.830826   26218 main.go:141] libmachine: (ha-959539-m02) DBG | exit 0
	I0924 00:00:37.960544   26218 main.go:141] libmachine: (ha-959539-m02) DBG | SSH cmd err, output: <nil>: 
	I0924 00:00:37.960821   26218 main.go:141] libmachine: (ha-959539-m02) KVM machine creation complete!
	I0924 00:00:37.961319   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetConfigRaw
	I0924 00:00:37.961983   26218 main.go:141] libmachine: (ha-959539-m02) Calling .DriverName
	I0924 00:00:37.962222   26218 main.go:141] libmachine: (ha-959539-m02) Calling .DriverName
	I0924 00:00:37.962419   26218 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0924 00:00:37.962460   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetState
	I0924 00:00:37.963697   26218 main.go:141] libmachine: Detecting operating system of created instance...
	I0924 00:00:37.963714   26218 main.go:141] libmachine: Waiting for SSH to be available...
	I0924 00:00:37.963734   26218 main.go:141] libmachine: Getting to WaitForSSH function...
	I0924 00:00:37.963742   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHHostname
	I0924 00:00:37.966078   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:37.966462   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:17:08", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:00:29 +0000 UTC Type:0 Mac:52:54:00:7e:17:08 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ha-959539-m02 Clientid:01:52:54:00:7e:17:08}
	I0924 00:00:37.966483   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:37.966660   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHPort
	I0924 00:00:37.966813   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHKeyPath
	I0924 00:00:37.966945   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHKeyPath
	I0924 00:00:37.967054   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHUsername
	I0924 00:00:37.967205   26218 main.go:141] libmachine: Using SSH client type: native
	I0924 00:00:37.967481   26218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0924 00:00:37.967492   26218 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0924 00:00:38.079589   26218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 00:00:38.079610   26218 main.go:141] libmachine: Detecting the provisioner...
	I0924 00:00:38.079617   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHHostname
	I0924 00:00:38.082503   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:38.082929   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:17:08", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:00:29 +0000 UTC Type:0 Mac:52:54:00:7e:17:08 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ha-959539-m02 Clientid:01:52:54:00:7e:17:08}
	I0924 00:00:38.082950   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:38.083140   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHPort
	I0924 00:00:38.083340   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHKeyPath
	I0924 00:00:38.083509   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHKeyPath
	I0924 00:00:38.083666   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHUsername
	I0924 00:00:38.083825   26218 main.go:141] libmachine: Using SSH client type: native
	I0924 00:00:38.083986   26218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0924 00:00:38.083997   26218 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0924 00:00:38.197000   26218 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0924 00:00:38.197103   26218 main.go:141] libmachine: found compatible host: buildroot
	I0924 00:00:38.197116   26218 main.go:141] libmachine: Provisioning with buildroot...
	I0924 00:00:38.197126   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetMachineName
	I0924 00:00:38.197376   26218 buildroot.go:166] provisioning hostname "ha-959539-m02"
	I0924 00:00:38.197411   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetMachineName
	I0924 00:00:38.197604   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHHostname
	I0924 00:00:38.200444   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:38.200771   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:17:08", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:00:29 +0000 UTC Type:0 Mac:52:54:00:7e:17:08 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ha-959539-m02 Clientid:01:52:54:00:7e:17:08}
	I0924 00:00:38.200795   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:38.200984   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHPort
	I0924 00:00:38.201176   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHKeyPath
	I0924 00:00:38.201357   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHKeyPath
	I0924 00:00:38.201493   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHUsername
	I0924 00:00:38.201648   26218 main.go:141] libmachine: Using SSH client type: native
	I0924 00:00:38.201800   26218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0924 00:00:38.201815   26218 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-959539-m02 && echo "ha-959539-m02" | sudo tee /etc/hostname
	I0924 00:00:38.325460   26218 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-959539-m02
	
	I0924 00:00:38.325485   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHHostname
	I0924 00:00:38.328105   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:38.328475   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:17:08", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:00:29 +0000 UTC Type:0 Mac:52:54:00:7e:17:08 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ha-959539-m02 Clientid:01:52:54:00:7e:17:08}
	I0924 00:00:38.328501   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:38.328664   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHPort
	I0924 00:00:38.328838   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHKeyPath
	I0924 00:00:38.329112   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHKeyPath
	I0924 00:00:38.329333   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHUsername
	I0924 00:00:38.329513   26218 main.go:141] libmachine: Using SSH client type: native
	I0924 00:00:38.329688   26218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0924 00:00:38.329704   26218 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-959539-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-959539-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-959539-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 00:00:38.449811   26218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 00:00:38.449850   26218 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19696-7623/.minikube CaCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19696-7623/.minikube}
	I0924 00:00:38.449870   26218 buildroot.go:174] setting up certificates
	I0924 00:00:38.449890   26218 provision.go:84] configureAuth start
	I0924 00:00:38.449902   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetMachineName
	I0924 00:00:38.450206   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetIP
	I0924 00:00:38.453211   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:38.453603   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:17:08", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:00:29 +0000 UTC Type:0 Mac:52:54:00:7e:17:08 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ha-959539-m02 Clientid:01:52:54:00:7e:17:08}
	I0924 00:00:38.453632   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:38.453799   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHHostname
	I0924 00:00:38.456450   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:38.456868   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:17:08", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:00:29 +0000 UTC Type:0 Mac:52:54:00:7e:17:08 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ha-959539-m02 Clientid:01:52:54:00:7e:17:08}
	I0924 00:00:38.456897   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:38.457045   26218 provision.go:143] copyHostCerts
	I0924 00:00:38.457081   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0924 00:00:38.457120   26218 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem, removing ...
	I0924 00:00:38.457131   26218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0924 00:00:38.457206   26218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem (1082 bytes)
	I0924 00:00:38.457299   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0924 00:00:38.457319   26218 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem, removing ...
	I0924 00:00:38.457327   26218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0924 00:00:38.457353   26218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem (1123 bytes)
	I0924 00:00:38.457401   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0924 00:00:38.457420   26218 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem, removing ...
	I0924 00:00:38.457427   26218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0924 00:00:38.457450   26218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem (1679 bytes)
	I0924 00:00:38.457543   26218 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem org=jenkins.ha-959539-m02 san=[127.0.0.1 192.168.39.71 ha-959539-m02 localhost minikube]
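The server certificate above is issued locally from the minikube CA with the listed SANs and is copied to the node in the steps that follow. A compact Go sketch of issuing such a SAN-bearing certificate (file names and the PKCS#1 RSA key format are assumptions for illustration; this is not provision.go):

	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		// Assumed inputs: CA material like the ca.pem / ca-key.pem kept under .minikube/certs.
		caPEM, err := os.ReadFile("ca.pem")
		if err != nil {
			log.Fatal(err)
		}
		caKeyPEM, err := os.ReadFile("ca-key.pem")
		if err != nil {
			log.Fatal(err)
		}
		caBlock, _ := pem.Decode(caPEM)
		keyBlock, _ := pem.Decode(caKeyPEM)
		if caBlock == nil || keyBlock == nil {
			log.Fatal("failed to decode CA PEM input")
		}
		ca, err := x509.ParseCertificate(caBlock.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
		if err != nil {
			log.Fatal(err)
		}
	
		serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-959539-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
			// SANs as logged: 127.0.0.1 192.168.39.71 ha-959539-m02 localhost minikube
			DNSNames:    []string{"ha-959539-m02", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.71")},
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &serverKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}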
	I0924 00:00:38.700010   26218 provision.go:177] copyRemoteCerts
	I0924 00:00:38.700077   26218 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 00:00:38.700106   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHHostname
	I0924 00:00:38.703047   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:38.703677   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:17:08", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:00:29 +0000 UTC Type:0 Mac:52:54:00:7e:17:08 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ha-959539-m02 Clientid:01:52:54:00:7e:17:08}
	I0924 00:00:38.703706   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:38.703938   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHPort
	I0924 00:00:38.704136   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHKeyPath
	I0924 00:00:38.704273   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHUsername
	I0924 00:00:38.704412   26218 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m02/id_rsa Username:docker}
	I0924 00:00:38.790480   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0924 00:00:38.790557   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0924 00:00:38.814753   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0924 00:00:38.814837   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0924 00:00:38.838252   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0924 00:00:38.838325   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 00:00:38.861203   26218 provision.go:87] duration metric: took 411.299288ms to configureAuth
	I0924 00:00:38.861229   26218 buildroot.go:189] setting minikube options for container-runtime
	I0924 00:00:38.861474   26218 config.go:182] Loaded profile config "ha-959539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 00:00:38.861569   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHHostname
	I0924 00:00:38.864432   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:38.864889   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:17:08", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:00:29 +0000 UTC Type:0 Mac:52:54:00:7e:17:08 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ha-959539-m02 Clientid:01:52:54:00:7e:17:08}
	I0924 00:00:38.864918   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:38.865150   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHPort
	I0924 00:00:38.865356   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHKeyPath
	I0924 00:00:38.865560   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHKeyPath
	I0924 00:00:38.865731   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHUsername
	I0924 00:00:38.865903   26218 main.go:141] libmachine: Using SSH client type: native
	I0924 00:00:38.866055   26218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0924 00:00:38.866068   26218 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 00:00:39.108025   26218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 00:00:39.108048   26218 main.go:141] libmachine: Checking connection to Docker...
	I0924 00:00:39.108055   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetURL
	I0924 00:00:39.109415   26218 main.go:141] libmachine: (ha-959539-m02) DBG | Using libvirt version 6000000
	I0924 00:00:39.111778   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:39.112117   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:17:08", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:00:29 +0000 UTC Type:0 Mac:52:54:00:7e:17:08 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ha-959539-m02 Clientid:01:52:54:00:7e:17:08}
	I0924 00:00:39.112136   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:39.112442   26218 main.go:141] libmachine: Docker is up and running!
	I0924 00:00:39.112459   26218 main.go:141] libmachine: Reticulating splines...
	I0924 00:00:39.112465   26218 client.go:171] duration metric: took 24.035864378s to LocalClient.Create
	I0924 00:00:39.112488   26218 start.go:167] duration metric: took 24.035928123s to libmachine.API.Create "ha-959539"
	I0924 00:00:39.112505   26218 start.go:293] postStartSetup for "ha-959539-m02" (driver="kvm2")
	I0924 00:00:39.112530   26218 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 00:00:39.112552   26218 main.go:141] libmachine: (ha-959539-m02) Calling .DriverName
	I0924 00:00:39.112758   26218 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 00:00:39.112780   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHHostname
	I0924 00:00:39.115333   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:39.115725   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:17:08", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:00:29 +0000 UTC Type:0 Mac:52:54:00:7e:17:08 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ha-959539-m02 Clientid:01:52:54:00:7e:17:08}
	I0924 00:00:39.115753   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:39.115918   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHPort
	I0924 00:00:39.116088   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHKeyPath
	I0924 00:00:39.116213   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHUsername
	I0924 00:00:39.116357   26218 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m02/id_rsa Username:docker}
	I0924 00:00:39.202485   26218 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 00:00:39.206952   26218 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 00:00:39.206985   26218 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/addons for local assets ...
	I0924 00:00:39.207071   26218 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/files for local assets ...
	I0924 00:00:39.207148   26218 filesync.go:149] local asset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> 147932.pem in /etc/ssl/certs
	I0924 00:00:39.207163   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> /etc/ssl/certs/147932.pem
	I0924 00:00:39.207242   26218 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 00:00:39.216574   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /etc/ssl/certs/147932.pem (1708 bytes)
	I0924 00:00:39.239506   26218 start.go:296] duration metric: took 126.985038ms for postStartSetup
	I0924 00:00:39.239558   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetConfigRaw
	I0924 00:00:39.240153   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetIP
	I0924 00:00:39.242816   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:39.243178   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:17:08", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:00:29 +0000 UTC Type:0 Mac:52:54:00:7e:17:08 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ha-959539-m02 Clientid:01:52:54:00:7e:17:08}
	I0924 00:00:39.243207   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:39.243507   26218 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/config.json ...
	I0924 00:00:39.243767   26218 start.go:128] duration metric: took 24.186030679s to createHost
	I0924 00:00:39.243797   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHHostname
	I0924 00:00:39.246320   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:39.246794   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:17:08", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:00:29 +0000 UTC Type:0 Mac:52:54:00:7e:17:08 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ha-959539-m02 Clientid:01:52:54:00:7e:17:08}
	I0924 00:00:39.246819   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:39.246947   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHPort
	I0924 00:00:39.247124   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHKeyPath
	I0924 00:00:39.247283   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHKeyPath
	I0924 00:00:39.247416   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHUsername
	I0924 00:00:39.247561   26218 main.go:141] libmachine: Using SSH client type: native
	I0924 00:00:39.247714   26218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0924 00:00:39.247724   26218 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 00:00:39.360845   26218 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727136039.320054599
	
	I0924 00:00:39.360875   26218 fix.go:216] guest clock: 1727136039.320054599
	I0924 00:00:39.360884   26218 fix.go:229] Guest: 2024-09-24 00:00:39.320054599 +0000 UTC Remote: 2024-09-24 00:00:39.243782701 +0000 UTC m=+72.471728258 (delta=76.271898ms)
	I0924 00:00:39.360910   26218 fix.go:200] guest clock delta is within tolerance: 76.271898ms
	I0924 00:00:39.360916   26218 start.go:83] releasing machines lock for "ha-959539-m02", held for 24.303253954s
	I0924 00:00:39.360955   26218 main.go:141] libmachine: (ha-959539-m02) Calling .DriverName
	I0924 00:00:39.361201   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetIP
	I0924 00:00:39.363900   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:39.364402   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:17:08", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:00:29 +0000 UTC Type:0 Mac:52:54:00:7e:17:08 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ha-959539-m02 Clientid:01:52:54:00:7e:17:08}
	I0924 00:00:39.364444   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:39.366881   26218 out.go:177] * Found network options:
	I0924 00:00:39.368856   26218 out.go:177]   - NO_PROXY=192.168.39.231
	W0924 00:00:39.370661   26218 proxy.go:119] fail to check proxy env: Error ip not in block
	I0924 00:00:39.370699   26218 main.go:141] libmachine: (ha-959539-m02) Calling .DriverName
	I0924 00:00:39.371263   26218 main.go:141] libmachine: (ha-959539-m02) Calling .DriverName
	I0924 00:00:39.371455   26218 main.go:141] libmachine: (ha-959539-m02) Calling .DriverName
	I0924 00:00:39.371538   26218 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 00:00:39.371594   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHHostname
	W0924 00:00:39.371611   26218 proxy.go:119] fail to check proxy env: Error ip not in block
	I0924 00:00:39.371685   26218 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 00:00:39.371706   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHHostname
	I0924 00:00:39.374357   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:39.374663   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:39.374694   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:17:08", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:00:29 +0000 UTC Type:0 Mac:52:54:00:7e:17:08 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ha-959539-m02 Clientid:01:52:54:00:7e:17:08}
	I0924 00:00:39.374712   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:39.374850   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHPort
	I0924 00:00:39.375045   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHKeyPath
	I0924 00:00:39.375085   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:17:08", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:00:29 +0000 UTC Type:0 Mac:52:54:00:7e:17:08 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ha-959539-m02 Clientid:01:52:54:00:7e:17:08}
	I0924 00:00:39.375111   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:39.375202   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHUsername
	I0924 00:00:39.375362   26218 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m02/id_rsa Username:docker}
	I0924 00:00:39.375377   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHPort
	I0924 00:00:39.375561   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHKeyPath
	I0924 00:00:39.375696   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHUsername
	I0924 00:00:39.375813   26218 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m02/id_rsa Username:docker}
	I0924 00:00:39.627921   26218 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 00:00:39.633495   26218 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 00:00:39.633553   26218 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 00:00:39.648951   26218 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 00:00:39.648983   26218 start.go:495] detecting cgroup driver to use...
	I0924 00:00:39.649040   26218 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 00:00:39.665083   26218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 00:00:39.679257   26218 docker.go:217] disabling cri-docker service (if available) ...
	I0924 00:00:39.679308   26218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 00:00:39.692687   26218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 00:00:39.705979   26218 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 00:00:39.817630   26218 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 00:00:39.947466   26218 docker.go:233] disabling docker service ...
	I0924 00:00:39.947532   26218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 00:00:39.969264   26218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 00:00:39.982704   26218 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 00:00:40.112775   26218 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 00:00:40.227163   26218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 00:00:40.240677   26218 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 00:00:40.258433   26218 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 00:00:40.258483   26218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:00:40.268957   26218 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 00:00:40.269028   26218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:00:40.279413   26218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:00:40.289512   26218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:00:40.299715   26218 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 00:00:40.310010   26218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:00:40.320219   26218 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:00:40.336748   26218 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:00:40.346864   26218 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 00:00:40.355761   26218 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 00:00:40.355825   26218 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 00:00:40.368724   26218 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 00:00:40.378522   26218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 00:00:40.486107   26218 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 00:00:40.577907   26218 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 00:00:40.577981   26218 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 00:00:40.582555   26218 start.go:563] Will wait 60s for crictl version
	I0924 00:00:40.582622   26218 ssh_runner.go:195] Run: which crictl
	I0924 00:00:40.586219   26218 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 00:00:40.622719   26218 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 00:00:40.622812   26218 ssh_runner.go:195] Run: crio --version
	I0924 00:00:40.650450   26218 ssh_runner.go:195] Run: crio --version
	I0924 00:00:40.681082   26218 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 00:00:40.682576   26218 out.go:177]   - env NO_PROXY=192.168.39.231
	I0924 00:00:40.683809   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetIP
	I0924 00:00:40.686666   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:40.687065   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:17:08", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:00:29 +0000 UTC Type:0 Mac:52:54:00:7e:17:08 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ha-959539-m02 Clientid:01:52:54:00:7e:17:08}
	I0924 00:00:40.687087   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:40.687306   26218 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0924 00:00:40.691475   26218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 00:00:40.703474   26218 mustload.go:65] Loading cluster: ha-959539
	I0924 00:00:40.703695   26218 config.go:182] Loaded profile config "ha-959539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 00:00:40.703966   26218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:00:40.704003   26218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:00:40.718859   26218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40045
	I0924 00:00:40.719296   26218 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:00:40.719825   26218 main.go:141] libmachine: Using API Version  1
	I0924 00:00:40.719845   26218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:00:40.720145   26218 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:00:40.720370   26218 main.go:141] libmachine: (ha-959539) Calling .GetState
	I0924 00:00:40.721815   26218 host.go:66] Checking if "ha-959539" exists ...
	I0924 00:00:40.722094   26218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:00:40.722128   26218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:00:40.736945   26218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43023
	I0924 00:00:40.737421   26218 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:00:40.737905   26218 main.go:141] libmachine: Using API Version  1
	I0924 00:00:40.737924   26218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:00:40.738222   26218 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:00:40.738511   26218 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0924 00:00:40.738689   26218 certs.go:68] Setting up /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539 for IP: 192.168.39.71
	I0924 00:00:40.738704   26218 certs.go:194] generating shared ca certs ...
	I0924 00:00:40.738719   26218 certs.go:226] acquiring lock for ca certs: {Name:mk91de42c259e96c17f08dd58c70c00821bd0735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:00:40.738861   26218 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key
	I0924 00:00:40.738903   26218 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key
	I0924 00:00:40.738915   26218 certs.go:256] generating profile certs ...
	I0924 00:00:40.738991   26218 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/client.key
	I0924 00:00:40.739018   26218 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key.b2e74be0
	I0924 00:00:40.739035   26218 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt.b2e74be0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.231 192.168.39.71 192.168.39.254]
	I0924 00:00:41.143984   26218 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt.b2e74be0 ...
	I0924 00:00:41.144014   26218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt.b2e74be0: {Name:mk20b6843b0401b0c56e7890c984fa68d261314f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:00:41.144175   26218 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key.b2e74be0 ...
	I0924 00:00:41.144188   26218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key.b2e74be0: {Name:mk7575fb7ddfde936c86d46545e958478f16edb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:00:41.144260   26218 certs.go:381] copying /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt.b2e74be0 -> /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt
	I0924 00:00:41.144430   26218 certs.go:385] copying /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key.b2e74be0 -> /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key
	I0924 00:00:41.144555   26218 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.key
	I0924 00:00:41.144571   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0924 00:00:41.144584   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0924 00:00:41.144594   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0924 00:00:41.144605   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0924 00:00:41.144615   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0924 00:00:41.144625   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0924 00:00:41.144635   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0924 00:00:41.144645   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0924 00:00:41.144688   26218 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem (1338 bytes)
	W0924 00:00:41.144720   26218 certs.go:480] ignoring /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793_empty.pem, impossibly tiny 0 bytes
	I0924 00:00:41.144729   26218 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem (1675 bytes)
	I0924 00:00:41.144749   26218 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem (1082 bytes)
	I0924 00:00:41.144772   26218 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem (1123 bytes)
	I0924 00:00:41.144793   26218 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem (1679 bytes)
	I0924 00:00:41.144829   26218 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem (1708 bytes)
	I0924 00:00:41.144853   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0924 00:00:41.144868   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem -> /usr/share/ca-certificates/14793.pem
	I0924 00:00:41.144880   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> /usr/share/ca-certificates/147932.pem
	I0924 00:00:41.144915   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0924 00:00:41.148030   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:00:41.148427   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0924 00:00:41.148454   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:00:41.148614   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0924 00:00:41.148808   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0924 00:00:41.149000   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0924 00:00:41.149135   26218 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/id_rsa Username:docker}
	I0924 00:00:41.228803   26218 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0924 00:00:41.233988   26218 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0924 00:00:41.244943   26218 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0924 00:00:41.249126   26218 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0924 00:00:41.259697   26218 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0924 00:00:41.263836   26218 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0924 00:00:41.275144   26218 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0924 00:00:41.279454   26218 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0924 00:00:41.290396   26218 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0924 00:00:41.295094   26218 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0924 00:00:41.307082   26218 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0924 00:00:41.310877   26218 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0924 00:00:41.325438   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 00:00:41.350629   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 00:00:41.374907   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 00:00:41.399716   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0924 00:00:41.424061   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0924 00:00:41.447992   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 00:00:41.471662   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 00:00:41.494955   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0924 00:00:41.517872   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 00:00:41.540286   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem --> /usr/share/ca-certificates/14793.pem (1338 bytes)
	I0924 00:00:41.563177   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /usr/share/ca-certificates/147932.pem (1708 bytes)
	I0924 00:00:41.585906   26218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0924 00:00:41.601283   26218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0924 00:00:41.617635   26218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0924 00:00:41.633218   26218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0924 00:00:41.648995   26218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0924 00:00:41.664675   26218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0924 00:00:41.680596   26218 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0924 00:00:41.696250   26218 ssh_runner.go:195] Run: openssl version
	I0924 00:00:41.701694   26218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 00:00:41.711789   26218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 00:00:41.716030   26218 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0924 00:00:41.716101   26218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 00:00:41.721933   26218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 00:00:41.732158   26218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14793.pem && ln -fs /usr/share/ca-certificates/14793.pem /etc/ssl/certs/14793.pem"
	I0924 00:00:41.742443   26218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14793.pem
	I0924 00:00:41.746788   26218 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 23:55 /usr/share/ca-certificates/14793.pem
	I0924 00:00:41.746839   26218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14793.pem
	I0924 00:00:41.752121   26218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14793.pem /etc/ssl/certs/51391683.0"
	I0924 00:00:41.763012   26218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147932.pem && ln -fs /usr/share/ca-certificates/147932.pem /etc/ssl/certs/147932.pem"
	I0924 00:00:41.774793   26218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147932.pem
	I0924 00:00:41.779310   26218 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 23:55 /usr/share/ca-certificates/147932.pem
	I0924 00:00:41.779366   26218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147932.pem
	I0924 00:00:41.784990   26218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147932.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 00:00:41.795333   26218 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 00:00:41.799293   26218 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0924 00:00:41.799344   26218 kubeadm.go:934] updating node {m02 192.168.39.71 8443 v1.31.1 crio true true} ...
	I0924 00:00:41.799409   26218 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-959539-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.71
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-959539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 00:00:41.799432   26218 kube-vip.go:115] generating kube-vip config ...
	I0924 00:00:41.799464   26218 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0924 00:00:41.816587   26218 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0924 00:00:41.816663   26218 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
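	(Editor's illustrative aside: the static pod manifest generated above runs kube-vip on each control-plane node, pinning the HA virtual IP 192.168.39.254 to whichever node holds the plndr-cp-lock lease and load-balancing port 8443 across control planes. The sketch below is not minikube's actual kube-vip.go code; it only shows, under assumed names such as vipParams and manifestTmpl, how a manifest like this could be templated with Go's text/template.)

```go
// Minimal sketch only: render a kube-vip static pod manifest from a VIP,
// port and interface. vipParams and manifestTmpl are assumptions for this
// example, not minikube internals.
package main

import (
	"os"
	"text/template"
)

// vipParams holds the values substituted into the manifest skeleton.
type vipParams struct {
	VIP       string // control-plane virtual IP, e.g. 192.168.39.254
	Port      string // API server port exposed on the VIP
	Interface string // host interface kube-vip uses for ARP announcements
}

const manifestTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    args: ["manager"]
    env:
    - {name: vip_interface, value: {{.Interface}}}
    - {name: port, value: "{{.Port}}"}
    - {name: address, value: {{.VIP}}}
    - {name: cp_enable, value: "true"}
    - {name: vip_leaderelection, value: "true"}
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifestTmpl))
	params := vipParams{VIP: "192.168.39.254", Port: "8443", Interface: "eth0"}
	// A real generator would write the rendered manifest to
	// /etc/kubernetes/manifests/kube-vip.yaml on the node; here it goes to stdout.
	if err := t.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}
```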
	I0924 00:00:41.816743   26218 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 00:00:41.827548   26218 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0924 00:00:41.827613   26218 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0924 00:00:41.837289   26218 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0924 00:00:41.837325   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0924 00:00:41.837335   26218 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19696-7623/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0924 00:00:41.837374   26218 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0924 00:00:41.837335   26218 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19696-7623/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0924 00:00:41.841429   26218 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0924 00:00:41.841451   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0924 00:00:42.671785   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0924 00:00:42.671868   26218 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0924 00:00:42.676727   26218 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0924 00:00:42.676769   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0924 00:00:42.782086   26218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 00:00:42.829038   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0924 00:00:42.829147   26218 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0924 00:00:42.840769   26218 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0924 00:00:42.840809   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0924 00:00:43.263339   26218 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0924 00:00:43.276175   26218 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0924 00:00:43.295973   26218 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 00:00:43.314983   26218 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0924 00:00:43.331751   26218 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0924 00:00:43.335923   26218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 00:00:43.347682   26218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 00:00:43.465742   26218 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 00:00:43.485298   26218 host.go:66] Checking if "ha-959539" exists ...
	I0924 00:00:43.485784   26218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:00:43.485844   26218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:00:43.501576   26218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46265
	I0924 00:00:43.502143   26218 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:00:43.502637   26218 main.go:141] libmachine: Using API Version  1
	I0924 00:00:43.502661   26218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:00:43.502992   26218 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:00:43.503177   26218 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0924 00:00:43.503343   26218 start.go:317] joinCluster: &{Name:ha-959539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-959539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.71 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 00:00:43.503440   26218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0924 00:00:43.503454   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0924 00:00:43.506923   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:00:43.507450   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0924 00:00:43.507479   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:00:43.507654   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0924 00:00:43.507814   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0924 00:00:43.507940   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0924 00:00:43.508061   26218 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/id_rsa Username:docker}
	I0924 00:00:43.662724   26218 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.71 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 00:00:43.662763   26218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token pid2mx.knnb3pqsxosow7jx --discovery-token-ca-cert-hash sha256:2d4dd9f37d73db6513005d0e16c5a35989d3b102eed8cac0bf67b360d3396aa8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-959539-m02 --control-plane --apiserver-advertise-address=192.168.39.71 --apiserver-bind-port=8443"
	I0924 00:01:07.367829   26218 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token pid2mx.knnb3pqsxosow7jx --discovery-token-ca-cert-hash sha256:2d4dd9f37d73db6513005d0e16c5a35989d3b102eed8cac0bf67b360d3396aa8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-959539-m02 --control-plane --apiserver-advertise-address=192.168.39.71 --apiserver-bind-port=8443": (23.705046169s)
	I0924 00:01:07.367865   26218 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0924 00:01:07.953375   26218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-959539-m02 minikube.k8s.io/updated_at=2024_09_24T00_01_07_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c minikube.k8s.io/name=ha-959539 minikube.k8s.io/primary=false
	I0924 00:01:08.091888   26218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-959539-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0924 00:01:08.215534   26218 start.go:319] duration metric: took 24.71218473s to joinCluster
	I0924 00:01:08.215627   26218 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.71 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 00:01:08.215925   26218 config.go:182] Loaded profile config "ha-959539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 00:01:08.218104   26218 out.go:177] * Verifying Kubernetes components...
	I0924 00:01:08.219304   26218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 00:01:08.515326   26218 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 00:01:08.536625   26218 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 00:01:08.536894   26218 kapi.go:59] client config for ha-959539: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/client.crt", KeyFile:"/home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/client.key", CAFile:"/home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0924 00:01:08.536951   26218 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.231:8443
	I0924 00:01:08.537167   26218 node_ready.go:35] waiting up to 6m0s for node "ha-959539-m02" to be "Ready" ...
	I0924 00:01:08.537285   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:08.537301   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:08.537312   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:08.537318   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:08.545839   26218 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0924 00:01:09.037697   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:09.037724   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:09.037735   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:09.037744   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:09.045511   26218 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0924 00:01:09.538147   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:09.538175   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:09.538188   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:09.538195   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:09.545313   26218 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0924 00:01:10.038238   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:10.038262   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:10.038270   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:10.038274   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:10.041715   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:10.538175   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:10.538205   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:10.538219   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:10.538224   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:10.541872   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:10.542370   26218 node_ready.go:53] node "ha-959539-m02" has status "Ready":"False"
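	(Editor's illustrative aside: the repeated GET requests above are minikube polling the new node object until its Ready condition turns true, with a 6m0s budget as stated at node_ready.go:35. The snippet below is not minikube's own implementation; it is a hedged client-go sketch of an equivalent wait, where waitForNodeReady is a hypothetical helper and the kubeconfig path and node name are taken from the log.)

```go
// Minimal sketch only: wait for a node's Ready condition using client-go,
// polling every 500ms for up to 6 minutes.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodeReady polls the node until its Ready condition is True or the
// timeout is exhausted, mirroring the wait visible in the log above.
func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // tolerate transient API errors and keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19696-7623/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForNodeReady(context.Background(), cs, "ha-959539-m02"); err != nil {
		panic(err)
	}
	fmt.Println(`node "ha-959539-m02" is Ready`)
}
```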
	I0924 00:01:11.037630   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:11.037679   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:11.037691   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:11.037696   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:11.041245   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:11.538259   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:11.538294   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:11.538302   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:11.538307   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:11.541611   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:12.038188   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:12.038209   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:12.038216   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:12.038221   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:12.041674   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:12.537618   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:12.537637   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:12.537645   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:12.537655   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:12.541319   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:13.037995   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:13.038016   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:13.038025   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:13.038028   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:13.041345   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:13.042019   26218 node_ready.go:53] node "ha-959539-m02" has status "Ready":"False"
	I0924 00:01:13.537769   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:13.537794   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:13.537805   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:13.537811   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:13.541685   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:14.037855   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:14.037878   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:14.037887   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:14.037891   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:14.288753   26218 round_trippers.go:574] Response Status: 200 OK in 250 milliseconds
	I0924 00:01:14.538102   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:14.538126   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:14.538137   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:14.538145   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:14.541469   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:15.037484   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:15.037516   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:15.037537   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:15.037541   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:15.040833   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:15.537646   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:15.537676   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:15.537694   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:15.537700   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:15.541088   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:15.541719   26218 node_ready.go:53] node "ha-959539-m02" has status "Ready":"False"
	I0924 00:01:16.037867   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:16.037898   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:16.037910   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:16.037916   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:16.041934   26218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 00:01:16.537983   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:16.538008   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:16.538018   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:16.538026   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:16.542888   26218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 00:01:17.037795   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:17.037815   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:17.037823   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:17.037826   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:17.040833   26218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 00:01:17.537691   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:17.537714   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:17.537721   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:17.537727   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:17.540858   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:18.037970   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:18.037995   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:18.038031   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:18.038036   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:18.041329   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:18.042104   26218 node_ready.go:53] node "ha-959539-m02" has status "Ready":"False"
	I0924 00:01:18.537909   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:18.537934   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:18.537947   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:18.537953   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:18.541524   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:19.037353   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:19.037406   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:19.037417   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:19.037421   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:19.040693   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:19.537691   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:19.537713   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:19.537721   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:19.537725   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:19.541362   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:20.038258   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:20.038281   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:20.038289   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:20.038293   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:20.041505   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:20.042205   26218 node_ready.go:53] node "ha-959539-m02" has status "Ready":"False"
	I0924 00:01:20.538173   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:20.538196   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:20.538204   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:20.538208   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:20.541444   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:21.038308   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:21.038332   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:21.038340   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:21.038345   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:21.041591   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:21.537466   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:21.537490   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:21.537498   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:21.537507   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:21.541243   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:22.037776   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:22.037798   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:22.037806   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:22.037809   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:22.041584   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:22.537387   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:22.537410   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:22.537419   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:22.537423   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:22.540436   26218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 00:01:22.540915   26218 node_ready.go:53] node "ha-959539-m02" has status "Ready":"False"
	I0924 00:01:23.038376   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:23.038396   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:23.038404   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:23.038408   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:23.042386   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:23.537841   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:23.537863   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:23.537871   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:23.537876   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:23.540735   26218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 00:01:24.037766   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:24.037791   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:24.037800   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:24.037805   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:24.041574   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:24.537636   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:24.537662   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:24.537674   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:24.537679   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:24.540714   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:24.541302   26218 node_ready.go:53] node "ha-959539-m02" has status "Ready":"False"
	I0924 00:01:25.037447   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:25.037470   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:25.037487   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:25.037491   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:25.040959   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:25.538316   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:25.538358   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:25.538366   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:25.538370   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:25.542089   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:26.037942   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:26.037965   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:26.037972   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:26.037977   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:26.041187   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:26.538316   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:26.538337   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:26.538344   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:26.538347   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:26.541682   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:26.542279   26218 node_ready.go:53] node "ha-959539-m02" has status "Ready":"False"
	I0924 00:01:27.037486   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:27.037511   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:27.037519   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:27.037523   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:27.040661   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:27.041287   26218 node_ready.go:49] node "ha-959539-m02" has status "Ready":"True"
	I0924 00:01:27.041311   26218 node_ready.go:38] duration metric: took 18.504110454s for node "ha-959539-m02" to be "Ready" ...
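
The ~500ms polling loop above is the node_ready wait: minikube repeatedly GETs /api/v1/nodes/ha-959539-m02 until its Ready condition turns True. As an illustration only (not minikube's own code), a minimal client-go sketch of that kind of check could look like the following; the kubeconfig path is an assumed placeholder and the node name is taken from the log.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady reports whether the node's NodeReady condition is True.
func nodeIsReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig path for the sketch; the logged run talks to the
	// apiserver endpoint directly.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll roughly every 500ms, mirroring the cadence visible in the log.
	for {
		n, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-959539-m02", metav1.GetOptions{})
		if err == nil && nodeIsReady(n) {
			fmt.Println("node ha-959539-m02 is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}
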
	I0924 00:01:27.041320   26218 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 00:01:27.041412   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods
	I0924 00:01:27.041422   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:27.041429   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:27.041433   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:27.045587   26218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 00:01:27.053524   26218 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nkbzw" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:27.053610   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-nkbzw
	I0924 00:01:27.053618   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:27.053626   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:27.053630   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:27.056737   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:27.057414   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:01:27.057431   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:27.057440   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:27.057448   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:27.059974   26218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 00:01:27.060671   26218 pod_ready.go:93] pod "coredns-7c65d6cfc9-nkbzw" in "kube-system" namespace has status "Ready":"True"
	I0924 00:01:27.060693   26218 pod_ready.go:82] duration metric: took 7.143278ms for pod "coredns-7c65d6cfc9-nkbzw" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:27.060705   26218 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ss8lg" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:27.060770   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ss8lg
	I0924 00:01:27.060779   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:27.060786   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:27.060789   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:27.063296   26218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 00:01:27.064025   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:01:27.064042   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:27.064052   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:27.064057   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:27.066509   26218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 00:01:27.067043   26218 pod_ready.go:93] pod "coredns-7c65d6cfc9-ss8lg" in "kube-system" namespace has status "Ready":"True"
	I0924 00:01:27.067072   26218 pod_ready.go:82] duration metric: took 6.358417ms for pod "coredns-7c65d6cfc9-ss8lg" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:27.067085   26218 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-959539" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:27.067169   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/etcd-ha-959539
	I0924 00:01:27.067180   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:27.067191   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:27.067197   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:27.069632   26218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 00:01:27.070349   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:01:27.070365   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:27.070372   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:27.070376   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:27.072726   26218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 00:01:27.073202   26218 pod_ready.go:93] pod "etcd-ha-959539" in "kube-system" namespace has status "Ready":"True"
	I0924 00:01:27.073221   26218 pod_ready.go:82] duration metric: took 6.128232ms for pod "etcd-ha-959539" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:27.073233   26218 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-959539-m02" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:27.073304   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/etcd-ha-959539-m02
	I0924 00:01:27.073314   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:27.073325   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:27.073334   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:27.075606   26218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 00:01:27.076170   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:27.076186   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:27.076196   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:27.076203   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:27.078974   26218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 00:01:27.079404   26218 pod_ready.go:93] pod "etcd-ha-959539-m02" in "kube-system" namespace has status "Ready":"True"
	I0924 00:01:27.079423   26218 pod_ready.go:82] duration metric: took 6.178632ms for pod "etcd-ha-959539-m02" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:27.079441   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-959539" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:27.237846   26218 request.go:632] Waited for 158.344773ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-959539
	I0924 00:01:27.237906   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-959539
	I0924 00:01:27.237912   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:27.237919   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:27.237923   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:27.241325   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:27.438393   26218 request.go:632] Waited for 196.447833ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:01:27.438479   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:01:27.438489   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:27.438501   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:27.438509   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:27.447385   26218 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0924 00:01:27.447843   26218 pod_ready.go:93] pod "kube-apiserver-ha-959539" in "kube-system" namespace has status "Ready":"True"
	I0924 00:01:27.447861   26218 pod_ready.go:82] duration metric: took 368.411985ms for pod "kube-apiserver-ha-959539" in "kube-system" namespace to be "Ready" ...
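
The "Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's client-side rate limiter, which delays requests once they exceed the configured QPS/Burst (the library defaults are on the order of 5 QPS with a burst of 10). A small sketch of where those knobs live; the values and kubeconfig path below are illustrative assumptions, not what minikube configures.

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newClient builds a clientset with explicit client-side rate limits.
// Raising QPS/Burst reduces the throttling waits seen in the log.
func newClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 20  // example value only
	cfg.Burst = 40 // example value only
	return kubernetes.NewForConfig(cfg)
}

func main() {
	if _, err := newClient("/home/user/.kube/config"); err != nil { // assumed path
		panic(err)
	}
}
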
	I0924 00:01:27.447873   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-959539-m02" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:27.638213   26218 request.go:632] Waited for 190.264015ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-959539-m02
	I0924 00:01:27.638314   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-959539-m02
	I0924 00:01:27.638323   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:27.638331   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:27.638335   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:27.641724   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:27.837671   26218 request.go:632] Waited for 195.307183ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:27.837734   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:27.837741   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:27.837750   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:27.837755   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:27.841548   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:27.842107   26218 pod_ready.go:93] pod "kube-apiserver-ha-959539-m02" in "kube-system" namespace has status "Ready":"True"
	I0924 00:01:27.842125   26218 pod_ready.go:82] duration metric: took 394.244431ms for pod "kube-apiserver-ha-959539-m02" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:27.842138   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-959539" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:28.038308   26218 request.go:632] Waited for 196.100963ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-959539
	I0924 00:01:28.038387   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-959539
	I0924 00:01:28.038399   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:28.038408   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:28.038413   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:28.041906   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:28.238014   26218 request.go:632] Waited for 195.403449ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:01:28.238083   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:01:28.238090   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:28.238099   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:28.238104   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:28.241379   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:28.241947   26218 pod_ready.go:93] pod "kube-controller-manager-ha-959539" in "kube-system" namespace has status "Ready":"True"
	I0924 00:01:28.241968   26218 pod_ready.go:82] duration metric: took 399.822644ms for pod "kube-controller-manager-ha-959539" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:28.241981   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-959539-m02" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:28.438107   26218 request.go:632] Waited for 196.054162ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-959539-m02
	I0924 00:01:28.438177   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-959539-m02
	I0924 00:01:28.438183   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:28.438190   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:28.438194   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:28.441695   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:28.637747   26218 request.go:632] Waited for 195.402574ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:28.637812   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:28.637820   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:28.637829   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:28.637836   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:28.641728   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:28.642165   26218 pod_ready.go:93] pod "kube-controller-manager-ha-959539-m02" in "kube-system" namespace has status "Ready":"True"
	I0924 00:01:28.642185   26218 pod_ready.go:82] duration metric: took 400.196003ms for pod "kube-controller-manager-ha-959539-m02" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:28.642198   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2hlqx" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:28.838364   26218 request.go:632] Waited for 196.098536ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2hlqx
	I0924 00:01:28.838423   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2hlqx
	I0924 00:01:28.838429   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:28.838440   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:28.838445   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:28.842064   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:29.038288   26218 request.go:632] Waited for 195.408876ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:29.038362   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:29.038367   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:29.038375   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:29.038380   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:29.041612   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:29.042184   26218 pod_ready.go:93] pod "kube-proxy-2hlqx" in "kube-system" namespace has status "Ready":"True"
	I0924 00:01:29.042207   26218 pod_ready.go:82] duration metric: took 400.003061ms for pod "kube-proxy-2hlqx" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:29.042217   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qzklc" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:29.238379   26218 request.go:632] Waited for 196.098313ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qzklc
	I0924 00:01:29.238479   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qzklc
	I0924 00:01:29.238489   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:29.238500   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:29.238510   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:29.241789   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:29.437898   26218 request.go:632] Waited for 195.388277ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:01:29.437950   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:01:29.437962   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:29.437970   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:29.437982   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:29.441497   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:29.442152   26218 pod_ready.go:93] pod "kube-proxy-qzklc" in "kube-system" namespace has status "Ready":"True"
	I0924 00:01:29.442170   26218 pod_ready.go:82] duration metric: took 399.946814ms for pod "kube-proxy-qzklc" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:29.442179   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-959539" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:29.638206   26218 request.go:632] Waited for 195.95793ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-959539
	I0924 00:01:29.638276   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-959539
	I0924 00:01:29.638285   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:29.638295   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:29.638300   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:29.641784   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:29.837816   26218 request.go:632] Waited for 195.394257ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:01:29.837907   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:01:29.837916   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:29.837926   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:29.837932   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:29.841128   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:29.841709   26218 pod_ready.go:93] pod "kube-scheduler-ha-959539" in "kube-system" namespace has status "Ready":"True"
	I0924 00:01:29.841729   26218 pod_ready.go:82] duration metric: took 399.544232ms for pod "kube-scheduler-ha-959539" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:29.841739   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-959539-m02" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:30.037891   26218 request.go:632] Waited for 196.07048ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-959539-m02
	I0924 00:01:30.037962   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-959539-m02
	I0924 00:01:30.037970   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:30.037980   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:30.037987   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:30.041465   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:30.237753   26218 request.go:632] Waited for 195.552862ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:30.237806   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:30.237812   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:30.237819   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:30.237823   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:30.240960   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:30.241506   26218 pod_ready.go:93] pod "kube-scheduler-ha-959539-m02" in "kube-system" namespace has status "Ready":"True"
	I0924 00:01:30.241525   26218 pod_ready.go:82] duration metric: took 399.780224ms for pod "kube-scheduler-ha-959539-m02" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:30.241536   26218 pod_ready.go:39] duration metric: took 3.200205293s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 00:01:30.241549   26218 api_server.go:52] waiting for apiserver process to appear ...
	I0924 00:01:30.241608   26218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 00:01:30.261278   26218 api_server.go:72] duration metric: took 22.045614649s to wait for apiserver process to appear ...
	I0924 00:01:30.261301   26218 api_server.go:88] waiting for apiserver healthz status ...
	I0924 00:01:30.261325   26218 api_server.go:253] Checking apiserver healthz at https://192.168.39.231:8443/healthz ...
	I0924 00:01:30.266130   26218 api_server.go:279] https://192.168.39.231:8443/healthz returned 200:
	ok
	I0924 00:01:30.266207   26218 round_trippers.go:463] GET https://192.168.39.231:8443/version
	I0924 00:01:30.266217   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:30.266227   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:30.266234   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:30.267131   26218 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0924 00:01:30.267273   26218 api_server.go:141] control plane version: v1.31.1
	I0924 00:01:30.267296   26218 api_server.go:131] duration metric: took 5.986583ms to wait for apiserver health ...
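
The healthz step above is a plain HTTPS GET to /healthz that expects the literal body "ok". A self-contained sketch of such a probe follows; TLS verification is skipped here only to keep the example short, whereas the real check trusts the cluster CA.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Skipping verification only so the sketch is self-contained.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.231:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect: 200 ok
}
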
	I0924 00:01:30.267305   26218 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 00:01:30.437651   26218 request.go:632] Waited for 170.278154ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods
	I0924 00:01:30.437728   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods
	I0924 00:01:30.437734   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:30.437752   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:30.437756   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:30.443228   26218 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0924 00:01:30.447360   26218 system_pods.go:59] 17 kube-system pods found
	I0924 00:01:30.447395   26218 system_pods.go:61] "coredns-7c65d6cfc9-nkbzw" [79bbcdf6-3ae9-4c2f-9d73-a990a069864f] Running
	I0924 00:01:30.447400   26218 system_pods.go:61] "coredns-7c65d6cfc9-ss8lg" [37bd392b-d364-4a64-8fa0-852bb245aedc] Running
	I0924 00:01:30.447404   26218 system_pods.go:61] "etcd-ha-959539" [ff55eab1-1a4f-4adf-85c4-1ed8fa3ad1ec] Running
	I0924 00:01:30.447407   26218 system_pods.go:61] "etcd-ha-959539-m02" [c2dcc425-5c60-4865-9b78-1f2352fd1729] Running
	I0924 00:01:30.447410   26218 system_pods.go:61] "kindnet-cbrj7" [ad74ea31-a1ca-4632-b960-45e6de0fc117] Running
	I0924 00:01:30.447413   26218 system_pods.go:61] "kindnet-qlqss" [365f0414-b74d-42a8-be37-b0c8e03291ac] Running
	I0924 00:01:30.447417   26218 system_pods.go:61] "kube-apiserver-ha-959539" [2e15b758-6534-4b13-be16-42a2fd437b69] Running
	I0924 00:01:30.447420   26218 system_pods.go:61] "kube-apiserver-ha-959539-m02" [0ea9778e-f241-4c0d-9ea7-7e87bd667e10] Running
	I0924 00:01:30.447422   26218 system_pods.go:61] "kube-controller-manager-ha-959539" [b7da7091-f063-4f1a-bd0b-9f7136cd64a0] Running
	I0924 00:01:30.447427   26218 system_pods.go:61] "kube-controller-manager-ha-959539-m02" [29421b14-f01c-42dc-8c7d-b80cb32b9b7c] Running
	I0924 00:01:30.447430   26218 system_pods.go:61] "kube-proxy-2hlqx" [c8e003fb-d3d0-425f-bc83-55122ed658ce] Running
	I0924 00:01:30.447433   26218 system_pods.go:61] "kube-proxy-qzklc" [19af917f-9661-4577-92ed-8fc44b573c64] Running
	I0924 00:01:30.447436   26218 system_pods.go:61] "kube-scheduler-ha-959539" [25a457b1-578e-4e53-8201-e99c001d80bd] Running
	I0924 00:01:30.447439   26218 system_pods.go:61] "kube-scheduler-ha-959539-m02" [716521cc-aa0c-4507-97e5-126dccc95359] Running
	I0924 00:01:30.447442   26218 system_pods.go:61] "kube-vip-ha-959539" [f80705df-80fe-48f0-a65c-b4e414523bdf] Running
	I0924 00:01:30.447445   26218 system_pods.go:61] "kube-vip-ha-959539-m02" [6d055131-a622-4398-8f2f-0146b867e8f8] Running
	I0924 00:01:30.447448   26218 system_pods.go:61] "storage-provisioner" [3b7e0f07-8db9-4473-b3d2-c245c19d655b] Running
	I0924 00:01:30.447453   26218 system_pods.go:74] duration metric: took 180.140131ms to wait for pod list to return data ...
	I0924 00:01:30.447461   26218 default_sa.go:34] waiting for default service account to be created ...
	I0924 00:01:30.637950   26218 request.go:632] Waited for 190.394034ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/default/serviceaccounts
	I0924 00:01:30.638006   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/default/serviceaccounts
	I0924 00:01:30.638012   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:30.638022   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:30.638028   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:30.642084   26218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 00:01:30.642345   26218 default_sa.go:45] found service account: "default"
	I0924 00:01:30.642362   26218 default_sa.go:55] duration metric: took 194.895557ms for default service account to be created ...
	I0924 00:01:30.642370   26218 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 00:01:30.838482   26218 request.go:632] Waited for 196.04318ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods
	I0924 00:01:30.838565   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods
	I0924 00:01:30.838573   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:30.838585   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:30.838597   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:30.842832   26218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 00:01:30.848939   26218 system_pods.go:86] 17 kube-system pods found
	I0924 00:01:30.848970   26218 system_pods.go:89] "coredns-7c65d6cfc9-nkbzw" [79bbcdf6-3ae9-4c2f-9d73-a990a069864f] Running
	I0924 00:01:30.848979   26218 system_pods.go:89] "coredns-7c65d6cfc9-ss8lg" [37bd392b-d364-4a64-8fa0-852bb245aedc] Running
	I0924 00:01:30.848983   26218 system_pods.go:89] "etcd-ha-959539" [ff55eab1-1a4f-4adf-85c4-1ed8fa3ad1ec] Running
	I0924 00:01:30.848988   26218 system_pods.go:89] "etcd-ha-959539-m02" [c2dcc425-5c60-4865-9b78-1f2352fd1729] Running
	I0924 00:01:30.848991   26218 system_pods.go:89] "kindnet-cbrj7" [ad74ea31-a1ca-4632-b960-45e6de0fc117] Running
	I0924 00:01:30.848995   26218 system_pods.go:89] "kindnet-qlqss" [365f0414-b74d-42a8-be37-b0c8e03291ac] Running
	I0924 00:01:30.848999   26218 system_pods.go:89] "kube-apiserver-ha-959539" [2e15b758-6534-4b13-be16-42a2fd437b69] Running
	I0924 00:01:30.849002   26218 system_pods.go:89] "kube-apiserver-ha-959539-m02" [0ea9778e-f241-4c0d-9ea7-7e87bd667e10] Running
	I0924 00:01:30.849006   26218 system_pods.go:89] "kube-controller-manager-ha-959539" [b7da7091-f063-4f1a-bd0b-9f7136cd64a0] Running
	I0924 00:01:30.849009   26218 system_pods.go:89] "kube-controller-manager-ha-959539-m02" [29421b14-f01c-42dc-8c7d-b80cb32b9b7c] Running
	I0924 00:01:30.849014   26218 system_pods.go:89] "kube-proxy-2hlqx" [c8e003fb-d3d0-425f-bc83-55122ed658ce] Running
	I0924 00:01:30.849019   26218 system_pods.go:89] "kube-proxy-qzklc" [19af917f-9661-4577-92ed-8fc44b573c64] Running
	I0924 00:01:30.849023   26218 system_pods.go:89] "kube-scheduler-ha-959539" [25a457b1-578e-4e53-8201-e99c001d80bd] Running
	I0924 00:01:30.849027   26218 system_pods.go:89] "kube-scheduler-ha-959539-m02" [716521cc-aa0c-4507-97e5-126dccc95359] Running
	I0924 00:01:30.849031   26218 system_pods.go:89] "kube-vip-ha-959539" [f80705df-80fe-48f0-a65c-b4e414523bdf] Running
	I0924 00:01:30.849034   26218 system_pods.go:89] "kube-vip-ha-959539-m02" [6d055131-a622-4398-8f2f-0146b867e8f8] Running
	I0924 00:01:30.849039   26218 system_pods.go:89] "storage-provisioner" [3b7e0f07-8db9-4473-b3d2-c245c19d655b] Running
	I0924 00:01:30.849049   26218 system_pods.go:126] duration metric: took 206.674401ms to wait for k8s-apps to be running ...
	I0924 00:01:30.849059   26218 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 00:01:30.849103   26218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 00:01:30.865711   26218 system_svc.go:56] duration metric: took 16.641461ms WaitForService to wait for kubelet
	I0924 00:01:30.865749   26218 kubeadm.go:582] duration metric: took 22.650087813s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 00:01:30.865771   26218 node_conditions.go:102] verifying NodePressure condition ...
	I0924 00:01:31.038193   26218 request.go:632] Waited for 172.328437ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes
	I0924 00:01:31.038258   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes
	I0924 00:01:31.038266   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:31.038277   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:31.038283   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:31.042103   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:31.042950   26218 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 00:01:31.042977   26218 node_conditions.go:123] node cpu capacity is 2
	I0924 00:01:31.042995   26218 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 00:01:31.042998   26218 node_conditions.go:123] node cpu capacity is 2
	I0924 00:01:31.043002   26218 node_conditions.go:105] duration metric: took 177.226673ms to run NodePressure ...
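
The NodePressure summary reports each node's ephemeral-storage and cpu capacity as Kubernetes resource quantities. A short sketch showing how such quantity strings parse and convert, using the values copied from the log lines above:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	storage := resource.MustParse("17734596Ki") // value from node_conditions above
	cpu := resource.MustParse("2")
	fmt.Printf("ephemeral storage: %d bytes\n", storage.Value()) // 17734596 * 1024
	fmt.Printf("cpu: %d cores\n", cpu.Value())
}
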
	I0924 00:01:31.043015   26218 start.go:241] waiting for startup goroutines ...
	I0924 00:01:31.043037   26218 start.go:255] writing updated cluster config ...
	I0924 00:01:31.044981   26218 out.go:201] 
	I0924 00:01:31.046376   26218 config.go:182] Loaded profile config "ha-959539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 00:01:31.046461   26218 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/config.json ...
	I0924 00:01:31.048054   26218 out.go:177] * Starting "ha-959539-m03" control-plane node in "ha-959539" cluster
	I0924 00:01:31.049402   26218 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 00:01:31.049432   26218 cache.go:56] Caching tarball of preloaded images
	I0924 00:01:31.049548   26218 preload.go:172] Found /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0924 00:01:31.049578   26218 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0924 00:01:31.049684   26218 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/config.json ...
	I0924 00:01:31.049896   26218 start.go:360] acquireMachinesLock for ha-959539-m03: {Name:mkdd0eb053efc5a47fd01c8410d7f603dccb8d0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 00:01:31.049951   26218 start.go:364] duration metric: took 34.777µs to acquireMachinesLock for "ha-959539-m03"
	I0924 00:01:31.049975   26218 start.go:93] Provisioning new machine with config: &{Name:ha-959539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-959539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.71 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 00:01:31.050075   26218 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0924 00:01:31.051498   26218 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0924 00:01:31.051601   26218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:01:31.051641   26218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:01:31.066868   26218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36765
	I0924 00:01:31.067407   26218 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:01:31.067856   26218 main.go:141] libmachine: Using API Version  1
	I0924 00:01:31.067875   26218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:01:31.068226   26218 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:01:31.068427   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetMachineName
	I0924 00:01:31.068578   26218 main.go:141] libmachine: (ha-959539-m03) Calling .DriverName
	I0924 00:01:31.068733   26218 start.go:159] libmachine.API.Create for "ha-959539" (driver="kvm2")
	I0924 00:01:31.068760   26218 client.go:168] LocalClient.Create starting
	I0924 00:01:31.068788   26218 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem
	I0924 00:01:31.068825   26218 main.go:141] libmachine: Decoding PEM data...
	I0924 00:01:31.068839   26218 main.go:141] libmachine: Parsing certificate...
	I0924 00:01:31.068884   26218 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem
	I0924 00:01:31.068903   26218 main.go:141] libmachine: Decoding PEM data...
	I0924 00:01:31.068913   26218 main.go:141] libmachine: Parsing certificate...
	I0924 00:01:31.068925   26218 main.go:141] libmachine: Running pre-create checks...
	I0924 00:01:31.068932   26218 main.go:141] libmachine: (ha-959539-m03) Calling .PreCreateCheck
	I0924 00:01:31.069147   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetConfigRaw
	I0924 00:01:31.069509   26218 main.go:141] libmachine: Creating machine...
	I0924 00:01:31.069521   26218 main.go:141] libmachine: (ha-959539-m03) Calling .Create
	I0924 00:01:31.069666   26218 main.go:141] libmachine: (ha-959539-m03) Creating KVM machine...
	I0924 00:01:31.071131   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found existing default KVM network
	I0924 00:01:31.071307   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found existing private KVM network mk-ha-959539
	I0924 00:01:31.071526   26218 main.go:141] libmachine: (ha-959539-m03) Setting up store path in /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m03 ...
	I0924 00:01:31.071549   26218 main.go:141] libmachine: (ha-959539-m03) Building disk image from file:///home/jenkins/minikube-integration/19696-7623/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0924 00:01:31.071644   26218 main.go:141] libmachine: (ha-959539-m03) DBG | I0924 00:01:31.071506   26982 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19696-7623/.minikube
	I0924 00:01:31.071719   26218 main.go:141] libmachine: (ha-959539-m03) Downloading /home/jenkins/minikube-integration/19696-7623/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19696-7623/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0924 00:01:31.300380   26218 main.go:141] libmachine: (ha-959539-m03) DBG | I0924 00:01:31.300219   26982 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m03/id_rsa...
	I0924 00:01:31.604410   26218 main.go:141] libmachine: (ha-959539-m03) DBG | I0924 00:01:31.604272   26982 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m03/ha-959539-m03.rawdisk...
	I0924 00:01:31.604443   26218 main.go:141] libmachine: (ha-959539-m03) DBG | Writing magic tar header
	I0924 00:01:31.604464   26218 main.go:141] libmachine: (ha-959539-m03) DBG | Writing SSH key tar header
	I0924 00:01:31.604477   26218 main.go:141] libmachine: (ha-959539-m03) DBG | I0924 00:01:31.604403   26982 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m03 ...
	I0924 00:01:31.604563   26218 main.go:141] libmachine: (ha-959539-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m03
	I0924 00:01:31.604595   26218 main.go:141] libmachine: (ha-959539-m03) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m03 (perms=drwx------)
	I0924 00:01:31.604614   26218 main.go:141] libmachine: (ha-959539-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623/.minikube/machines
	I0924 00:01:31.604630   26218 main.go:141] libmachine: (ha-959539-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623/.minikube
	I0924 00:01:31.604641   26218 main.go:141] libmachine: (ha-959539-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623
	I0924 00:01:31.604654   26218 main.go:141] libmachine: (ha-959539-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0924 00:01:31.604668   26218 main.go:141] libmachine: (ha-959539-m03) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623/.minikube/machines (perms=drwxr-xr-x)
	I0924 00:01:31.604679   26218 main.go:141] libmachine: (ha-959539-m03) DBG | Checking permissions on dir: /home/jenkins
	I0924 00:01:31.604689   26218 main.go:141] libmachine: (ha-959539-m03) DBG | Checking permissions on dir: /home
	I0924 00:01:31.604701   26218 main.go:141] libmachine: (ha-959539-m03) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623/.minikube (perms=drwxr-xr-x)
	I0924 00:01:31.604718   26218 main.go:141] libmachine: (ha-959539-m03) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623 (perms=drwxrwxr-x)
	I0924 00:01:31.604730   26218 main.go:141] libmachine: (ha-959539-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0924 00:01:31.604746   26218 main.go:141] libmachine: (ha-959539-m03) DBG | Skipping /home - not owner
	I0924 00:01:31.604758   26218 main.go:141] libmachine: (ha-959539-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0924 00:01:31.604771   26218 main.go:141] libmachine: (ha-959539-m03) Creating domain...
	I0924 00:01:31.605736   26218 main.go:141] libmachine: (ha-959539-m03) define libvirt domain using xml: 
	I0924 00:01:31.605756   26218 main.go:141] libmachine: (ha-959539-m03) <domain type='kvm'>
	I0924 00:01:31.605766   26218 main.go:141] libmachine: (ha-959539-m03)   <name>ha-959539-m03</name>
	I0924 00:01:31.605777   26218 main.go:141] libmachine: (ha-959539-m03)   <memory unit='MiB'>2200</memory>
	I0924 00:01:31.605784   26218 main.go:141] libmachine: (ha-959539-m03)   <vcpu>2</vcpu>
	I0924 00:01:31.605794   26218 main.go:141] libmachine: (ha-959539-m03)   <features>
	I0924 00:01:31.605802   26218 main.go:141] libmachine: (ha-959539-m03)     <acpi/>
	I0924 00:01:31.605808   26218 main.go:141] libmachine: (ha-959539-m03)     <apic/>
	I0924 00:01:31.605816   26218 main.go:141] libmachine: (ha-959539-m03)     <pae/>
	I0924 00:01:31.605822   26218 main.go:141] libmachine: (ha-959539-m03)     
	I0924 00:01:31.605829   26218 main.go:141] libmachine: (ha-959539-m03)   </features>
	I0924 00:01:31.605840   26218 main.go:141] libmachine: (ha-959539-m03)   <cpu mode='host-passthrough'>
	I0924 00:01:31.605848   26218 main.go:141] libmachine: (ha-959539-m03)   
	I0924 00:01:31.605857   26218 main.go:141] libmachine: (ha-959539-m03)   </cpu>
	I0924 00:01:31.605887   26218 main.go:141] libmachine: (ha-959539-m03)   <os>
	I0924 00:01:31.605911   26218 main.go:141] libmachine: (ha-959539-m03)     <type>hvm</type>
	I0924 00:01:31.605921   26218 main.go:141] libmachine: (ha-959539-m03)     <boot dev='cdrom'/>
	I0924 00:01:31.605928   26218 main.go:141] libmachine: (ha-959539-m03)     <boot dev='hd'/>
	I0924 00:01:31.605940   26218 main.go:141] libmachine: (ha-959539-m03)     <bootmenu enable='no'/>
	I0924 00:01:31.605950   26218 main.go:141] libmachine: (ha-959539-m03)   </os>
	I0924 00:01:31.605957   26218 main.go:141] libmachine: (ha-959539-m03)   <devices>
	I0924 00:01:31.605968   26218 main.go:141] libmachine: (ha-959539-m03)     <disk type='file' device='cdrom'>
	I0924 00:01:31.605980   26218 main.go:141] libmachine: (ha-959539-m03)       <source file='/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m03/boot2docker.iso'/>
	I0924 00:01:31.606000   26218 main.go:141] libmachine: (ha-959539-m03)       <target dev='hdc' bus='scsi'/>
	I0924 00:01:31.606012   26218 main.go:141] libmachine: (ha-959539-m03)       <readonly/>
	I0924 00:01:31.606020   26218 main.go:141] libmachine: (ha-959539-m03)     </disk>
	I0924 00:01:31.606029   26218 main.go:141] libmachine: (ha-959539-m03)     <disk type='file' device='disk'>
	I0924 00:01:31.606038   26218 main.go:141] libmachine: (ha-959539-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0924 00:01:31.606049   26218 main.go:141] libmachine: (ha-959539-m03)       <source file='/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m03/ha-959539-m03.rawdisk'/>
	I0924 00:01:31.606056   26218 main.go:141] libmachine: (ha-959539-m03)       <target dev='hda' bus='virtio'/>
	I0924 00:01:31.606063   26218 main.go:141] libmachine: (ha-959539-m03)     </disk>
	I0924 00:01:31.606074   26218 main.go:141] libmachine: (ha-959539-m03)     <interface type='network'>
	I0924 00:01:31.606086   26218 main.go:141] libmachine: (ha-959539-m03)       <source network='mk-ha-959539'/>
	I0924 00:01:31.606092   26218 main.go:141] libmachine: (ha-959539-m03)       <model type='virtio'/>
	I0924 00:01:31.606103   26218 main.go:141] libmachine: (ha-959539-m03)     </interface>
	I0924 00:01:31.606118   26218 main.go:141] libmachine: (ha-959539-m03)     <interface type='network'>
	I0924 00:01:31.606130   26218 main.go:141] libmachine: (ha-959539-m03)       <source network='default'/>
	I0924 00:01:31.606140   26218 main.go:141] libmachine: (ha-959539-m03)       <model type='virtio'/>
	I0924 00:01:31.606179   26218 main.go:141] libmachine: (ha-959539-m03)     </interface>
	I0924 00:01:31.606200   26218 main.go:141] libmachine: (ha-959539-m03)     <serial type='pty'>
	I0924 00:01:31.606212   26218 main.go:141] libmachine: (ha-959539-m03)       <target port='0'/>
	I0924 00:01:31.606222   26218 main.go:141] libmachine: (ha-959539-m03)     </serial>
	I0924 00:01:31.606234   26218 main.go:141] libmachine: (ha-959539-m03)     <console type='pty'>
	I0924 00:01:31.606244   26218 main.go:141] libmachine: (ha-959539-m03)       <target type='serial' port='0'/>
	I0924 00:01:31.606252   26218 main.go:141] libmachine: (ha-959539-m03)     </console>
	I0924 00:01:31.606259   26218 main.go:141] libmachine: (ha-959539-m03)     <rng model='virtio'>
	I0924 00:01:31.606268   26218 main.go:141] libmachine: (ha-959539-m03)       <backend model='random'>/dev/random</backend>
	I0924 00:01:31.606286   26218 main.go:141] libmachine: (ha-959539-m03)     </rng>
	I0924 00:01:31.606292   26218 main.go:141] libmachine: (ha-959539-m03)     
	I0924 00:01:31.606297   26218 main.go:141] libmachine: (ha-959539-m03)     
	I0924 00:01:31.606304   26218 main.go:141] libmachine: (ha-959539-m03)   </devices>
	I0924 00:01:31.606310   26218 main.go:141] libmachine: (ha-959539-m03) </domain>
	I0924 00:01:31.606319   26218 main.go:141] libmachine: (ha-959539-m03) 
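The XML dumped above is the libvirt domain definition for the new node. For reference only, a minimal sketch of the same define-and-boot flow using plain libvirt tooling; domain.xml stands in for the generated XML, and the driver actually goes through the libvirt API rather than virsh:

    # Define the domain from the generated XML, start it, and list its NICs.
    # The two MAC addresses logged below belong to the 'default' and
    # 'mk-ha-959539' interfaces respectively.
    virsh --connect qemu:///system define domain.xml
    virsh --connect qemu:///system start ha-959539-m03
    virsh --connect qemu:///system domiflist ha-959539-m03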
	I0924 00:01:31.613294   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:e5:53:3a in network default
	I0924 00:01:31.613858   26218 main.go:141] libmachine: (ha-959539-m03) Ensuring networks are active...
	I0924 00:01:31.613884   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:31.614594   26218 main.go:141] libmachine: (ha-959539-m03) Ensuring network default is active
	I0924 00:01:31.614852   26218 main.go:141] libmachine: (ha-959539-m03) Ensuring network mk-ha-959539 is active
	I0924 00:01:31.615281   26218 main.go:141] libmachine: (ha-959539-m03) Getting domain xml...
	I0924 00:01:31.616154   26218 main.go:141] libmachine: (ha-959539-m03) Creating domain...
	I0924 00:01:32.869701   26218 main.go:141] libmachine: (ha-959539-m03) Waiting to get IP...
	I0924 00:01:32.870597   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:32.871006   26218 main.go:141] libmachine: (ha-959539-m03) DBG | unable to find current IP address of domain ha-959539-m03 in network mk-ha-959539
	I0924 00:01:32.871035   26218 main.go:141] libmachine: (ha-959539-m03) DBG | I0924 00:01:32.870993   26982 retry.go:31] will retry after 233.012319ms: waiting for machine to come up
	I0924 00:01:33.105550   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:33.105977   26218 main.go:141] libmachine: (ha-959539-m03) DBG | unable to find current IP address of domain ha-959539-m03 in network mk-ha-959539
	I0924 00:01:33.106051   26218 main.go:141] libmachine: (ha-959539-m03) DBG | I0924 00:01:33.105911   26982 retry.go:31] will retry after 379.213431ms: waiting for machine to come up
	I0924 00:01:33.486484   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:33.487004   26218 main.go:141] libmachine: (ha-959539-m03) DBG | unable to find current IP address of domain ha-959539-m03 in network mk-ha-959539
	I0924 00:01:33.487032   26218 main.go:141] libmachine: (ha-959539-m03) DBG | I0924 00:01:33.486952   26982 retry.go:31] will retry after 425.287824ms: waiting for machine to come up
	I0924 00:01:33.913409   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:33.913794   26218 main.go:141] libmachine: (ha-959539-m03) DBG | unable to find current IP address of domain ha-959539-m03 in network mk-ha-959539
	I0924 00:01:33.913822   26218 main.go:141] libmachine: (ha-959539-m03) DBG | I0924 00:01:33.913744   26982 retry.go:31] will retry after 517.327433ms: waiting for machine to come up
	I0924 00:01:34.432365   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:34.432967   26218 main.go:141] libmachine: (ha-959539-m03) DBG | unable to find current IP address of domain ha-959539-m03 in network mk-ha-959539
	I0924 00:01:34.432990   26218 main.go:141] libmachine: (ha-959539-m03) DBG | I0924 00:01:34.432933   26982 retry.go:31] will retry after 602.673221ms: waiting for machine to come up
	I0924 00:01:35.036831   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:35.037345   26218 main.go:141] libmachine: (ha-959539-m03) DBG | unable to find current IP address of domain ha-959539-m03 in network mk-ha-959539
	I0924 00:01:35.037375   26218 main.go:141] libmachine: (ha-959539-m03) DBG | I0924 00:01:35.037323   26982 retry.go:31] will retry after 797.600229ms: waiting for machine to come up
	I0924 00:01:35.836744   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:35.837147   26218 main.go:141] libmachine: (ha-959539-m03) DBG | unable to find current IP address of domain ha-959539-m03 in network mk-ha-959539
	I0924 00:01:35.837167   26218 main.go:141] libmachine: (ha-959539-m03) DBG | I0924 00:01:35.837118   26982 retry.go:31] will retry after 961.577188ms: waiting for machine to come up
	I0924 00:01:36.800289   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:36.800667   26218 main.go:141] libmachine: (ha-959539-m03) DBG | unable to find current IP address of domain ha-959539-m03 in network mk-ha-959539
	I0924 00:01:36.800730   26218 main.go:141] libmachine: (ha-959539-m03) DBG | I0924 00:01:36.800639   26982 retry.go:31] will retry after 936.999629ms: waiting for machine to come up
	I0924 00:01:37.740480   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:37.740978   26218 main.go:141] libmachine: (ha-959539-m03) DBG | unable to find current IP address of domain ha-959539-m03 in network mk-ha-959539
	I0924 00:01:37.741002   26218 main.go:141] libmachine: (ha-959539-m03) DBG | I0924 00:01:37.740949   26982 retry.go:31] will retry after 1.346163433s: waiting for machine to come up
	I0924 00:01:39.089423   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:39.089867   26218 main.go:141] libmachine: (ha-959539-m03) DBG | unable to find current IP address of domain ha-959539-m03 in network mk-ha-959539
	I0924 00:01:39.089892   26218 main.go:141] libmachine: (ha-959539-m03) DBG | I0924 00:01:39.089852   26982 retry.go:31] will retry after 1.874406909s: waiting for machine to come up
	I0924 00:01:40.965400   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:40.965872   26218 main.go:141] libmachine: (ha-959539-m03) DBG | unable to find current IP address of domain ha-959539-m03 in network mk-ha-959539
	I0924 00:01:40.965892   26218 main.go:141] libmachine: (ha-959539-m03) DBG | I0924 00:01:40.965827   26982 retry.go:31] will retry after 2.811212351s: waiting for machine to come up
	I0924 00:01:43.780398   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:43.780984   26218 main.go:141] libmachine: (ha-959539-m03) DBG | unable to find current IP address of domain ha-959539-m03 in network mk-ha-959539
	I0924 00:01:43.781006   26218 main.go:141] libmachine: (ha-959539-m03) DBG | I0924 00:01:43.780942   26982 retry.go:31] will retry after 2.831259444s: waiting for machine to come up
	I0924 00:01:46.613330   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:46.613716   26218 main.go:141] libmachine: (ha-959539-m03) DBG | unable to find current IP address of domain ha-959539-m03 in network mk-ha-959539
	I0924 00:01:46.613743   26218 main.go:141] libmachine: (ha-959539-m03) DBG | I0924 00:01:46.613670   26982 retry.go:31] will retry after 4.008768327s: waiting for machine to come up
	I0924 00:01:50.626829   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:50.627309   26218 main.go:141] libmachine: (ha-959539-m03) DBG | unable to find current IP address of domain ha-959539-m03 in network mk-ha-959539
	I0924 00:01:50.627329   26218 main.go:141] libmachine: (ha-959539-m03) DBG | I0924 00:01:50.627284   26982 retry.go:31] will retry after 5.442842747s: waiting for machine to come up
	I0924 00:01:56.073321   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:56.073934   26218 main.go:141] libmachine: (ha-959539-m03) Found IP for machine: 192.168.39.244
	I0924 00:01:56.073959   26218 main.go:141] libmachine: (ha-959539-m03) Reserving static IP address...
	I0924 00:01:56.073972   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has current primary IP address 192.168.39.244 and MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:56.074620   26218 main.go:141] libmachine: (ha-959539-m03) DBG | unable to find host DHCP lease matching {name: "ha-959539-m03", mac: "52:54:00:b3:b3:10", ip: "192.168.39.244"} in network mk-ha-959539
	I0924 00:01:56.148126   26218 main.go:141] libmachine: (ha-959539-m03) DBG | Getting to WaitForSSH function...
	I0924 00:01:56.148154   26218 main.go:141] libmachine: (ha-959539-m03) Reserved static IP address: 192.168.39.244
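The retry loop above polls the libvirt network for a DHCP lease matching the node's MAC address; once a lease appears, its IP is reserved as a static host entry. A rough manual equivalent of that lookup, with the MAC and network name taken from this run:

    # List current leases on the cluster network and filter by the node's MAC;
    # the IP column is what gets reserved (192.168.39.244 here).
    virsh --connect qemu:///system net-dhcp-leases mk-ha-959539 | grep -i 52:54:00:b3:b3:10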
	I0924 00:01:56.148166   26218 main.go:141] libmachine: (ha-959539-m03) Waiting for SSH to be available...
	I0924 00:01:56.150613   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:56.150941   26218 main.go:141] libmachine: (ha-959539-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:b3:b3:10", ip: ""} in network mk-ha-959539
	I0924 00:01:56.150968   26218 main.go:141] libmachine: (ha-959539-m03) DBG | unable to find defined IP address of network mk-ha-959539 interface with MAC address 52:54:00:b3:b3:10
	I0924 00:01:56.151093   26218 main.go:141] libmachine: (ha-959539-m03) DBG | Using SSH client type: external
	I0924 00:01:56.151120   26218 main.go:141] libmachine: (ha-959539-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m03/id_rsa (-rw-------)
	I0924 00:01:56.151154   26218 main.go:141] libmachine: (ha-959539-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 00:01:56.151177   26218 main.go:141] libmachine: (ha-959539-m03) DBG | About to run SSH command:
	I0924 00:01:56.151208   26218 main.go:141] libmachine: (ha-959539-m03) DBG | exit 0
	I0924 00:01:56.154778   26218 main.go:141] libmachine: (ha-959539-m03) DBG | SSH cmd err, output: exit status 255: 
	I0924 00:01:56.154798   26218 main.go:141] libmachine: (ha-959539-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0924 00:01:56.154804   26218 main.go:141] libmachine: (ha-959539-m03) DBG | command : exit 0
	I0924 00:01:56.154809   26218 main.go:141] libmachine: (ha-959539-m03) DBG | err     : exit status 255
	I0924 00:01:56.154815   26218 main.go:141] libmachine: (ha-959539-m03) DBG | output  : 
	I0924 00:01:59.156489   26218 main.go:141] libmachine: (ha-959539-m03) DBG | Getting to WaitForSSH function...
	I0924 00:01:59.159051   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:59.159534   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:b3:10", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:01:45 +0000 UTC Type:0 Mac:52:54:00:b3:b3:10 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-959539-m03 Clientid:01:52:54:00:b3:b3:10}
	I0924 00:01:59.159562   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:59.159701   26218 main.go:141] libmachine: (ha-959539-m03) DBG | Using SSH client type: external
	I0924 00:01:59.159729   26218 main.go:141] libmachine: (ha-959539-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m03/id_rsa (-rw-------)
	I0924 00:01:59.159765   26218 main.go:141] libmachine: (ha-959539-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.244 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 00:01:59.159777   26218 main.go:141] libmachine: (ha-959539-m03) DBG | About to run SSH command:
	I0924 00:01:59.159792   26218 main.go:141] libmachine: (ha-959539-m03) DBG | exit 0
	I0924 00:01:59.281025   26218 main.go:141] libmachine: (ha-959539-m03) DBG | SSH cmd err, output: <nil>: 
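WaitForSSH shells out to the ssh binary with the argv shown in the DBG lines above. Spelled out approximately, using the host and key path from this run (some of the logged options are omitted for brevity):

    # Exit status 255 means ssh itself could not connect (the first attempt,
    # before the guest had an IP); exit status 0 means the guest answered and
    # ran the probe command.
    ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 \
        -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -o IdentitiesOnly=yes \
        -i /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m03/id_rsa \
        -p 22 docker@192.168.39.244 'exit 0'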
	I0924 00:01:59.281279   26218 main.go:141] libmachine: (ha-959539-m03) KVM machine creation complete!
	I0924 00:01:59.281741   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetConfigRaw
	I0924 00:01:59.282322   26218 main.go:141] libmachine: (ha-959539-m03) Calling .DriverName
	I0924 00:01:59.282554   26218 main.go:141] libmachine: (ha-959539-m03) Calling .DriverName
	I0924 00:01:59.282757   26218 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0924 00:01:59.282778   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetState
	I0924 00:01:59.284086   26218 main.go:141] libmachine: Detecting operating system of created instance...
	I0924 00:01:59.284107   26218 main.go:141] libmachine: Waiting for SSH to be available...
	I0924 00:01:59.284112   26218 main.go:141] libmachine: Getting to WaitForSSH function...
	I0924 00:01:59.284118   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHHostname
	I0924 00:01:59.286743   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:59.287263   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:b3:10", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:01:45 +0000 UTC Type:0 Mac:52:54:00:b3:b3:10 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-959539-m03 Clientid:01:52:54:00:b3:b3:10}
	I0924 00:01:59.287293   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:59.287431   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHPort
	I0924 00:01:59.287597   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHKeyPath
	I0924 00:01:59.287746   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHKeyPath
	I0924 00:01:59.287899   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHUsername
	I0924 00:01:59.288060   26218 main.go:141] libmachine: Using SSH client type: native
	I0924 00:01:59.288359   26218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0924 00:01:59.288379   26218 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0924 00:01:59.383651   26218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 00:01:59.383678   26218 main.go:141] libmachine: Detecting the provisioner...
	I0924 00:01:59.383688   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHHostname
	I0924 00:01:59.386650   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:59.387045   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:b3:10", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:01:45 +0000 UTC Type:0 Mac:52:54:00:b3:b3:10 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-959539-m03 Clientid:01:52:54:00:b3:b3:10}
	I0924 00:01:59.387065   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:59.387209   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHPort
	I0924 00:01:59.387419   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHKeyPath
	I0924 00:01:59.387618   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHKeyPath
	I0924 00:01:59.387773   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHUsername
	I0924 00:01:59.387925   26218 main.go:141] libmachine: Using SSH client type: native
	I0924 00:01:59.388113   26218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0924 00:01:59.388127   26218 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0924 00:01:59.485025   26218 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0924 00:01:59.485108   26218 main.go:141] libmachine: found compatible host: buildroot
	I0924 00:01:59.485117   26218 main.go:141] libmachine: Provisioning with buildroot...
	I0924 00:01:59.485124   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetMachineName
	I0924 00:01:59.485390   26218 buildroot.go:166] provisioning hostname "ha-959539-m03"
	I0924 00:01:59.485417   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetMachineName
	I0924 00:01:59.485578   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHHostname
	I0924 00:01:59.487705   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:59.488135   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:b3:10", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:01:45 +0000 UTC Type:0 Mac:52:54:00:b3:b3:10 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-959539-m03 Clientid:01:52:54:00:b3:b3:10}
	I0924 00:01:59.488163   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:59.488390   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHPort
	I0924 00:01:59.488541   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHKeyPath
	I0924 00:01:59.488687   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHKeyPath
	I0924 00:01:59.488842   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHUsername
	I0924 00:01:59.489001   26218 main.go:141] libmachine: Using SSH client type: native
	I0924 00:01:59.489173   26218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0924 00:01:59.489184   26218 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-959539-m03 && echo "ha-959539-m03" | sudo tee /etc/hostname
	I0924 00:01:59.598289   26218 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-959539-m03
	
	I0924 00:01:59.598334   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHHostname
	I0924 00:01:59.601336   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:59.601720   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:b3:10", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:01:45 +0000 UTC Type:0 Mac:52:54:00:b3:b3:10 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-959539-m03 Clientid:01:52:54:00:b3:b3:10}
	I0924 00:01:59.601752   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:59.601887   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHPort
	I0924 00:01:59.602080   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHKeyPath
	I0924 00:01:59.602282   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHKeyPath
	I0924 00:01:59.602440   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHUsername
	I0924 00:01:59.602632   26218 main.go:141] libmachine: Using SSH client type: native
	I0924 00:01:59.602835   26218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0924 00:01:59.602851   26218 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-959539-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-959539-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-959539-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 00:01:59.709318   26218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
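The two commands above set the hostname and pin it in /etc/hosts. A quick way to check the result on the guest, as a sketch over the same SSH session:

    hostname                        # should print ha-959539-m03
    grep ha-959539-m03 /etc/hosts   # should show the 127.0.1.1 mapping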
	I0924 00:01:59.709354   26218 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19696-7623/.minikube CaCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19696-7623/.minikube}
	I0924 00:01:59.709368   26218 buildroot.go:174] setting up certificates
	I0924 00:01:59.709376   26218 provision.go:84] configureAuth start
	I0924 00:01:59.709384   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetMachineName
	I0924 00:01:59.709684   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetIP
	I0924 00:01:59.712295   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:59.712675   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:b3:10", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:01:45 +0000 UTC Type:0 Mac:52:54:00:b3:b3:10 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-959539-m03 Clientid:01:52:54:00:b3:b3:10}
	I0924 00:01:59.712707   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:59.712820   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHHostname
	I0924 00:01:59.715173   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:59.715598   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:b3:10", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:01:45 +0000 UTC Type:0 Mac:52:54:00:b3:b3:10 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-959539-m03 Clientid:01:52:54:00:b3:b3:10}
	I0924 00:01:59.715627   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:59.715766   26218 provision.go:143] copyHostCerts
	I0924 00:01:59.715804   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0924 00:01:59.715840   26218 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem, removing ...
	I0924 00:01:59.715850   26218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0924 00:01:59.715947   26218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem (1082 bytes)
	I0924 00:01:59.716026   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0924 00:01:59.716046   26218 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem, removing ...
	I0924 00:01:59.716054   26218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0924 00:01:59.716080   26218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem (1123 bytes)
	I0924 00:01:59.716129   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0924 00:01:59.716149   26218 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem, removing ...
	I0924 00:01:59.716156   26218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0924 00:01:59.716181   26218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem (1679 bytes)
	I0924 00:01:59.716234   26218 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem org=jenkins.ha-959539-m03 san=[127.0.0.1 192.168.39.244 ha-959539-m03 localhost minikube]
	I0924 00:02:00.004700   26218 provision.go:177] copyRemoteCerts
	I0924 00:02:00.004758   26218 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 00:02:00.004780   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHHostname
	I0924 00:02:00.008103   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:00.008547   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:b3:10", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:01:45 +0000 UTC Type:0 Mac:52:54:00:b3:b3:10 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-959539-m03 Clientid:01:52:54:00:b3:b3:10}
	I0924 00:02:00.008578   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:00.008786   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHPort
	I0924 00:02:00.008992   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHKeyPath
	I0924 00:02:00.009141   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHUsername
	I0924 00:02:00.009273   26218 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m03/id_rsa Username:docker}
	I0924 00:02:00.090471   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0924 00:02:00.090557   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0924 00:02:00.113842   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0924 00:02:00.113915   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0924 00:02:00.136379   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0924 00:02:00.136447   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0924 00:02:00.158911   26218 provision.go:87] duration metric: took 449.525192ms to configureAuth
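configureAuth generates a per-node server certificate with the SANs listed in the provision.go line above and copies the CA and server key pair into /etc/docker on the guest. A hedged way to confirm the SANs landed in the copied cert (may need sudo to read the file):

    # Inspect the Subject Alternative Names of the server cert copied above.
    openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
    # Expected to include 127.0.0.1, 192.168.39.244, ha-959539-m03, localhost, minikube.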
	I0924 00:02:00.158938   26218 buildroot.go:189] setting minikube options for container-runtime
	I0924 00:02:00.159116   26218 config.go:182] Loaded profile config "ha-959539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 00:02:00.159181   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHHostname
	I0924 00:02:00.161958   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:00.162260   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:b3:10", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:01:45 +0000 UTC Type:0 Mac:52:54:00:b3:b3:10 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-959539-m03 Clientid:01:52:54:00:b3:b3:10}
	I0924 00:02:00.162300   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:00.162497   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHPort
	I0924 00:02:00.162693   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHKeyPath
	I0924 00:02:00.162991   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHKeyPath
	I0924 00:02:00.163119   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHUsername
	I0924 00:02:00.163316   26218 main.go:141] libmachine: Using SSH client type: native
	I0924 00:02:00.163504   26218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0924 00:02:00.163521   26218 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 00:02:00.384084   26218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 00:02:00.384116   26218 main.go:141] libmachine: Checking connection to Docker...
	I0924 00:02:00.384137   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetURL
	I0924 00:02:00.385753   26218 main.go:141] libmachine: (ha-959539-m03) DBG | Using libvirt version 6000000
	I0924 00:02:00.388406   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:00.388802   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:b3:10", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:01:45 +0000 UTC Type:0 Mac:52:54:00:b3:b3:10 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-959539-m03 Clientid:01:52:54:00:b3:b3:10}
	I0924 00:02:00.388830   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:00.388972   26218 main.go:141] libmachine: Docker is up and running!
	I0924 00:02:00.389000   26218 main.go:141] libmachine: Reticulating splines...
	I0924 00:02:00.389008   26218 client.go:171] duration metric: took 29.320240775s to LocalClient.Create
	I0924 00:02:00.389034   26218 start.go:167] duration metric: took 29.320301121s to libmachine.API.Create "ha-959539"
	I0924 00:02:00.389045   26218 start.go:293] postStartSetup for "ha-959539-m03" (driver="kvm2")
	I0924 00:02:00.389059   26218 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 00:02:00.389086   26218 main.go:141] libmachine: (ha-959539-m03) Calling .DriverName
	I0924 00:02:00.389316   26218 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 00:02:00.389337   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHHostname
	I0924 00:02:00.391543   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:00.391908   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:b3:10", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:01:45 +0000 UTC Type:0 Mac:52:54:00:b3:b3:10 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-959539-m03 Clientid:01:52:54:00:b3:b3:10}
	I0924 00:02:00.391935   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:00.392055   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHPort
	I0924 00:02:00.392242   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHKeyPath
	I0924 00:02:00.392417   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHUsername
	I0924 00:02:00.392594   26218 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m03/id_rsa Username:docker}
	I0924 00:02:00.471592   26218 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 00:02:00.475678   26218 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 00:02:00.475711   26218 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/addons for local assets ...
	I0924 00:02:00.475777   26218 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/files for local assets ...
	I0924 00:02:00.475847   26218 filesync.go:149] local asset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> 147932.pem in /etc/ssl/certs
	I0924 00:02:00.475857   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> /etc/ssl/certs/147932.pem
	I0924 00:02:00.475939   26218 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 00:02:00.485700   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /etc/ssl/certs/147932.pem (1708 bytes)
	I0924 00:02:00.510312   26218 start.go:296] duration metric: took 121.25155ms for postStartSetup
	I0924 00:02:00.510378   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetConfigRaw
	I0924 00:02:00.511011   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetIP
	I0924 00:02:00.513590   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:00.513900   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:b3:10", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:01:45 +0000 UTC Type:0 Mac:52:54:00:b3:b3:10 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-959539-m03 Clientid:01:52:54:00:b3:b3:10}
	I0924 00:02:00.513916   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:00.514236   26218 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/config.json ...
	I0924 00:02:00.514445   26218 start.go:128] duration metric: took 29.464359711s to createHost
	I0924 00:02:00.514478   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHHostname
	I0924 00:02:00.517098   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:00.517491   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:b3:10", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:01:45 +0000 UTC Type:0 Mac:52:54:00:b3:b3:10 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-959539-m03 Clientid:01:52:54:00:b3:b3:10}
	I0924 00:02:00.517528   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:00.517742   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHPort
	I0924 00:02:00.517933   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHKeyPath
	I0924 00:02:00.518100   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHKeyPath
	I0924 00:02:00.518211   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHUsername
	I0924 00:02:00.518412   26218 main.go:141] libmachine: Using SSH client type: native
	I0924 00:02:00.518622   26218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0924 00:02:00.518636   26218 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 00:02:00.621293   26218 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727136120.603612543
	
	I0924 00:02:00.621339   26218 fix.go:216] guest clock: 1727136120.603612543
	I0924 00:02:00.621351   26218 fix.go:229] Guest: 2024-09-24 00:02:00.603612543 +0000 UTC Remote: 2024-09-24 00:02:00.514464327 +0000 UTC m=+153.742409876 (delta=89.148216ms)
	I0924 00:02:00.621377   26218 fix.go:200] guest clock delta is within tolerance: 89.148216ms
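The clock check above runs `date +%s.%N` on the guest and compares it with the host-side timestamp recorded when the SSH command returned. The delta printed in the log can be reproduced directly from the two values:

    # Guest and host timestamps from this run; the difference is ~0.089 s,
    # which is inside the skew tolerance the fixer accepts.
    echo '1727136120.603612543 - 1727136120.514464327' | bc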
	I0924 00:02:00.621387   26218 start.go:83] releasing machines lock for "ha-959539-m03", held for 29.571423777s
	I0924 00:02:00.621417   26218 main.go:141] libmachine: (ha-959539-m03) Calling .DriverName
	I0924 00:02:00.621673   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetIP
	I0924 00:02:00.624743   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:00.625239   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:b3:10", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:01:45 +0000 UTC Type:0 Mac:52:54:00:b3:b3:10 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-959539-m03 Clientid:01:52:54:00:b3:b3:10}
	I0924 00:02:00.625273   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:00.627860   26218 out.go:177] * Found network options:
	I0924 00:02:00.629759   26218 out.go:177]   - NO_PROXY=192.168.39.231,192.168.39.71
	W0924 00:02:00.631173   26218 proxy.go:119] fail to check proxy env: Error ip not in block
	W0924 00:02:00.631197   26218 proxy.go:119] fail to check proxy env: Error ip not in block
	I0924 00:02:00.631218   26218 main.go:141] libmachine: (ha-959539-m03) Calling .DriverName
	I0924 00:02:00.631908   26218 main.go:141] libmachine: (ha-959539-m03) Calling .DriverName
	I0924 00:02:00.632117   26218 main.go:141] libmachine: (ha-959539-m03) Calling .DriverName
	I0924 00:02:00.632197   26218 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 00:02:00.632234   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHHostname
	W0924 00:02:00.632352   26218 proxy.go:119] fail to check proxy env: Error ip not in block
	W0924 00:02:00.632378   26218 proxy.go:119] fail to check proxy env: Error ip not in block
	I0924 00:02:00.632447   26218 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 00:02:00.632470   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHHostname
	I0924 00:02:00.635213   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:00.635463   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:00.635655   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:b3:10", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:01:45 +0000 UTC Type:0 Mac:52:54:00:b3:b3:10 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-959539-m03 Clientid:01:52:54:00:b3:b3:10}
	I0924 00:02:00.635679   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:00.635817   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHPort
	I0924 00:02:00.635945   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:b3:10", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:01:45 +0000 UTC Type:0 Mac:52:54:00:b3:b3:10 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-959539-m03 Clientid:01:52:54:00:b3:b3:10}
	I0924 00:02:00.635972   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:00.635973   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHKeyPath
	I0924 00:02:00.636112   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHPort
	I0924 00:02:00.636177   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHUsername
	I0924 00:02:00.636243   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHKeyPath
	I0924 00:02:00.636375   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHUsername
	I0924 00:02:00.636384   26218 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m03/id_rsa Username:docker}
	I0924 00:02:00.636482   26218 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m03/id_rsa Username:docker}
	I0924 00:02:00.872674   26218 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 00:02:00.879244   26218 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 00:02:00.879303   26218 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 00:02:00.896008   26218 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 00:02:00.896041   26218 start.go:495] detecting cgroup driver to use...
	I0924 00:02:00.896119   26218 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 00:02:00.912126   26218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 00:02:00.928181   26218 docker.go:217] disabling cri-docker service (if available) ...
	I0924 00:02:00.928242   26218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 00:02:00.942640   26218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 00:02:00.957462   26218 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 00:02:01.095902   26218 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 00:02:01.244902   26218 docker.go:233] disabling docker service ...
	I0924 00:02:01.244972   26218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 00:02:01.260549   26218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 00:02:01.273803   26218 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 00:02:01.412634   26218 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 00:02:01.527287   26218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 00:02:01.541205   26218 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 00:02:01.559624   26218 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 00:02:01.559693   26218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:02:01.569832   26218 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 00:02:01.569892   26218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:02:01.580172   26218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:02:01.590239   26218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:02:01.600013   26218 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 00:02:01.610683   26218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:02:01.622051   26218 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:02:01.639348   26218 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
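The sed commands above edit the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup driver, conmon cgroup, removal of the packaged CNI config, and an unprivileged-port sysctl. A sketch of checking the keys they leave behind, with the expected values taken from the log lines rather than the full file:

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",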
	I0924 00:02:01.649043   26218 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 00:02:01.659584   26218 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 00:02:01.659633   26218 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 00:02:01.673533   26218 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
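The netfilter probe above fails harmlessly the first time because br_netfilter is not loaded yet; loading the module creates the /proc/sys/net/bridge entries, and IP forwarding is then switched on. The sequence boils down to:

    sudo modprobe br_netfilter
    sudo sysctl net.bridge.bridge-nf-call-iptables        # exists once the module is loaded
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'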
	I0924 00:02:01.683341   26218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 00:02:01.799476   26218 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 00:02:01.894369   26218 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 00:02:01.894448   26218 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 00:02:01.898980   26218 start.go:563] Will wait 60s for crictl version
	I0924 00:02:01.899028   26218 ssh_runner.go:195] Run: which crictl
	I0924 00:02:01.902610   26218 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 00:02:01.942080   26218 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 00:02:01.942167   26218 ssh_runner.go:195] Run: crio --version
	I0924 00:02:01.973094   26218 ssh_runner.go:195] Run: crio --version
	I0924 00:02:02.006636   26218 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 00:02:02.008088   26218 out.go:177]   - env NO_PROXY=192.168.39.231
	I0924 00:02:02.009670   26218 out.go:177]   - env NO_PROXY=192.168.39.231,192.168.39.71
	I0924 00:02:02.011150   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetIP
	I0924 00:02:02.014303   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:02.014787   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:b3:10", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:01:45 +0000 UTC Type:0 Mac:52:54:00:b3:b3:10 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-959539-m03 Clientid:01:52:54:00:b3:b3:10}
	I0924 00:02:02.014816   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:02.015031   26218 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0924 00:02:02.019245   26218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
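
The one-liner above keeps /etc/hosts idempotent: any existing host.minikube.internal line is filtered out with grep -v before the current mapping is appended and the file is copied back into place. A small sketch that builds the same shell command for an arbitrary IP/hostname pair (hostsUpdateCmd is an illustrative helper, not minikube's own API):

    package main

    import "fmt"

    // hostsUpdateCmd returns a shell command that drops any existing line ending
    // in "<tab><hostname>" from /etc/hosts and appends "<ip><tab><hostname>",
    // matching the grep -v / echo / cp pattern in the log line above.
    func hostsUpdateCmd(ip, hostname string) string {
        return fmt.Sprintf(
            "{ grep -v $'\\t%s$' /etc/hosts; echo \"%s\t%s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts",
            hostname, ip, hostname)
    }

    func main() {
        // Prints the command for the host.minikube.internal entry seen above;
        // it is meant to be run through `bash -c` on the node.
        fmt.Println(hostsUpdateCmd("192.168.39.1", "host.minikube.internal"))
    }
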
	I0924 00:02:02.031619   26218 mustload.go:65] Loading cluster: ha-959539
	I0924 00:02:02.031867   26218 config.go:182] Loaded profile config "ha-959539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 00:02:02.032216   26218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:02:02.032262   26218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:02:02.047774   26218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41359
	I0924 00:02:02.048245   26218 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:02:02.048817   26218 main.go:141] libmachine: Using API Version  1
	I0924 00:02:02.048840   26218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:02:02.049178   26218 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:02:02.049404   26218 main.go:141] libmachine: (ha-959539) Calling .GetState
	I0924 00:02:02.051028   26218 host.go:66] Checking if "ha-959539" exists ...
	I0924 00:02:02.051346   26218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:02:02.051384   26218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:02:02.067177   26218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43029
	I0924 00:02:02.067626   26218 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:02:02.068120   26218 main.go:141] libmachine: Using API Version  1
	I0924 00:02:02.068147   26218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:02:02.068561   26218 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:02:02.068767   26218 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0924 00:02:02.069023   26218 certs.go:68] Setting up /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539 for IP: 192.168.39.244
	I0924 00:02:02.069035   26218 certs.go:194] generating shared ca certs ...
	I0924 00:02:02.069051   26218 certs.go:226] acquiring lock for ca certs: {Name:mk91de42c259e96c17f08dd58c70c00821bd0735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:02:02.069225   26218 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key
	I0924 00:02:02.069324   26218 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key
	I0924 00:02:02.069337   26218 certs.go:256] generating profile certs ...
	I0924 00:02:02.069432   26218 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/client.key
	I0924 00:02:02.069461   26218 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key.bedc055e
	I0924 00:02:02.069482   26218 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt.bedc055e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.231 192.168.39.71 192.168.39.244 192.168.39.254]
	I0924 00:02:02.200792   26218 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt.bedc055e ...
	I0924 00:02:02.200824   26218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt.bedc055e: {Name:mk0815e5ce107bafe277776d87408434b1fc0844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:02:02.200990   26218 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key.bedc055e ...
	I0924 00:02:02.201002   26218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key.bedc055e: {Name:mk2b87933cd0413159c4371c2a1af112dc0ae1a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:02:02.201076   26218 certs.go:381] copying /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt.bedc055e -> /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt
	I0924 00:02:02.201200   26218 certs.go:385] copying /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key.bedc055e -> /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key
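
The profile cert assembled above is signed by the shared minikubeCA and lists every address the API server must answer on as IP SANs: 10.96.0.1, 127.0.0.1, 10.0.0.1, the three control-plane node IPs and the kube-vip VIP 192.168.39.254. A condensed standard-library sketch of issuing such a cert; minikube's own crypto helpers differ in detail, and the throwaway CA in main is only there to make the example self-contained:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    // signAPIServerCert issues a serving certificate for the given IP SANs,
    // signed by the supplied CA certificate and key.
    func signAPIServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, sans []string) ([]byte, []byte, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        for _, s := range sans {
            tmpl.IPAddresses = append(tmpl.IPAddresses, net.ParseIP(s))
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
        keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
        return certPEM, keyPEM, nil
    }

    func main() {
        // Throwaway CA purely so the example runs on its own (errors elided for brevity).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // The SAN list matches the addresses in the log line above.
        sans := []string{"10.96.0.1", "127.0.0.1", "10.0.0.1", "192.168.39.231", "192.168.39.71", "192.168.39.244", "192.168.39.254"}
        certPEM, _, err := signAPIServerCert(caCert, caKey, sans)
        if err != nil {
            panic(err)
        }
        os.Stdout.Write(certPEM)
    }
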
	I0924 00:02:02.201326   26218 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.key
	I0924 00:02:02.201341   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0924 00:02:02.201362   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0924 00:02:02.201373   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0924 00:02:02.201386   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0924 00:02:02.201398   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0924 00:02:02.201412   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0924 00:02:02.201424   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0924 00:02:02.216460   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0924 00:02:02.216561   26218 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem (1338 bytes)
	W0924 00:02:02.216595   26218 certs.go:480] ignoring /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793_empty.pem, impossibly tiny 0 bytes
	I0924 00:02:02.216607   26218 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem (1675 bytes)
	I0924 00:02:02.216644   26218 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem (1082 bytes)
	I0924 00:02:02.216668   26218 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem (1123 bytes)
	I0924 00:02:02.216690   26218 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem (1679 bytes)
	I0924 00:02:02.216728   26218 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem (1708 bytes)
	I0924 00:02:02.216755   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem -> /usr/share/ca-certificates/14793.pem
	I0924 00:02:02.216774   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> /usr/share/ca-certificates/147932.pem
	I0924 00:02:02.216787   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0924 00:02:02.216818   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0924 00:02:02.220023   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:02:02.220522   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0924 00:02:02.220546   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:02:02.220674   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0924 00:02:02.220912   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0924 00:02:02.221115   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0924 00:02:02.221280   26218 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/id_rsa Username:docker}
	I0924 00:02:02.300781   26218 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0924 00:02:02.306919   26218 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0924 00:02:02.318700   26218 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0924 00:02:02.322783   26218 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0924 00:02:02.333789   26218 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0924 00:02:02.337697   26218 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0924 00:02:02.347574   26218 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0924 00:02:02.351556   26218 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0924 00:02:02.362821   26218 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0924 00:02:02.367302   26218 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0924 00:02:02.379143   26218 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0924 00:02:02.383718   26218 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0924 00:02:02.395777   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 00:02:02.422519   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 00:02:02.448222   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 00:02:02.473922   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0924 00:02:02.496975   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0924 00:02:02.519778   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 00:02:02.544839   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 00:02:02.567771   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0924 00:02:02.594776   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem --> /usr/share/ca-certificates/14793.pem (1338 bytes)
	I0924 00:02:02.622998   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /usr/share/ca-certificates/147932.pem (1708 bytes)
	I0924 00:02:02.646945   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 00:02:02.670094   26218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0924 00:02:02.688636   26218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0924 00:02:02.706041   26218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0924 00:02:02.723591   26218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0924 00:02:02.740289   26218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0924 00:02:02.757088   26218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0924 00:02:02.774564   26218 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0924 00:02:02.791730   26218 ssh_runner.go:195] Run: openssl version
	I0924 00:02:02.797731   26218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147932.pem && ln -fs /usr/share/ca-certificates/147932.pem /etc/ssl/certs/147932.pem"
	I0924 00:02:02.810316   26218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147932.pem
	I0924 00:02:02.815033   26218 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 23:55 /usr/share/ca-certificates/147932.pem
	I0924 00:02:02.815102   26218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147932.pem
	I0924 00:02:02.820784   26218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147932.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 00:02:02.831910   26218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 00:02:02.842883   26218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 00:02:02.847291   26218 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0924 00:02:02.847354   26218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 00:02:02.852958   26218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 00:02:02.863626   26218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14793.pem && ln -fs /usr/share/ca-certificates/14793.pem /etc/ssl/certs/14793.pem"
	I0924 00:02:02.874113   26218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14793.pem
	I0924 00:02:02.878537   26218 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 23:55 /usr/share/ca-certificates/14793.pem
	I0924 00:02:02.878606   26218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14793.pem
	I0924 00:02:02.884346   26218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14793.pem /etc/ssl/certs/51391683.0"
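
Each certificate installed above gets a companion symlink in /etc/ssl/certs named after its OpenSSL subject hash (for example 51391683.0 for 14793.pem), which is how OpenSSL locates trust anchors in a hashed certificate directory. A short sketch that derives the link name with the same openssl invocation (assumes openssl is on PATH, as it is in the log):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // subjectHashLink returns the "<hash>.0" filename OpenSSL expects for certPath,
    // using the same `openssl x509 -hash -noout -in` invocation seen in the log.
    func subjectHashLink(certPath string) (string, error) {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)) + ".0", nil
    }

    func main() {
        link, err := subjectHashLink("/usr/share/ca-certificates/minikubeCA.pem")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // e.g. prints "b5213941.0"; the installer then runs: ln -fs <cert> /etc/ssl/certs/<link>
        fmt.Println(link)
    }
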
	I0924 00:02:02.896403   26218 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 00:02:02.900556   26218 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0924 00:02:02.900623   26218 kubeadm.go:934] updating node {m03 192.168.39.244 8443 v1.31.1 crio true true} ...
	I0924 00:02:02.900726   26218 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-959539-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.244
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-959539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 00:02:02.900760   26218 kube-vip.go:115] generating kube-vip config ...
	I0924 00:02:02.900809   26218 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0924 00:02:02.915515   26218 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0924 00:02:02.915610   26218 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
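
The manifest above is later written to /etc/kubernetes/manifests/kube-vip.yaml (the 1441-byte scp further down): kube-vip runs ARP-based leader election on the plndr-cp-lock lease (5s lease, 3s renew deadline, 1s retry) and, with lb_enable/lb_port set, also load-balances API traffic on 8443 behind the VIP 192.168.39.254. A much-reduced sketch of rendering such a manifest from a template, with only the VIP and port as variables (a simplification, not minikube's actual kube-vip.go template):

    package main

    import (
        "os"
        "text/template"
    )

    // vipParams holds the only per-cluster values substituted into the manifest;
    // everything else in the pod spec above stays fixed.
    type vipParams struct {
        VIP  string
        Port string
    }

    const manifestTmpl = `apiVersion: v1
    kind: Pod
    metadata:
      name: kube-vip
      namespace: kube-system
    spec:
      containers:
      - name: kube-vip
        image: ghcr.io/kube-vip/kube-vip:v0.8.0
        args: ["manager"]
        env:
        - {name: vip_arp, value: "true"}
        - {name: port, value: "{{.Port}}"}
        - {name: address, value: "{{.VIP}}"}
        - {name: cp_enable, value: "true"}
        - {name: lb_enable, value: "true"}
        - {name: lb_port, value: "{{.Port}}"}
      hostNetwork: true
    `

    func main() {
        t := template.Must(template.New("kube-vip").Parse(manifestTmpl))
        // Values taken from the generated config above.
        if err := t.Execute(os.Stdout, vipParams{VIP: "192.168.39.254", Port: "8443"}); err != nil {
            panic(err)
        }
    }
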
	I0924 00:02:02.915676   26218 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 00:02:02.926273   26218 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0924 00:02:02.926342   26218 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0924 00:02:02.935889   26218 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0924 00:02:02.935892   26218 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0924 00:02:02.935939   26218 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0924 00:02:02.935957   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0924 00:02:02.935965   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0924 00:02:02.935958   26218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 00:02:02.936030   26218 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0924 00:02:02.936043   26218 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0924 00:02:02.951235   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0924 00:02:02.951306   26218 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0924 00:02:02.951337   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0924 00:02:02.951357   26218 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0924 00:02:02.951363   26218 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0924 00:02:02.951385   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0924 00:02:02.982567   26218 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0924 00:02:02.982613   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
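
All three binaries follow the same pattern: the dl.k8s.io URL carries a ?checksum=file:...sha256 hint, the remote path is stat'ed first, and only missing binaries are copied (roughly 56 MB kubectl, 58 MB kubeadm and 77 MB kubelet here). A sketch of downloading one binary and verifying it against its published .sha256 file, standard library only and with the destination path chosen arbitrarily:

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
        "strings"
    )

    // fetchVerified downloads url to dest and checks the result against the
    // SHA-256 digest published at sumURL.
    func fetchVerified(url, sumURL, dest string) error {
        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()

        f, err := os.Create(dest)
        if err != nil {
            return err
        }
        defer f.Close()

        // Hash while writing so the payload is only read once.
        h := sha256.New()
        if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
            return err
        }

        sumResp, err := http.Get(sumURL)
        if err != nil {
            return err
        }
        defer sumResp.Body.Close()
        sum, err := io.ReadAll(sumResp.Body)
        if err != nil {
            return err
        }

        // The .sha256 file carries the hex digest, possibly followed by a filename.
        fields := strings.Fields(string(sum))
        got := hex.EncodeToString(h.Sum(nil))
        if len(fields) == 0 || fields[0] != got {
            return fmt.Errorf("checksum mismatch for %s: got %s", url, got)
        }
        return nil
    }

    func main() {
        base := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet"
        if err := fetchVerified(base, base+".sha256", "/tmp/kubelet"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
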
	I0924 00:02:03.832975   26218 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0924 00:02:03.844045   26218 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0924 00:02:03.862702   26218 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 00:02:03.880776   26218 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0924 00:02:03.898729   26218 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0924 00:02:03.902596   26218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 00:02:03.914924   26218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 00:02:04.053085   26218 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 00:02:04.070074   26218 host.go:66] Checking if "ha-959539" exists ...
	I0924 00:02:04.070579   26218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:02:04.070643   26218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:02:04.087474   26218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40559
	I0924 00:02:04.087999   26218 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:02:04.088599   26218 main.go:141] libmachine: Using API Version  1
	I0924 00:02:04.088620   26218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:02:04.089029   26218 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:02:04.089257   26218 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0924 00:02:04.089416   26218 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0924 00:02:04.089542   26218 start.go:317] joinCluster: &{Name:ha-959539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-959539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.71 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.244 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 00:02:04.089542   26218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0924 00:02:04.089559   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0924 00:02:04.092876   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:02:04.093495   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0924 00:02:04.093522   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:02:04.093697   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0924 00:02:04.093959   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0924 00:02:04.094120   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0924 00:02:04.094269   26218 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/id_rsa Username:docker}
	I0924 00:02:04.268135   26218 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.244 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 00:02:04.268198   26218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b4ctl0.w5qwixeo1tvb3095 --discovery-token-ca-cert-hash sha256:2d4dd9f37d73db6513005d0e16c5a35989d3b102eed8cac0bf67b360d3396aa8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-959539-m03 --control-plane --apiserver-advertise-address=192.168.39.244 --apiserver-bind-port=8443"
	I0924 00:02:27.863528   26218 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b4ctl0.w5qwixeo1tvb3095 --discovery-token-ca-cert-hash sha256:2d4dd9f37d73db6513005d0e16c5a35989d3b102eed8cac0bf67b360d3396aa8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-959539-m03 --control-plane --apiserver-advertise-address=192.168.39.244 --apiserver-bind-port=8443": (23.595296768s)
	I0924 00:02:27.863572   26218 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0924 00:02:28.487060   26218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-959539-m03 minikube.k8s.io/updated_at=2024_09_24T00_02_28_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c minikube.k8s.io/name=ha-959539 minikube.k8s.io/primary=false
	I0924 00:02:28.628940   26218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-959539-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0924 00:02:28.748648   26218 start.go:319] duration metric: took 24.659226615s to joinCluster
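
Almost all of the 24.7s join time is the kubeadm join invocation itself (23.6s); the remainder is labeling the new node and removing the control-plane NoSchedule taint. A sketch of how the flag set seen above fits together for a control-plane join over the CRI-O socket (the token and CA hash are placeholders, not reusable values):

    package main

    import (
        "fmt"
        "strings"
    )

    // joinFlags mirrors the arguments of the kubeadm join run above for a
    // control-plane node joining over the CRI-O socket.
    type joinFlags struct {
        Endpoint, Token, CAHash, NodeName, AdvertiseIP string
        Port                                           int
    }

    func (j joinFlags) command() string {
        parts := []string{
            "kubeadm join " + j.Endpoint,
            "--token " + j.Token,
            "--discovery-token-ca-cert-hash sha256:" + j.CAHash,
            "--ignore-preflight-errors=all",
            "--cri-socket unix:///var/run/crio/crio.sock",
            "--node-name=" + j.NodeName,
            "--control-plane",
            "--apiserver-advertise-address=" + j.AdvertiseIP,
            fmt.Sprintf("--apiserver-bind-port=%d", j.Port),
        }
        return strings.Join(parts, " ")
    }

    func main() {
        fmt.Println(joinFlags{
            Endpoint:    "control-plane.minikube.internal:8443",
            Token:       "<token>",   // issued by `kubeadm token create --print-join-command --ttl=0` on the first node
            CAHash:      "<ca-hash>", // the discovery-token-ca-cert-hash value from that command
            NodeName:    "ha-959539-m03",
            AdvertiseIP: "192.168.39.244",
            Port:        8443,
        }.command())
    }
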
	I0924 00:02:28.748728   26218 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.244 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 00:02:28.749108   26218 config.go:182] Loaded profile config "ha-959539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 00:02:28.750104   26218 out.go:177] * Verifying Kubernetes components...
	I0924 00:02:28.751646   26218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 00:02:29.019967   26218 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 00:02:29.061460   26218 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 00:02:29.061682   26218 kapi.go:59] client config for ha-959539: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/client.crt", KeyFile:"/home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/client.key", CAFile:"/home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0924 00:02:29.061736   26218 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.231:8443
	I0924 00:02:29.061979   26218 node_ready.go:35] waiting up to 6m0s for node "ha-959539-m03" to be "Ready" ...
	I0924 00:02:29.062051   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:29.062060   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:29.062068   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:29.062074   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:29.066072   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:29.562533   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:29.562554   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:29.562560   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:29.562570   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:29.567739   26218 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0924 00:02:30.062212   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:30.062237   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:30.062245   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:30.062250   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:30.065711   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:30.562367   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:30.562402   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:30.562414   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:30.562419   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:30.565510   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:31.062523   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:31.062552   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:31.062564   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:31.062571   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:31.066499   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:31.067388   26218 node_ready.go:53] node "ha-959539-m03" has status "Ready":"False"
	I0924 00:02:31.562731   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:31.562756   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:31.562771   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:31.562776   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:31.566512   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:32.062420   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:32.062441   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:32.062449   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:32.062454   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:32.065609   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:32.563014   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:32.563034   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:32.563042   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:32.563047   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:32.566443   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:33.062951   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:33.062980   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:33.062991   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:33.062996   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:33.067213   26218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 00:02:33.067831   26218 node_ready.go:53] node "ha-959539-m03" has status "Ready":"False"
	I0924 00:02:33.562180   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:33.562210   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:33.562222   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:33.562229   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:33.565119   26218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 00:02:34.062360   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:34.062379   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:34.062387   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:34.062394   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:34.065867   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:34.562470   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:34.562494   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:34.562503   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:34.562508   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:34.566075   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:35.063097   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:35.063122   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:35.063133   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:35.063139   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:35.067536   26218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 00:02:35.068167   26218 node_ready.go:53] node "ha-959539-m03" has status "Ready":"False"
	I0924 00:02:35.563171   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:35.563192   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:35.563200   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:35.563204   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:35.566347   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:36.062231   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:36.062252   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:36.062259   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:36.062263   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:36.068635   26218 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0924 00:02:36.562318   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:36.562352   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:36.562360   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:36.562366   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:36.565945   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:37.062441   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:37.062465   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:37.062473   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:37.062477   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:37.065788   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:37.562611   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:37.562633   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:37.562641   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:37.562646   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:37.565850   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:37.566272   26218 node_ready.go:53] node "ha-959539-m03" has status "Ready":"False"
	I0924 00:02:38.062661   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:38.062683   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:38.062691   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:38.062696   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:38.066483   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:38.562638   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:38.562660   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:38.562667   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:38.562671   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:38.566169   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:39.062729   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:39.062750   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:39.062759   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:39.062763   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:39.066557   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:39.562877   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:39.562899   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:39.562907   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:39.562912   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:39.566233   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:39.566763   26218 node_ready.go:53] node "ha-959539-m03" has status "Ready":"False"
	I0924 00:02:40.063206   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:40.063226   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:40.063234   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:40.063239   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:40.066817   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:40.562132   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:40.562155   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:40.562165   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:40.562173   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:40.565811   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:41.062663   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:41.062683   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:41.062692   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:41.062696   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:41.066042   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:41.563040   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:41.563066   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:41.563078   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:41.563084   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:41.566187   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:42.063050   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:42.063071   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:42.063079   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:42.063082   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:42.066449   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:42.067262   26218 node_ready.go:53] node "ha-959539-m03" has status "Ready":"False"
	I0924 00:02:42.563040   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:42.563066   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:42.563077   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:42.563082   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:42.566476   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:43.062431   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:43.062452   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:43.062458   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:43.062461   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:43.065607   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:43.563123   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:43.563144   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:43.563152   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:43.563155   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:43.566312   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:44.062448   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:44.062472   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:44.062480   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:44.062484   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:44.065777   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:44.562484   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:44.562506   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:44.562518   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:44.562527   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:44.565803   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:44.566407   26218 node_ready.go:53] node "ha-959539-m03" has status "Ready":"False"
	I0924 00:02:45.062747   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:45.062780   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:45.062787   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:45.062792   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:45.066101   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:45.562696   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:45.562717   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:45.562726   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:45.562732   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:45.566877   26218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 00:02:45.567306   26218 node_ready.go:49] node "ha-959539-m03" has status "Ready":"True"
	I0924 00:02:45.567324   26218 node_ready.go:38] duration metric: took 16.505330859s for node "ha-959539-m03" to be "Ready" ...
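
The wait above is a plain poll: the same GET /api/v1/nodes/ha-959539-m03 is repeated roughly every 500ms until the Ready condition reports True, which took about 16.5s after the join. A compact sketch of that loop against the API server's REST endpoint, standard library only; TLS and client-certificate auth, which the real run gets from the kubeconfig, are omitted:

    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
        "time"
    )

    // nodeStatus is the small slice of the Node object the readiness check needs.
    type nodeStatus struct {
        Status struct {
            Conditions []struct {
                Type   string `json:"type"`
                Status string `json:"status"`
            } `json:"conditions"`
        } `json:"status"`
    }

    // waitNodeReady polls GET <apiServer>/api/v1/nodes/<name> every 500ms until
    // the Ready condition is "True" or the timeout elapses.
    func waitNodeReady(client *http.Client, apiServer, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(apiServer + "/api/v1/nodes/" + name)
            if err == nil {
                var n nodeStatus
                if resp.StatusCode == http.StatusOK {
                    _ = json.NewDecoder(resp.Body).Decode(&n)
                }
                resp.Body.Close()
                for _, c := range n.Status.Conditions {
                    if c.Type == "Ready" && c.Status == "True" {
                        return nil
                    }
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("node %s not Ready within %s", name, timeout)
    }

    func main() {
        // Auth setup omitted; the run above uses the kubeconfig's client certs.
        err := waitNodeReady(http.DefaultClient, "https://192.168.39.231:8443", "ha-959539-m03", 6*time.Minute)
        fmt.Println(err)
    }
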
	I0924 00:02:45.567334   26218 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 00:02:45.567399   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods
	I0924 00:02:45.567411   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:45.567421   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:45.567435   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:45.576236   26218 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0924 00:02:45.582315   26218 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nkbzw" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:45.582415   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-nkbzw
	I0924 00:02:45.582426   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:45.582437   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:45.582444   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:45.586563   26218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 00:02:45.587529   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:02:45.587551   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:45.587561   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:45.587566   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:45.590549   26218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 00:02:45.591073   26218 pod_ready.go:93] pod "coredns-7c65d6cfc9-nkbzw" in "kube-system" namespace has status "Ready":"True"
	I0924 00:02:45.591094   26218 pod_ready.go:82] duration metric: took 8.751789ms for pod "coredns-7c65d6cfc9-nkbzw" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:45.591106   26218 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ss8lg" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:45.591177   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ss8lg
	I0924 00:02:45.591186   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:45.591196   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:45.591204   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:45.594507   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:45.595092   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:02:45.595107   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:45.595115   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:45.595119   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:45.597906   26218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 00:02:45.598405   26218 pod_ready.go:93] pod "coredns-7c65d6cfc9-ss8lg" in "kube-system" namespace has status "Ready":"True"
	I0924 00:02:45.598421   26218 pod_ready.go:82] duration metric: took 7.307084ms for pod "coredns-7c65d6cfc9-ss8lg" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:45.598432   26218 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-959539" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:45.598497   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/etcd-ha-959539
	I0924 00:02:45.598508   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:45.598517   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:45.598534   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:45.601102   26218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 00:02:45.601629   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:02:45.601643   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:45.601652   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:45.601657   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:45.604411   26218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 00:02:45.604921   26218 pod_ready.go:93] pod "etcd-ha-959539" in "kube-system" namespace has status "Ready":"True"
	I0924 00:02:45.604936   26218 pod_ready.go:82] duration metric: took 6.498124ms for pod "etcd-ha-959539" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:45.604943   26218 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-959539-m02" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:45.604986   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/etcd-ha-959539-m02
	I0924 00:02:45.604994   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:45.605000   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:45.605003   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:45.607711   26218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 00:02:45.608182   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:02:45.608195   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:45.608202   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:45.608205   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:45.611102   26218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 00:02:45.611468   26218 pod_ready.go:93] pod "etcd-ha-959539-m02" in "kube-system" namespace has status "Ready":"True"
	I0924 00:02:45.611482   26218 pod_ready.go:82] duration metric: took 6.534228ms for pod "etcd-ha-959539-m02" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:45.611489   26218 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-959539-m03" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:45.762986   26218 request.go:632] Waited for 151.426917ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/etcd-ha-959539-m03
	I0924 00:02:45.763060   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/etcd-ha-959539-m03
	I0924 00:02:45.763072   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:45.763082   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:45.763093   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:45.768790   26218 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0924 00:02:45.963102   26218 request.go:632] Waited for 193.344337ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:45.963164   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:45.963169   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:45.963175   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:45.963178   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:45.966765   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:45.967332   26218 pod_ready.go:93] pod "etcd-ha-959539-m03" in "kube-system" namespace has status "Ready":"True"
	I0924 00:02:45.967348   26218 pod_ready.go:82] duration metric: took 355.853201ms for pod "etcd-ha-959539-m03" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:45.967370   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-959539" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:46.162735   26218 request.go:632] Waited for 195.29099ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-959539
	I0924 00:02:46.162798   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-959539
	I0924 00:02:46.162806   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:46.162816   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:46.162825   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:46.166290   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:46.363412   26218 request.go:632] Waited for 196.338649ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:02:46.363479   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:02:46.363488   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:46.363500   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:46.363522   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:46.368828   26218 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0924 00:02:46.369452   26218 pod_ready.go:93] pod "kube-apiserver-ha-959539" in "kube-system" namespace has status "Ready":"True"
	I0924 00:02:46.369475   26218 pod_ready.go:82] duration metric: took 402.09395ms for pod "kube-apiserver-ha-959539" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:46.369488   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-959539-m02" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:46.563510   26218 request.go:632] Waited for 193.954572ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-959539-m02
	I0924 00:02:46.563593   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-959539-m02
	I0924 00:02:46.563601   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:46.563612   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:46.563620   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:46.567229   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:46.763581   26218 request.go:632] Waited for 195.391711ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:02:46.763651   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:02:46.763658   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:46.763669   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:46.763676   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:46.766915   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:46.767439   26218 pod_ready.go:93] pod "kube-apiserver-ha-959539-m02" in "kube-system" namespace has status "Ready":"True"
	I0924 00:02:46.767461   26218 pod_ready.go:82] duration metric: took 397.964383ms for pod "kube-apiserver-ha-959539-m02" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:46.767475   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-959539-m03" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:46.963610   26218 request.go:632] Waited for 196.063114ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-959539-m03
	I0924 00:02:46.963694   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-959539-m03
	I0924 00:02:46.963703   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:46.963712   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:46.963719   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:46.967275   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:47.162752   26218 request.go:632] Waited for 194.876064ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:47.162830   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:47.162838   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:47.162844   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:47.162847   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:47.166156   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:47.166699   26218 pod_ready.go:93] pod "kube-apiserver-ha-959539-m03" in "kube-system" namespace has status "Ready":"True"
	I0924 00:02:47.166716   26218 pod_ready.go:82] duration metric: took 399.234813ms for pod "kube-apiserver-ha-959539-m03" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:47.166725   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-959539" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:47.362729   26218 request.go:632] Waited for 195.941337ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-959539
	I0924 00:02:47.362789   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-959539
	I0924 00:02:47.362795   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:47.362802   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:47.362806   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:47.365942   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:47.562904   26218 request.go:632] Waited for 196.303098ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:02:47.562966   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:02:47.562973   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:47.562982   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:47.562987   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:47.566192   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:47.566827   26218 pod_ready.go:93] pod "kube-controller-manager-ha-959539" in "kube-system" namespace has status "Ready":"True"
	I0924 00:02:47.566845   26218 pod_ready.go:82] duration metric: took 400.114045ms for pod "kube-controller-manager-ha-959539" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:47.566855   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-959539-m02" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:47.762958   26218 request.go:632] Waited for 196.048732ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-959539-m02
	I0924 00:02:47.763034   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-959539-m02
	I0924 00:02:47.763042   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:47.763049   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:47.763058   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:47.766336   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:47.963363   26218 request.go:632] Waited for 196.287822ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:02:47.963455   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:02:47.963462   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:47.963470   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:47.963474   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:47.967146   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:47.967827   26218 pod_ready.go:93] pod "kube-controller-manager-ha-959539-m02" in "kube-system" namespace has status "Ready":"True"
	I0924 00:02:47.967850   26218 pod_ready.go:82] duration metric: took 400.989142ms for pod "kube-controller-manager-ha-959539-m02" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:47.967860   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-959539-m03" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:48.162800   26218 request.go:632] Waited for 194.858732ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-959539-m03
	I0924 00:02:48.162862   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-959539-m03
	I0924 00:02:48.162869   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:48.162880   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:48.162886   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:48.166955   26218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 00:02:48.362915   26218 request.go:632] Waited for 195.291486ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:48.363004   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:48.363015   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:48.363023   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:48.363027   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:48.366536   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:48.367263   26218 pod_ready.go:93] pod "kube-controller-manager-ha-959539-m03" in "kube-system" namespace has status "Ready":"True"
	I0924 00:02:48.367282   26218 pod_ready.go:82] duration metric: took 399.415546ms for pod "kube-controller-manager-ha-959539-m03" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:48.367292   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2hlqx" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:48.563765   26218 request.go:632] Waited for 196.416841ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2hlqx
	I0924 00:02:48.563839   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2hlqx
	I0924 00:02:48.563844   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:48.563852   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:48.563858   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:48.567525   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:48.763756   26218 request.go:632] Waited for 195.286657ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:02:48.763808   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:02:48.763813   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:48.763823   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:48.763827   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:48.768008   26218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 00:02:48.768461   26218 pod_ready.go:93] pod "kube-proxy-2hlqx" in "kube-system" namespace has status "Ready":"True"
	I0924 00:02:48.768523   26218 pod_ready.go:82] duration metric: took 401.181266ms for pod "kube-proxy-2hlqx" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:48.768542   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-b82ch" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:48.963586   26218 request.go:632] Waited for 194.968745ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b82ch
	I0924 00:02:48.963672   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b82ch
	I0924 00:02:48.963682   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:48.963698   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:48.963706   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:48.967156   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:49.163098   26218 request.go:632] Waited for 195.427645ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:49.163160   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:49.163165   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:49.163172   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:49.163175   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:49.168664   26218 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0924 00:02:49.169191   26218 pod_ready.go:93] pod "kube-proxy-b82ch" in "kube-system" namespace has status "Ready":"True"
	I0924 00:02:49.169212   26218 pod_ready.go:82] duration metric: took 400.661599ms for pod "kube-proxy-b82ch" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:49.169224   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qzklc" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:49.363274   26218 request.go:632] Waited for 193.975466ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qzklc
	I0924 00:02:49.363332   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qzklc
	I0924 00:02:49.363337   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:49.363345   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:49.363348   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:49.367061   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:49.563180   26218 request.go:632] Waited for 195.372048ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:02:49.563241   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:02:49.563246   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:49.563253   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:49.563260   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:49.566761   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:49.567465   26218 pod_ready.go:93] pod "kube-proxy-qzklc" in "kube-system" namespace has status "Ready":"True"
	I0924 00:02:49.567481   26218 pod_ready.go:82] duration metric: took 398.249897ms for pod "kube-proxy-qzklc" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:49.567490   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-959539" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:49.763615   26218 request.go:632] Waited for 196.0486ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-959539
	I0924 00:02:49.763668   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-959539
	I0924 00:02:49.763673   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:49.763681   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:49.763685   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:49.767108   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:49.963188   26218 request.go:632] Waited for 195.362713ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:02:49.963255   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:02:49.963261   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:49.963268   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:49.963273   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:49.966872   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:49.967707   26218 pod_ready.go:93] pod "kube-scheduler-ha-959539" in "kube-system" namespace has status "Ready":"True"
	I0924 00:02:49.967726   26218 pod_ready.go:82] duration metric: took 400.230299ms for pod "kube-scheduler-ha-959539" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:49.967774   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-959539-m02" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:50.163358   26218 request.go:632] Waited for 195.519311ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-959539-m02
	I0924 00:02:50.163411   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-959539-m02
	I0924 00:02:50.163416   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:50.163424   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:50.163428   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:50.167399   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:50.363362   26218 request.go:632] Waited for 195.429658ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:02:50.363431   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:02:50.363438   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:50.363448   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:50.363453   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:50.366812   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:50.367292   26218 pod_ready.go:93] pod "kube-scheduler-ha-959539-m02" in "kube-system" namespace has status "Ready":"True"
	I0924 00:02:50.367315   26218 pod_ready.go:82] duration metric: took 399.528577ms for pod "kube-scheduler-ha-959539-m02" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:50.367328   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-959539-m03" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:50.563431   26218 request.go:632] Waited for 196.035117ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-959539-m03
	I0924 00:02:50.563517   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-959539-m03
	I0924 00:02:50.563525   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:50.563533   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:50.563536   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:50.567039   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:50.763077   26218 request.go:632] Waited for 195.355137ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:50.763142   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:50.763148   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:50.763155   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:50.763160   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:50.766779   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:50.767385   26218 pod_ready.go:93] pod "kube-scheduler-ha-959539-m03" in "kube-system" namespace has status "Ready":"True"
	I0924 00:02:50.767402   26218 pod_ready.go:82] duration metric: took 400.066903ms for pod "kube-scheduler-ha-959539-m03" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:50.767413   26218 pod_ready.go:39] duration metric: took 5.200066315s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
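The pod_ready waits above all follow one pattern: GET the pod, scan status.conditions for the Ready condition, and repeat until it reports True. A minimal client-go sketch of that check under stated assumptions (the kubeconfig path is a placeholder; the namespace and pod name are taken from the log purely for illustration):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path; the test run uses its own profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	// Poll until Ready or timeout, mirroring the repeated GETs in the log above.
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "etcd-ha-959539", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod")
			return
		case <-time.After(200 * time.Millisecond):
		}
	}
}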
	I0924 00:02:50.767425   26218 api_server.go:52] waiting for apiserver process to appear ...
	I0924 00:02:50.767482   26218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 00:02:50.783606   26218 api_server.go:72] duration metric: took 22.034845457s to wait for apiserver process to appear ...
	I0924 00:02:50.783631   26218 api_server.go:88] waiting for apiserver healthz status ...
	I0924 00:02:50.783650   26218 api_server.go:253] Checking apiserver healthz at https://192.168.39.231:8443/healthz ...
	I0924 00:02:50.788103   26218 api_server.go:279] https://192.168.39.231:8443/healthz returned 200:
	ok
	I0924 00:02:50.788220   26218 round_trippers.go:463] GET https://192.168.39.231:8443/version
	I0924 00:02:50.788231   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:50.788241   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:50.788247   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:50.789134   26218 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0924 00:02:50.789199   26218 api_server.go:141] control plane version: v1.31.1
	I0924 00:02:50.789217   26218 api_server.go:131] duration metric: took 5.578933ms to wait for apiserver health ...
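The healthz step above is an HTTPS GET against the apiserver's /healthz endpoint, which answers 200 with the literal body "ok" when healthy. A self-contained sketch of that probe (the address is the one from this log; certificate verification is skipped here only because the cluster CA used by the real client is omitted from the sketch):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// The real flow trusts the cluster CA; InsecureSkipVerify only keeps this sketch standalone.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	resp, err := client.Get("https://192.168.39.231:8443/healthz")
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver returns 200 and the body "ok", matching the log above.
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body))
}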
	I0924 00:02:50.789227   26218 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 00:02:50.963536   26218 request.go:632] Waited for 174.232731ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods
	I0924 00:02:50.963617   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods
	I0924 00:02:50.963624   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:50.963635   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:50.963649   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:50.969906   26218 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0924 00:02:50.976880   26218 system_pods.go:59] 24 kube-system pods found
	I0924 00:02:50.976914   26218 system_pods.go:61] "coredns-7c65d6cfc9-nkbzw" [79bbcdf6-3ae9-4c2f-9d73-a990a069864f] Running
	I0924 00:02:50.976919   26218 system_pods.go:61] "coredns-7c65d6cfc9-ss8lg" [37bd392b-d364-4a64-8fa0-852bb245aedc] Running
	I0924 00:02:50.976923   26218 system_pods.go:61] "etcd-ha-959539" [ff55eab1-1a4f-4adf-85c4-1ed8fa3ad1ec] Running
	I0924 00:02:50.976928   26218 system_pods.go:61] "etcd-ha-959539-m02" [c2dcc425-5c60-4865-9b78-1f2352fd1729] Running
	I0924 00:02:50.976933   26218 system_pods.go:61] "etcd-ha-959539-m03" [a71adb46-5bbc-43ce-8ef0-2b03bf75da03] Running
	I0924 00:02:50.976938   26218 system_pods.go:61] "kindnet-cbrj7" [ad74ea31-a1ca-4632-b960-45e6de0fc117] Running
	I0924 00:02:50.976943   26218 system_pods.go:61] "kindnet-g4nkw" [32f2f545-b1a1-4f2b-8ee7-7fdb6409bc5f] Running
	I0924 00:02:50.976948   26218 system_pods.go:61] "kindnet-qlqss" [365f0414-b74d-42a8-be37-b0c8e03291ac] Running
	I0924 00:02:50.976953   26218 system_pods.go:61] "kube-apiserver-ha-959539" [2e15b758-6534-4b13-be16-42a2fd437b69] Running
	I0924 00:02:50.976958   26218 system_pods.go:61] "kube-apiserver-ha-959539-m02" [0ea9778e-f241-4c0d-9ea7-7e87bd667e10] Running
	I0924 00:02:50.976968   26218 system_pods.go:61] "kube-apiserver-ha-959539-m03" [7a54eb39-3ff9-4eb8-a5df-4333e1416899] Running
	I0924 00:02:50.976977   26218 system_pods.go:61] "kube-controller-manager-ha-959539" [b7da7091-f063-4f1a-bd0b-9f7136cd64a0] Running
	I0924 00:02:50.976985   26218 system_pods.go:61] "kube-controller-manager-ha-959539-m02" [29421b14-f01c-42dc-8c7d-b80cb32b9b7c] Running
	I0924 00:02:50.976991   26218 system_pods.go:61] "kube-controller-manager-ha-959539-m03" [bc95be18-c320-4981-8155-18432f08883e] Running
	I0924 00:02:50.976999   26218 system_pods.go:61] "kube-proxy-2hlqx" [c8e003fb-d3d0-425f-bc83-55122ed658ce] Running
	I0924 00:02:50.977007   26218 system_pods.go:61] "kube-proxy-b82ch" [5bf376fc-8dbe-4817-874c-506f5dc4d2e7] Running
	I0924 00:02:50.977015   26218 system_pods.go:61] "kube-proxy-qzklc" [19af917f-9661-4577-92ed-8fc44b573c64] Running
	I0924 00:02:50.977020   26218 system_pods.go:61] "kube-scheduler-ha-959539" [25a457b1-578e-4e53-8201-e99c001d80bd] Running
	I0924 00:02:50.977027   26218 system_pods.go:61] "kube-scheduler-ha-959539-m02" [716521cc-aa0c-4507-97e5-126dccc95359] Running
	I0924 00:02:50.977031   26218 system_pods.go:61] "kube-scheduler-ha-959539-m03" [e39eb1d7-90f3-4af9-9356-45ae9c23828d] Running
	I0924 00:02:50.977036   26218 system_pods.go:61] "kube-vip-ha-959539" [f80705df-80fe-48f0-a65c-b4e414523bdf] Running
	I0924 00:02:50.977044   26218 system_pods.go:61] "kube-vip-ha-959539-m02" [6d055131-a622-4398-8f2f-0146b867e8f8] Running
	I0924 00:02:50.977049   26218 system_pods.go:61] "kube-vip-ha-959539-m03" [3c5fd7f2-aec4-42d8-9331-ba59a4d76539] Running
	I0924 00:02:50.977058   26218 system_pods.go:61] "storage-provisioner" [3b7e0f07-8db9-4473-b3d2-c245c19d655b] Running
	I0924 00:02:50.977069   26218 system_pods.go:74] duration metric: took 187.832664ms to wait for pod list to return data ...
	I0924 00:02:50.977080   26218 default_sa.go:34] waiting for default service account to be created ...
	I0924 00:02:51.162900   26218 request.go:632] Waited for 185.733558ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/default/serviceaccounts
	I0924 00:02:51.162976   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/default/serviceaccounts
	I0924 00:02:51.162988   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:51.162995   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:51.163003   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:51.166765   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:51.166900   26218 default_sa.go:45] found service account: "default"
	I0924 00:02:51.166916   26218 default_sa.go:55] duration metric: took 189.8293ms for default service account to be created ...
	I0924 00:02:51.166927   26218 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 00:02:51.363374   26218 request.go:632] Waited for 196.378603ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods
	I0924 00:02:51.363436   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods
	I0924 00:02:51.363443   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:51.363453   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:51.363458   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:51.370348   26218 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0924 00:02:51.376926   26218 system_pods.go:86] 24 kube-system pods found
	I0924 00:02:51.376957   26218 system_pods.go:89] "coredns-7c65d6cfc9-nkbzw" [79bbcdf6-3ae9-4c2f-9d73-a990a069864f] Running
	I0924 00:02:51.376966   26218 system_pods.go:89] "coredns-7c65d6cfc9-ss8lg" [37bd392b-d364-4a64-8fa0-852bb245aedc] Running
	I0924 00:02:51.376972   26218 system_pods.go:89] "etcd-ha-959539" [ff55eab1-1a4f-4adf-85c4-1ed8fa3ad1ec] Running
	I0924 00:02:51.376977   26218 system_pods.go:89] "etcd-ha-959539-m02" [c2dcc425-5c60-4865-9b78-1f2352fd1729] Running
	I0924 00:02:51.376984   26218 system_pods.go:89] "etcd-ha-959539-m03" [a71adb46-5bbc-43ce-8ef0-2b03bf75da03] Running
	I0924 00:02:51.376989   26218 system_pods.go:89] "kindnet-cbrj7" [ad74ea31-a1ca-4632-b960-45e6de0fc117] Running
	I0924 00:02:51.376994   26218 system_pods.go:89] "kindnet-g4nkw" [32f2f545-b1a1-4f2b-8ee7-7fdb6409bc5f] Running
	I0924 00:02:51.377000   26218 system_pods.go:89] "kindnet-qlqss" [365f0414-b74d-42a8-be37-b0c8e03291ac] Running
	I0924 00:02:51.377006   26218 system_pods.go:89] "kube-apiserver-ha-959539" [2e15b758-6534-4b13-be16-42a2fd437b69] Running
	I0924 00:02:51.377012   26218 system_pods.go:89] "kube-apiserver-ha-959539-m02" [0ea9778e-f241-4c0d-9ea7-7e87bd667e10] Running
	I0924 00:02:51.377018   26218 system_pods.go:89] "kube-apiserver-ha-959539-m03" [7a54eb39-3ff9-4eb8-a5df-4333e1416899] Running
	I0924 00:02:51.377026   26218 system_pods.go:89] "kube-controller-manager-ha-959539" [b7da7091-f063-4f1a-bd0b-9f7136cd64a0] Running
	I0924 00:02:51.377036   26218 system_pods.go:89] "kube-controller-manager-ha-959539-m02" [29421b14-f01c-42dc-8c7d-b80cb32b9b7c] Running
	I0924 00:02:51.377042   26218 system_pods.go:89] "kube-controller-manager-ha-959539-m03" [bc95be18-c320-4981-8155-18432f08883e] Running
	I0924 00:02:51.377051   26218 system_pods.go:89] "kube-proxy-2hlqx" [c8e003fb-d3d0-425f-bc83-55122ed658ce] Running
	I0924 00:02:51.377057   26218 system_pods.go:89] "kube-proxy-b82ch" [5bf376fc-8dbe-4817-874c-506f5dc4d2e7] Running
	I0924 00:02:51.377066   26218 system_pods.go:89] "kube-proxy-qzklc" [19af917f-9661-4577-92ed-8fc44b573c64] Running
	I0924 00:02:51.377072   26218 system_pods.go:89] "kube-scheduler-ha-959539" [25a457b1-578e-4e53-8201-e99c001d80bd] Running
	I0924 00:02:51.377080   26218 system_pods.go:89] "kube-scheduler-ha-959539-m02" [716521cc-aa0c-4507-97e5-126dccc95359] Running
	I0924 00:02:51.377086   26218 system_pods.go:89] "kube-scheduler-ha-959539-m03" [e39eb1d7-90f3-4af9-9356-45ae9c23828d] Running
	I0924 00:02:51.377094   26218 system_pods.go:89] "kube-vip-ha-959539" [f80705df-80fe-48f0-a65c-b4e414523bdf] Running
	I0924 00:02:51.377100   26218 system_pods.go:89] "kube-vip-ha-959539-m02" [6d055131-a622-4398-8f2f-0146b867e8f8] Running
	I0924 00:02:51.377105   26218 system_pods.go:89] "kube-vip-ha-959539-m03" [3c5fd7f2-aec4-42d8-9331-ba59a4d76539] Running
	I0924 00:02:51.377111   26218 system_pods.go:89] "storage-provisioner" [3b7e0f07-8db9-4473-b3d2-c245c19d655b] Running
	I0924 00:02:51.377123   26218 system_pods.go:126] duration metric: took 210.186327ms to wait for k8s-apps to be running ...
	I0924 00:02:51.377134   26218 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 00:02:51.377189   26218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 00:02:51.392588   26218 system_svc.go:56] duration metric: took 15.444721ms WaitForService to wait for kubelet
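The kubelet check above shells out to systemctl and relies only on the exit status ("--quiet" suppresses output). A rough local sketch of the same idea with os/exec; the sudo and SSH-into-the-VM parts of the real command are deliberately left out:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet kubelet` exits 0 when the unit is active.
	cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}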
	I0924 00:02:51.392618   26218 kubeadm.go:582] duration metric: took 22.64385975s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 00:02:51.392638   26218 node_conditions.go:102] verifying NodePressure condition ...
	I0924 00:02:51.563072   26218 request.go:632] Waited for 170.361096ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes
	I0924 00:02:51.563121   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes
	I0924 00:02:51.563126   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:51.563134   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:51.563139   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:51.567517   26218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 00:02:51.569246   26218 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 00:02:51.569269   26218 node_conditions.go:123] node cpu capacity is 2
	I0924 00:02:51.569282   26218 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 00:02:51.569287   26218 node_conditions.go:123] node cpu capacity is 2
	I0924 00:02:51.569293   26218 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 00:02:51.569298   26218 node_conditions.go:123] node cpu capacity is 2
	I0924 00:02:51.569305   26218 node_conditions.go:105] duration metric: took 176.660035ms to run NodePressure ...
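The NodePressure step reads each node's capacity from the API; the three identical ephemeral-storage/cpu pairs above correspond to the three nodes of this HA cluster. A minimal client-go sketch that prints the same fields (kubeconfig path is a placeholder):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity is a map of resource name to quantity (cpu, memory, ephemeral-storage, ...).
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}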
	I0924 00:02:51.569328   26218 start.go:241] waiting for startup goroutines ...
	I0924 00:02:51.569355   26218 start.go:255] writing updated cluster config ...
	I0924 00:02:51.569656   26218 ssh_runner.go:195] Run: rm -f paused
	I0924 00:02:51.621645   26218 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0924 00:02:51.623613   26218 out.go:177] * Done! kubectl is now configured to use "ha-959539" cluster and "default" namespace by default
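The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines throughout this run come from client-go's built-in token-bucket rate limiter, not from the apiserver. A hedged sketch of how a client raises that limit; the QPS/Burst values and kubeconfig path below are illustrative assumptions, not what the tool under test uses:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}

	// With QPS/Burst left at zero, client-go falls back to a small default token
	// bucket, so bursts of GETs (as in the log above) queue behind it and emit
	// the "client-side throttling" messages. Raising the limits is a client-side
	// choice; these numbers are examples only.
	cfg.QPS = 50
	cfg.Burst = 100

	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("kube-system pods:", len(pods.Items))
}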
	
	
	==> CRI-O <==
	Sep 24 00:06:38 ha-959539 crio[665]: time="2024-09-24 00:06:38.757963977Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136398757926600,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=af1df088-cc57-4aea-beba-b6f7bb0f0542 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:06:38 ha-959539 crio[665]: time="2024-09-24 00:06:38.758512472Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a07b9468-dae4-4c92-830d-6eba23a970bf name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:06:38 ha-959539 crio[665]: time="2024-09-24 00:06:38.758565503Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a07b9468-dae4-4c92-830d-6eba23a970bf name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:06:38 ha-959539 crio[665]: time="2024-09-24 00:06:38.758785648Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ae8646f943f6d158d9cb6123ee395d7f02fe8f4194ea968bf904f9d60ac4c8d1,PodSandboxId:4b5dbf2a2189385e09c02ad65761e1007bbf4b930164894bc8f1b76217964067,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727136176666029632,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7q7xr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ee31bb5-0c3d-4d9e-9c9e-32d3411bc68a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05d43a4d133008f80d44c870878a45ceeb7e0e1adc3b08d1d47ce8e2edb36137,PodSandboxId:a91a16106518aeb7290ee145c6ebba24fbaf0ab1b928eb6005c2982202d15f69,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727136026589850568,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nkbzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79bbcdf6-3ae9-4c2f-9d73-a990a069864f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7a1a19a83d492cf0fda7b3bbf01fb7a12a73aac4c564ad4c2805be1c00817f0,PodSandboxId:1a4ee0160fc1d9dd6258f8fde766345d31e45e3e0d6790d4d9d5bd708cbcb206,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727136026542529982,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ss8lg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
37bd392b-d364-4a64-8fa0-852bb245aedc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eb114bb7775dcb227b0e90d5b566479bcd948dc40610c14af59f316412ffabf,PodSandboxId:2ffb51384d9a50b5162ea3a6190770d5887aab9dcc4b470a8939a98ed67ffa04,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1727136026450686637,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b7e0f07-8db9-4473-b3d2-c245c19d655b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1596300e66cf2ee0f15d5a362238ef4e99cec8e83ea81be4670347659bbd88e2,PodSandboxId:1a380d04710836380fbd07e38a88bd6c32797798fac60cedb945001fcef619bd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17271360
14417430026,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qlqss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 365f0414-b74d-42a8-be37-b0c8e03291ac,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdf912809c47a4ae8dddad14a4f537540e25c97121c16fa62418fc1290a8b94b,PodSandboxId:72ade1a0510455fbb68e236046efac5db7e130775d8731e968c6403583d8f266,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727136014134599532,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qzklc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19af917f-9661-4577-92ed-8fc44b573c64,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b61587cd3ccea52e3762f607ce17d21719c646d22ac10052629a209fe6ddbf3c,PodSandboxId:f6a8ccad216f1ff4f82acffd07977d426ef7ac36b9dad5f0989e477a11e66cf9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727136010027927828,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f69ffc952d0f295da88120340eae744e,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5459f3bc533d3928a2614c8aadaac043e36370d032cae8eb72d11664bbb74f2,PodSandboxId:40d143641822b8cfe35213ab0da141ef26cf5d327320371cdaf07dee367e1c67,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727136003255288728,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b2368dd5a295d09af7714df1de67bdb,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a42356ed739fd4c4bc65cb2d15edfb13fc395f88d73e9c25e9c7f9799ae6b974,PodSandboxId:c7d97a67f80f61d1406488dc953f78d225b73ace23d35142119dcf053114c4ee,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727136003229309223,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5ad980392063f2813e3d6dca84f9d9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af224d12661c48914733f3b63e29d17a8fceaf62af777bb609750fb7c8dbd9fd,PodSandboxId:7328f59cdb9935ae3cc6db004e93f8c91143470c0fbb7d2f75380c3331d66ec6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727136003245707453,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sche
duler-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31ee5d018ce106dd8052557f9d140def,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c911375acec93e238f1022936d6afb98f697168fca75291f15649e13def2288,PodSandboxId:7cdc58cf999c2a31d524cddeb690c57a3ba05b2201b109b586df23e0662a6c48,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727136003136808561,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-959539,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8862e93f261d2a6529b347e7a1404705,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a07b9468-dae4-4c92-830d-6eba23a970bf name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:06:38 ha-959539 crio[665]: time="2024-09-24 00:06:38.794299784Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5e00f6f5-3909-4270-a762-8c0d53136cb1 name=/runtime.v1.RuntimeService/Version
	Sep 24 00:06:38 ha-959539 crio[665]: time="2024-09-24 00:06:38.794454286Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5e00f6f5-3909-4270-a762-8c0d53136cb1 name=/runtime.v1.RuntimeService/Version
	Sep 24 00:06:38 ha-959539 crio[665]: time="2024-09-24 00:06:38.795577201Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b341581d-de0a-46bf-842e-c538f2569f49 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:06:38 ha-959539 crio[665]: time="2024-09-24 00:06:38.795973320Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136398795951115,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b341581d-de0a-46bf-842e-c538f2569f49 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:06:38 ha-959539 crio[665]: time="2024-09-24 00:06:38.796527980Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4d3e1af2-5064-40f0-8465-9bf739c7394b name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:06:38 ha-959539 crio[665]: time="2024-09-24 00:06:38.796577426Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4d3e1af2-5064-40f0-8465-9bf739c7394b name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:06:38 ha-959539 crio[665]: time="2024-09-24 00:06:38.797256666Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ae8646f943f6d158d9cb6123ee395d7f02fe8f4194ea968bf904f9d60ac4c8d1,PodSandboxId:4b5dbf2a2189385e09c02ad65761e1007bbf4b930164894bc8f1b76217964067,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727136176666029632,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7q7xr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ee31bb5-0c3d-4d9e-9c9e-32d3411bc68a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05d43a4d133008f80d44c870878a45ceeb7e0e1adc3b08d1d47ce8e2edb36137,PodSandboxId:a91a16106518aeb7290ee145c6ebba24fbaf0ab1b928eb6005c2982202d15f69,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727136026589850568,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nkbzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79bbcdf6-3ae9-4c2f-9d73-a990a069864f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7a1a19a83d492cf0fda7b3bbf01fb7a12a73aac4c564ad4c2805be1c00817f0,PodSandboxId:1a4ee0160fc1d9dd6258f8fde766345d31e45e3e0d6790d4d9d5bd708cbcb206,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727136026542529982,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ss8lg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
37bd392b-d364-4a64-8fa0-852bb245aedc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eb114bb7775dcb227b0e90d5b566479bcd948dc40610c14af59f316412ffabf,PodSandboxId:2ffb51384d9a50b5162ea3a6190770d5887aab9dcc4b470a8939a98ed67ffa04,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1727136026450686637,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b7e0f07-8db9-4473-b3d2-c245c19d655b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1596300e66cf2ee0f15d5a362238ef4e99cec8e83ea81be4670347659bbd88e2,PodSandboxId:1a380d04710836380fbd07e38a88bd6c32797798fac60cedb945001fcef619bd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17271360
14417430026,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qlqss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 365f0414-b74d-42a8-be37-b0c8e03291ac,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdf912809c47a4ae8dddad14a4f537540e25c97121c16fa62418fc1290a8b94b,PodSandboxId:72ade1a0510455fbb68e236046efac5db7e130775d8731e968c6403583d8f266,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727136014134599532,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qzklc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19af917f-9661-4577-92ed-8fc44b573c64,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b61587cd3ccea52e3762f607ce17d21719c646d22ac10052629a209fe6ddbf3c,PodSandboxId:f6a8ccad216f1ff4f82acffd07977d426ef7ac36b9dad5f0989e477a11e66cf9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727136010027927828,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f69ffc952d0f295da88120340eae744e,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5459f3bc533d3928a2614c8aadaac043e36370d032cae8eb72d11664bbb74f2,PodSandboxId:40d143641822b8cfe35213ab0da141ef26cf5d327320371cdaf07dee367e1c67,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727136003255288728,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b2368dd5a295d09af7714df1de67bdb,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a42356ed739fd4c4bc65cb2d15edfb13fc395f88d73e9c25e9c7f9799ae6b974,PodSandboxId:c7d97a67f80f61d1406488dc953f78d225b73ace23d35142119dcf053114c4ee,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727136003229309223,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5ad980392063f2813e3d6dca84f9d9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af224d12661c48914733f3b63e29d17a8fceaf62af777bb609750fb7c8dbd9fd,PodSandboxId:7328f59cdb9935ae3cc6db004e93f8c91143470c0fbb7d2f75380c3331d66ec6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727136003245707453,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sche
duler-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31ee5d018ce106dd8052557f9d140def,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c911375acec93e238f1022936d6afb98f697168fca75291f15649e13def2288,PodSandboxId:7cdc58cf999c2a31d524cddeb690c57a3ba05b2201b109b586df23e0662a6c48,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727136003136808561,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-959539,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8862e93f261d2a6529b347e7a1404705,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4d3e1af2-5064-40f0-8465-9bf739c7394b name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:06:38 ha-959539 crio[665]: time="2024-09-24 00:06:38.836432848Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b1c9683d-8836-492b-97ba-e8fc503cf5fc name=/runtime.v1.RuntimeService/Version
	Sep 24 00:06:38 ha-959539 crio[665]: time="2024-09-24 00:06:38.836507096Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b1c9683d-8836-492b-97ba-e8fc503cf5fc name=/runtime.v1.RuntimeService/Version
	Sep 24 00:06:38 ha-959539 crio[665]: time="2024-09-24 00:06:38.837673519Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9141d81e-ff45-4ecd-b139-27afc862a5f9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:06:38 ha-959539 crio[665]: time="2024-09-24 00:06:38.838453459Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136398838426872,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9141d81e-ff45-4ecd-b139-27afc862a5f9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:06:38 ha-959539 crio[665]: time="2024-09-24 00:06:38.838893142Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=47315ace-85cf-4edc-a723-b2b9dc1c671d name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:06:38 ha-959539 crio[665]: time="2024-09-24 00:06:38.838944950Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=47315ace-85cf-4edc-a723-b2b9dc1c671d name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:06:38 ha-959539 crio[665]: time="2024-09-24 00:06:38.839172789Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ae8646f943f6d158d9cb6123ee395d7f02fe8f4194ea968bf904f9d60ac4c8d1,PodSandboxId:4b5dbf2a2189385e09c02ad65761e1007bbf4b930164894bc8f1b76217964067,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727136176666029632,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7q7xr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ee31bb5-0c3d-4d9e-9c9e-32d3411bc68a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05d43a4d133008f80d44c870878a45ceeb7e0e1adc3b08d1d47ce8e2edb36137,PodSandboxId:a91a16106518aeb7290ee145c6ebba24fbaf0ab1b928eb6005c2982202d15f69,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727136026589850568,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nkbzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79bbcdf6-3ae9-4c2f-9d73-a990a069864f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7a1a19a83d492cf0fda7b3bbf01fb7a12a73aac4c564ad4c2805be1c00817f0,PodSandboxId:1a4ee0160fc1d9dd6258f8fde766345d31e45e3e0d6790d4d9d5bd708cbcb206,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727136026542529982,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ss8lg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
37bd392b-d364-4a64-8fa0-852bb245aedc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eb114bb7775dcb227b0e90d5b566479bcd948dc40610c14af59f316412ffabf,PodSandboxId:2ffb51384d9a50b5162ea3a6190770d5887aab9dcc4b470a8939a98ed67ffa04,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1727136026450686637,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b7e0f07-8db9-4473-b3d2-c245c19d655b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1596300e66cf2ee0f15d5a362238ef4e99cec8e83ea81be4670347659bbd88e2,PodSandboxId:1a380d04710836380fbd07e38a88bd6c32797798fac60cedb945001fcef619bd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17271360
14417430026,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qlqss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 365f0414-b74d-42a8-be37-b0c8e03291ac,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdf912809c47a4ae8dddad14a4f537540e25c97121c16fa62418fc1290a8b94b,PodSandboxId:72ade1a0510455fbb68e236046efac5db7e130775d8731e968c6403583d8f266,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727136014134599532,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qzklc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19af917f-9661-4577-92ed-8fc44b573c64,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b61587cd3ccea52e3762f607ce17d21719c646d22ac10052629a209fe6ddbf3c,PodSandboxId:f6a8ccad216f1ff4f82acffd07977d426ef7ac36b9dad5f0989e477a11e66cf9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727136010027927828,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f69ffc952d0f295da88120340eae744e,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5459f3bc533d3928a2614c8aadaac043e36370d032cae8eb72d11664bbb74f2,PodSandboxId:40d143641822b8cfe35213ab0da141ef26cf5d327320371cdaf07dee367e1c67,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727136003255288728,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b2368dd5a295d09af7714df1de67bdb,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a42356ed739fd4c4bc65cb2d15edfb13fc395f88d73e9c25e9c7f9799ae6b974,PodSandboxId:c7d97a67f80f61d1406488dc953f78d225b73ace23d35142119dcf053114c4ee,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727136003229309223,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5ad980392063f2813e3d6dca84f9d9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af224d12661c48914733f3b63e29d17a8fceaf62af777bb609750fb7c8dbd9fd,PodSandboxId:7328f59cdb9935ae3cc6db004e93f8c91143470c0fbb7d2f75380c3331d66ec6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727136003245707453,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sche
duler-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31ee5d018ce106dd8052557f9d140def,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c911375acec93e238f1022936d6afb98f697168fca75291f15649e13def2288,PodSandboxId:7cdc58cf999c2a31d524cddeb690c57a3ba05b2201b109b586df23e0662a6c48,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727136003136808561,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-959539,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8862e93f261d2a6529b347e7a1404705,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=47315ace-85cf-4edc-a723-b2b9dc1c671d name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:06:38 ha-959539 crio[665]: time="2024-09-24 00:06:38.874841228Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e41bdece-1ebb-43f6-8dca-e5c5443fd065 name=/runtime.v1.RuntimeService/Version
	Sep 24 00:06:38 ha-959539 crio[665]: time="2024-09-24 00:06:38.874928016Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e41bdece-1ebb-43f6-8dca-e5c5443fd065 name=/runtime.v1.RuntimeService/Version
	Sep 24 00:06:38 ha-959539 crio[665]: time="2024-09-24 00:06:38.875953247Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4f8e25bd-b435-4203-be11-683dad4394a2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:06:38 ha-959539 crio[665]: time="2024-09-24 00:06:38.876501220Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136398876476744,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4f8e25bd-b435-4203-be11-683dad4394a2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:06:38 ha-959539 crio[665]: time="2024-09-24 00:06:38.877074401Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=015f08f3-5461-4557-a681-0868b63cc0df name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:06:38 ha-959539 crio[665]: time="2024-09-24 00:06:38.877126390Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=015f08f3-5461-4557-a681-0868b63cc0df name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:06:38 ha-959539 crio[665]: time="2024-09-24 00:06:38.877461851Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ae8646f943f6d158d9cb6123ee395d7f02fe8f4194ea968bf904f9d60ac4c8d1,PodSandboxId:4b5dbf2a2189385e09c02ad65761e1007bbf4b930164894bc8f1b76217964067,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727136176666029632,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7q7xr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ee31bb5-0c3d-4d9e-9c9e-32d3411bc68a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05d43a4d133008f80d44c870878a45ceeb7e0e1adc3b08d1d47ce8e2edb36137,PodSandboxId:a91a16106518aeb7290ee145c6ebba24fbaf0ab1b928eb6005c2982202d15f69,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727136026589850568,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nkbzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79bbcdf6-3ae9-4c2f-9d73-a990a069864f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7a1a19a83d492cf0fda7b3bbf01fb7a12a73aac4c564ad4c2805be1c00817f0,PodSandboxId:1a4ee0160fc1d9dd6258f8fde766345d31e45e3e0d6790d4d9d5bd708cbcb206,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727136026542529982,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ss8lg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
37bd392b-d364-4a64-8fa0-852bb245aedc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eb114bb7775dcb227b0e90d5b566479bcd948dc40610c14af59f316412ffabf,PodSandboxId:2ffb51384d9a50b5162ea3a6190770d5887aab9dcc4b470a8939a98ed67ffa04,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1727136026450686637,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b7e0f07-8db9-4473-b3d2-c245c19d655b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1596300e66cf2ee0f15d5a362238ef4e99cec8e83ea81be4670347659bbd88e2,PodSandboxId:1a380d04710836380fbd07e38a88bd6c32797798fac60cedb945001fcef619bd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17271360
14417430026,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qlqss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 365f0414-b74d-42a8-be37-b0c8e03291ac,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdf912809c47a4ae8dddad14a4f537540e25c97121c16fa62418fc1290a8b94b,PodSandboxId:72ade1a0510455fbb68e236046efac5db7e130775d8731e968c6403583d8f266,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727136014134599532,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qzklc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19af917f-9661-4577-92ed-8fc44b573c64,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b61587cd3ccea52e3762f607ce17d21719c646d22ac10052629a209fe6ddbf3c,PodSandboxId:f6a8ccad216f1ff4f82acffd07977d426ef7ac36b9dad5f0989e477a11e66cf9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727136010027927828,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f69ffc952d0f295da88120340eae744e,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5459f3bc533d3928a2614c8aadaac043e36370d032cae8eb72d11664bbb74f2,PodSandboxId:40d143641822b8cfe35213ab0da141ef26cf5d327320371cdaf07dee367e1c67,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727136003255288728,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b2368dd5a295d09af7714df1de67bdb,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a42356ed739fd4c4bc65cb2d15edfb13fc395f88d73e9c25e9c7f9799ae6b974,PodSandboxId:c7d97a67f80f61d1406488dc953f78d225b73ace23d35142119dcf053114c4ee,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727136003229309223,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5ad980392063f2813e3d6dca84f9d9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af224d12661c48914733f3b63e29d17a8fceaf62af777bb609750fb7c8dbd9fd,PodSandboxId:7328f59cdb9935ae3cc6db004e93f8c91143470c0fbb7d2f75380c3331d66ec6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727136003245707453,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sche
duler-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31ee5d018ce106dd8052557f9d140def,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c911375acec93e238f1022936d6afb98f697168fca75291f15649e13def2288,PodSandboxId:7cdc58cf999c2a31d524cddeb690c57a3ba05b2201b109b586df23e0662a6c48,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727136003136808561,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-959539,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8862e93f261d2a6529b347e7a1404705,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=015f08f3-5461-4557-a681-0868b63cc0df name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ae8646f943f6d       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   4b5dbf2a21893       busybox-7dff88458-7q7xr
	05d43a4d13300       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   a91a16106518a       coredns-7c65d6cfc9-nkbzw
	e7a1a19a83d49       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   1a4ee0160fc1d       coredns-7c65d6cfc9-ss8lg
	2eb114bb7775d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   2ffb51384d9a5       storage-provisioner
	1596300e66cf2       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   1a380d0471083       kindnet-qlqss
	cdf912809c47a       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   72ade1a051045       kube-proxy-qzklc
	b61587cd3ccea       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   f6a8ccad216f1       kube-vip-ha-959539
	d5459f3bc533d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   40d143641822b       etcd-ha-959539
	af224d12661c4       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   7328f59cdb993       kube-scheduler-ha-959539
	a42356ed739fd       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   c7d97a67f80f6       kube-controller-manager-ha-959539
	8c911375acec9       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   7cdc58cf999c2       kube-apiserver-ha-959539
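
	The table above is the crictl-style view of the same data returned by the ListContainers RPCs traced in the crio debug log: every container on the primary control plane (ha-959539) is Running on attempt 0 with no restarts. A hypothetical way to reproduce this table by hand, assuming the ha-959539 profile from this run is still up and crictl is present on the guest, is:
	
	    minikube -p ha-959539 ssh "sudo crictl ps -a"
	    minikube -p ha-959539 ssh "sudo crictl pods"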
	
	
	==> coredns [05d43a4d133008f80d44c870878a45ceeb7e0e1adc3b08d1d47ce8e2edb36137] <==
	[INFO] 10.244.0.4:50134 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.005141674s
	[INFO] 10.244.1.2:43867 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000223991s
	[INFO] 10.244.1.2:35996 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000101615s
	[INFO] 10.244.2.2:54425 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000224645s
	[INFO] 10.244.2.2:58169 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.00170508s
	[INFO] 10.244.0.4:55776 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107033s
	[INFO] 10.244.0.4:58501 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.017716872s
	[INFO] 10.244.0.4:37973 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0002021s
	[INFO] 10.244.0.4:43904 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000156858s
	[INFO] 10.244.0.4:48352 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000163626s
	[INFO] 10.244.1.2:52896 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132298s
	[INFO] 10.244.1.2:45449 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000227639s
	[INFO] 10.244.1.2:47616 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00017286s
	[INFO] 10.244.1.2:33521 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000108761s
	[INFO] 10.244.1.2:43587 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012987s
	[INFO] 10.244.2.2:52394 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001362s
	[INFO] 10.244.2.2:43819 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000119859s
	[INFO] 10.244.2.2:35291 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000097457s
	[INFO] 10.244.2.2:56966 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000168721s
	[INFO] 10.244.0.4:52779 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102739s
	[INFO] 10.244.2.2:59382 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000262295s
	[INFO] 10.244.2.2:44447 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000133384s
	[INFO] 10.244.2.2:52951 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000170462s
	[INFO] 10.244.2.2:46956 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000215226s
	[INFO] 10.244.2.2:53703 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000108727s
	
	
	==> coredns [e7a1a19a83d492cf0fda7b3bbf01fb7a12a73aac4c564ad4c2805be1c00817f0] <==
	[INFO] 10.244.1.2:36104 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002245521s
	[INFO] 10.244.1.2:41962 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001624615s
	[INFO] 10.244.1.2:36352 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000142132s
	[INFO] 10.244.2.2:54238 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001909893s
	[INFO] 10.244.2.2:38238 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000165226s
	[INFO] 10.244.2.2:40250 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00173003s
	[INFO] 10.244.2.2:53405 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000126728s
	[INFO] 10.244.0.4:46344 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000157852s
	[INFO] 10.244.0.4:57359 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000065958s
	[INFO] 10.244.0.4:43743 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000119977s
	[INFO] 10.244.1.2:32867 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000192169s
	[INFO] 10.244.1.2:43403 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000167697s
	[INFO] 10.244.1.2:57243 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000095722s
	[INFO] 10.244.1.2:48326 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000119715s
	[INFO] 10.244.2.2:49664 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000122596s
	[INFO] 10.244.2.2:40943 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000106169s
	[INFO] 10.244.0.4:36066 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121758s
	[INFO] 10.244.0.4:51023 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000156225s
	[INFO] 10.244.0.4:56715 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000125631s
	[INFO] 10.244.0.4:47944 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000103261s
	[INFO] 10.244.1.2:49407 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000148466s
	[INFO] 10.244.1.2:54979 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000116145s
	[INFO] 10.244.1.2:47442 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000097064s
	[INFO] 10.244.1.2:38143 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000188037s
	[INFO] 10.244.2.2:40107 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000086602s
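
	Both coredns excerpts above show only routine lookups (kubernetes.default, host.minikube.internal, reverse PTR queries) from pods across the 10.244.0.0/24 through 10.244.2.0/24 pod CIDRs, with no errors beyond the NXDOMAIN responses expected for search-path expansions. A hedged way to pull the same logs straight from the cluster, assuming the ha-959539 kubectl context from this run and the stock k8s-app=kube-dns label on the CoreDNS pods, would be:
	
	    kubectl --context ha-959539 -n kube-system logs -l k8s-app=kube-dns --tail=25 --prefix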
	
	
	==> describe nodes <==
	Name:               ha-959539
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-959539
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c
	                    minikube.k8s.io/name=ha-959539
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_24T00_00_13_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 00:00:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-959539
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 00:06:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 00:03:16 +0000   Tue, 24 Sep 2024 00:00:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 00:03:16 +0000   Tue, 24 Sep 2024 00:00:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 00:03:16 +0000   Tue, 24 Sep 2024 00:00:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 00:03:16 +0000   Tue, 24 Sep 2024 00:00:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.231
	  Hostname:    ha-959539
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0a4b9ce5eed94a13bdbc682549e1dd1e
	  System UUID:                0a4b9ce5-eed9-4a13-bdbc-682549e1dd1e
	  Boot ID:                    679e0a2b-8772-4f6d-9e47-ba8190727387
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7q7xr              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 coredns-7c65d6cfc9-nkbzw             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m26s
	  kube-system                 coredns-7c65d6cfc9-ss8lg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m26s
	  kube-system                 etcd-ha-959539                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m27s
	  kube-system                 kindnet-qlqss                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m26s
	  kube-system                 kube-apiserver-ha-959539             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 kube-controller-manager-ha-959539    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 kube-proxy-qzklc                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 kube-scheduler-ha-959539             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 kube-vip-ha-959539                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m24s  kube-proxy       
	  Normal  RegisteredNode           6m27s  node-controller  Node ha-959539 event: Registered Node ha-959539 in Controller
	  Normal  Starting                 6m27s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m27s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m27s  kubelet          Node ha-959539 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m27s  kubelet          Node ha-959539 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m27s  kubelet          Node ha-959539 status is now: NodeHasSufficientPID
	  Normal  NodeReady                6m14s  kubelet          Node ha-959539 status is now: NodeReady
	  Normal  RegisteredNode           5m26s  node-controller  Node ha-959539 event: Registered Node ha-959539 in Controller
	  Normal  RegisteredNode           4m6s   node-controller  Node ha-959539 event: Registered Node ha-959539 in Controller
	
	
	Name:               ha-959539-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-959539-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c
	                    minikube.k8s.io/name=ha-959539
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_24T00_01_07_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 00:01:05 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-959539-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 00:04:07 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 24 Sep 2024 00:03:07 +0000   Tue, 24 Sep 2024 00:04:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 24 Sep 2024 00:03:07 +0000   Tue, 24 Sep 2024 00:04:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 24 Sep 2024 00:03:07 +0000   Tue, 24 Sep 2024 00:04:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 24 Sep 2024 00:03:07 +0000   Tue, 24 Sep 2024 00:04:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.71
	  Hostname:    ha-959539-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0f78cfc70aad42d195f1884fe3a82e21
	  System UUID:                0f78cfc7-0aad-42d1-95f1-884fe3a82e21
	  Boot ID:                    247da00b-9587-4de7-aa45-9671f65dd14e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-m5qhr                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 etcd-ha-959539-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m34s
	  kube-system                 kindnet-cbrj7                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m34s
	  kube-system                 kube-apiserver-ha-959539-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m34s
	  kube-system                 kube-controller-manager-ha-959539-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m34s
	  kube-system                 kube-proxy-2hlqx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m34s
	  kube-system                 kube-scheduler-ha-959539-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m34s
	  kube-system                 kube-vip-ha-959539-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m30s                  kube-proxy       
	  Normal  Starting                 5m35s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m34s (x8 over 5m35s)  kubelet          Node ha-959539-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m34s (x8 over 5m35s)  kubelet          Node ha-959539-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m34s (x7 over 5m35s)  kubelet          Node ha-959539-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m32s                  node-controller  Node ha-959539-m02 event: Registered Node ha-959539-m02 in Controller
	  Normal  RegisteredNode           5m26s                  node-controller  Node ha-959539-m02 event: Registered Node ha-959539-m02 in Controller
	  Normal  RegisteredNode           4m6s                   node-controller  Node ha-959539-m02 event: Registered Node ha-959539-m02 in Controller
	  Normal  NodeNotReady             111s                   node-controller  Node ha-959539-m02 status is now: NodeNotReady
	
	
	Name:               ha-959539-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-959539-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c
	                    minikube.k8s.io/name=ha-959539
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_24T00_02_28_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 00:02:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-959539-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 00:06:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 00:03:26 +0000   Tue, 24 Sep 2024 00:02:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 00:03:26 +0000   Tue, 24 Sep 2024 00:02:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 00:03:26 +0000   Tue, 24 Sep 2024 00:02:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 00:03:26 +0000   Tue, 24 Sep 2024 00:02:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.244
	  Hostname:    ha-959539-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7e393f2c1cce4055aaf3b67371deff0b
	  System UUID:                7e393f2c-1cce-4055-aaf3-b67371deff0b
	  Boot ID:                    d3fa2681-c8c7-4049-92ed-f71eeaa56616
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-w9v6l                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 etcd-ha-959539-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m12s
	  kube-system                 kindnet-g4nkw                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m14s
	  kube-system                 kube-apiserver-ha-959539-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 kube-controller-manager-ha-959539-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-proxy-b82ch                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-scheduler-ha-959539-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-vip-ha-959539-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m10s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m14s (x8 over 4m14s)  kubelet          Node ha-959539-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m14s (x8 over 4m14s)  kubelet          Node ha-959539-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m14s (x7 over 4m14s)  kubelet          Node ha-959539-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m12s                  node-controller  Node ha-959539-m03 event: Registered Node ha-959539-m03 in Controller
	  Normal  RegisteredNode           4m11s                  node-controller  Node ha-959539-m03 event: Registered Node ha-959539-m03 in Controller
	  Normal  RegisteredNode           4m6s                   node-controller  Node ha-959539-m03 event: Registered Node ha-959539-m03 in Controller
	
	
	Name:               ha-959539-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-959539-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c
	                    minikube.k8s.io/name=ha-959539
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_24T00_03_32_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 00:03:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-959539-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 00:06:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 00:04:02 +0000   Tue, 24 Sep 2024 00:03:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 00:04:02 +0000   Tue, 24 Sep 2024 00:03:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 00:04:02 +0000   Tue, 24 Sep 2024 00:03:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 00:04:02 +0000   Tue, 24 Sep 2024 00:03:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.183
	  Hostname:    ha-959539-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 55d6e549bf6d4455bd4db681e2cc17b8
	  System UUID:                55d6e549-bf6d-4455-bd4d-b681e2cc17b8
	  Boot ID:                    0f7b628e-f628-48c1-aab1-6401b3cfb87c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-54xw8       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m8s
	  kube-system                 kube-proxy-8h8qr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m1s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  3m8s (x2 over 3m8s)  kubelet          Node ha-959539-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m8s (x2 over 3m8s)  kubelet          Node ha-959539-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m8s (x2 over 3m8s)  kubelet          Node ha-959539-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m6s                 node-controller  Node ha-959539-m04 event: Registered Node ha-959539-m04 in Controller
	  Normal  RegisteredNode           3m6s                 node-controller  Node ha-959539-m04 event: Registered Node ha-959539-m04 in Controller
	  Normal  RegisteredNode           3m6s                 node-controller  Node ha-959539-m04 event: Registered Node ha-959539-m04 in Controller
	  Normal  NodeReady                2m47s                kubelet          Node ha-959539-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep23 23:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051430] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037836] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.729802] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.844348] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.545165] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.336873] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.055717] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062835] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.175047] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.141488] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.281309] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +3.886660] systemd-fstab-generator[750]: Ignoring "noauto" option for root device
	[Sep24 00:00] systemd-fstab-generator[888]: Ignoring "noauto" option for root device
	[  +0.061155] kauditd_printk_skb: 158 callbacks suppressed
	[  +8.064379] kauditd_printk_skb: 74 callbacks suppressed
	[  +2.136832] systemd-fstab-generator[1303]: Ignoring "noauto" option for root device
	[  +2.892614] kauditd_printk_skb: 43 callbacks suppressed
	[ +11.264409] kauditd_printk_skb: 15 callbacks suppressed
	[Sep24 00:01] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [d5459f3bc533d3928a2614c8aadaac043e36370d032cae8eb72d11664bbb74f2] <==
	{"level":"warn","ts":"2024-09-24T00:06:38.907072Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:38.946776Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:38.973798Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:38.975997Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:38.980284Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:39.146876Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:39.156190Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:39.162615Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:39.170434Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:39.174721Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:39.178090Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:39.184525Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:39.190184Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:39.196511Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:39.201296Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:39.204735Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:39.211523Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:39.217039Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:39.223699Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:39.227583Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:39.230510Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:39.234612Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:39.240166Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:39.246893Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:39.249029Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 00:06:39 up 7 min,  0 users,  load average: 0.50, 0.25, 0.11
	Linux ha-959539 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [1596300e66cf2ee0f15d5a362238ef4e99cec8e83ea81be4670347659bbd88e2] <==
	I0924 00:06:05.422531       1 main.go:322] Node ha-959539-m02 has CIDR [10.244.1.0/24] 
	I0924 00:06:15.413375       1 main.go:295] Handling node with IPs: map[192.168.39.231:{}]
	I0924 00:06:15.413431       1 main.go:299] handling current node
	I0924 00:06:15.413451       1 main.go:295] Handling node with IPs: map[192.168.39.71:{}]
	I0924 00:06:15.413457       1 main.go:322] Node ha-959539-m02 has CIDR [10.244.1.0/24] 
	I0924 00:06:15.413644       1 main.go:295] Handling node with IPs: map[192.168.39.244:{}]
	I0924 00:06:15.413665       1 main.go:322] Node ha-959539-m03 has CIDR [10.244.2.0/24] 
	I0924 00:06:15.413709       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0924 00:06:15.413714       1 main.go:322] Node ha-959539-m04 has CIDR [10.244.3.0/24] 
	I0924 00:06:25.420493       1 main.go:295] Handling node with IPs: map[192.168.39.231:{}]
	I0924 00:06:25.420595       1 main.go:299] handling current node
	I0924 00:06:25.420622       1 main.go:295] Handling node with IPs: map[192.168.39.71:{}]
	I0924 00:06:25.420640       1 main.go:322] Node ha-959539-m02 has CIDR [10.244.1.0/24] 
	I0924 00:06:25.420821       1 main.go:295] Handling node with IPs: map[192.168.39.244:{}]
	I0924 00:06:25.420897       1 main.go:322] Node ha-959539-m03 has CIDR [10.244.2.0/24] 
	I0924 00:06:25.420983       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0924 00:06:25.421005       1 main.go:322] Node ha-959539-m04 has CIDR [10.244.3.0/24] 
	I0924 00:06:35.421247       1 main.go:295] Handling node with IPs: map[192.168.39.231:{}]
	I0924 00:06:35.421291       1 main.go:299] handling current node
	I0924 00:06:35.421322       1 main.go:295] Handling node with IPs: map[192.168.39.71:{}]
	I0924 00:06:35.421373       1 main.go:322] Node ha-959539-m02 has CIDR [10.244.1.0/24] 
	I0924 00:06:35.421530       1 main.go:295] Handling node with IPs: map[192.168.39.244:{}]
	I0924 00:06:35.421553       1 main.go:322] Node ha-959539-m03 has CIDR [10.244.2.0/24] 
	I0924 00:06:35.421602       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0924 00:06:35.421608       1 main.go:322] Node ha-959539-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [8c911375acec93e238f1022936d6afb98f697168fca75291f15649e13def2288] <==
	I0924 00:00:07.916652       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0924 00:00:12.613775       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0924 00:00:12.673306       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0924 00:00:12.714278       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0924 00:00:13.518109       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0924 00:00:13.589977       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0924 00:02:25.922866       1 wrap.go:53] "Timeout or abort while handling" logger="UnhandledError" method="POST" URI="/api/v1/namespaces/kube-system/events" auditID="9c890d06-5a2f-40bc-b52e-84153e1ff033"
	E0924 00:02:25.923053       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="6.218µs" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E0924 00:02:25.923547       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 800.044µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0924 00:02:57.928651       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42468: use of closed network connection
	E0924 00:02:58.108585       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42478: use of closed network connection
	E0924 00:02:58.286933       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42500: use of closed network connection
	E0924 00:02:58.488672       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42526: use of closed network connection
	E0924 00:02:58.667114       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42542: use of closed network connection
	E0924 00:02:58.850942       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42560: use of closed network connection
	E0924 00:02:59.040828       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42576: use of closed network connection
	E0924 00:02:59.220980       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42590: use of closed network connection
	E0924 00:02:59.394600       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42608: use of closed network connection
	E0924 00:02:59.676143       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42636: use of closed network connection
	E0924 00:02:59.860764       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42646: use of closed network connection
	E0924 00:03:00.047956       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42676: use of closed network connection
	E0924 00:03:00.214607       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42700: use of closed network connection
	E0924 00:03:00.390729       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42708: use of closed network connection
	E0924 00:03:00.581800       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42734: use of closed network connection
	W0924 00:04:17.715664       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.231 192.168.39.244]
	
	
	==> kube-controller-manager [a42356ed739fd4c4bc65cb2d15edfb13fc395f88d73e9c25e9c7f9799ae6b974] <==
	I0924 00:03:31.919493       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-959539-m04" podCIDRs=["10.244.3.0/24"]
	I0924 00:03:31.919545       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	I0924 00:03:31.919581       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	I0924 00:03:31.939956       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	I0924 00:03:32.140223       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	I0924 00:03:32.547615       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	I0924 00:03:33.004678       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-959539-m04"
	I0924 00:03:33.023454       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	I0924 00:03:33.163542       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	I0924 00:03:33.196770       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	I0924 00:03:33.276017       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	I0924 00:03:33.293134       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	I0924 00:03:42.271059       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	I0924 00:03:52.595797       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-959539-m04"
	I0924 00:03:52.595900       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	I0924 00:03:52.614607       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	I0924 00:03:53.023412       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	I0924 00:04:02.710901       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	I0924 00:04:48.048138       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-959539-m04"
	I0924 00:04:48.048400       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m02"
	I0924 00:04:48.078576       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m02"
	I0924 00:04:48.166696       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="57.971716ms"
	I0924 00:04:48.166889       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="102.521µs"
	I0924 00:04:48.406838       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m02"
	I0924 00:04:53.246642       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m02"
	
	
	==> kube-proxy [cdf912809c47a4ae8dddad14a4f537540e25c97121c16fa62418fc1290a8b94b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0924 00:00:14.873543       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0924 00:00:14.915849       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.231"]
	E0924 00:00:14.916021       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0924 00:00:14.966031       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0924 00:00:14.966075       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0924 00:00:14.966099       1 server_linux.go:169] "Using iptables Proxier"
	I0924 00:00:14.979823       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0924 00:00:14.980813       1 server.go:483] "Version info" version="v1.31.1"
	I0924 00:00:14.980842       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 00:00:14.989078       1 config.go:199] "Starting service config controller"
	I0924 00:00:14.990228       1 config.go:105] "Starting endpoint slice config controller"
	I0924 00:00:14.990251       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0924 00:00:14.993409       1 config.go:328] "Starting node config controller"
	I0924 00:00:14.993460       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0924 00:00:14.993657       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0924 00:00:15.090975       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0924 00:00:15.094378       1 shared_informer.go:320] Caches are synced for node config
	I0924 00:00:15.094379       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [af224d12661c48914733f3b63e29d17a8fceaf62af777bb609750fb7c8dbd9fd] <==
	E0924 00:00:07.294311       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 00:00:07.525201       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0924 00:00:07.525260       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0924 00:00:10.263814       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0924 00:02:25.214912       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-g4nkw\": pod kindnet-g4nkw is already assigned to node \"ha-959539-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-g4nkw" node="ha-959539-m03"
	E0924 00:02:25.215083       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-g4nkw\": pod kindnet-g4nkw is already assigned to node \"ha-959539-m03\"" pod="kube-system/kindnet-g4nkw"
	E0924 00:02:25.219021       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-b82ch\": pod kube-proxy-b82ch is already assigned to node \"ha-959539-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-b82ch" node="ha-959539-m03"
	E0924 00:02:25.222512       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 5bf376fc-8dbe-4817-874c-506f5dc4d2e7(kube-system/kube-proxy-b82ch) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-b82ch"
	E0924 00:02:25.222635       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-b82ch\": pod kube-proxy-b82ch is already assigned to node \"ha-959539-m03\"" pod="kube-system/kube-proxy-b82ch"
	I0924 00:02:25.222722       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-b82ch" node="ha-959539-m03"
	E0924 00:02:26.361885       1 schedule_one.go:953] "Scheduler cache AssumePod failed" err="pod 32f2f545-b1a1-4f2b-8ee7-7fdb6409bc5f(kube-system/kindnet-g4nkw) is in the cache, so can't be assumed" pod="kube-system/kindnet-g4nkw"
	E0924 00:02:26.362043       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="pod 32f2f545-b1a1-4f2b-8ee7-7fdb6409bc5f(kube-system/kindnet-g4nkw) is in the cache, so can't be assumed" pod="kube-system/kindnet-g4nkw"
	I0924 00:02:26.362147       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-g4nkw" node="ha-959539-m03"
	E0924 00:02:52.586244       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-m5qhr\": pod busybox-7dff88458-m5qhr is already assigned to node \"ha-959539-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-m5qhr" node="ha-959539-m02"
	E0924 00:02:52.586487       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-m5qhr\": pod busybox-7dff88458-m5qhr is already assigned to node \"ha-959539-m02\"" pod="default/busybox-7dff88458-m5qhr"
	E0924 00:02:52.609367       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-7q7xr\": pod busybox-7dff88458-7q7xr is already assigned to node \"ha-959539\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-7q7xr" node="ha-959539"
	E0924 00:02:52.609752       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 7ee31bb5-0c3d-4d9e-9c9e-32d3411bc68a(default/busybox-7dff88458-7q7xr) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-7q7xr"
	E0924 00:02:52.609813       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-7q7xr\": pod busybox-7dff88458-7q7xr is already assigned to node \"ha-959539\"" pod="default/busybox-7dff88458-7q7xr"
	I0924 00:02:52.609856       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-7q7xr" node="ha-959539"
	E0924 00:03:31.974702       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-h87p2\": pod kube-proxy-h87p2 is already assigned to node \"ha-959539-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-h87p2" node="ha-959539-m04"
	E0924 00:03:31.975081       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 9594238c-336e-479f-8424-bf5663475f7d(kube-system/kube-proxy-h87p2) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-h87p2"
	E0924 00:03:31.975198       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-h87p2\": pod kube-proxy-h87p2 is already assigned to node \"ha-959539-m04\"" pod="kube-system/kube-proxy-h87p2"
	I0924 00:03:31.975297       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-h87p2" node="ha-959539-m04"
	E0924 00:03:32.025106       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-zfglg\": pod kindnet-zfglg is already assigned to node \"ha-959539-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-zfglg" node="ha-959539-m04"
	E0924 00:03:32.025246       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-zfglg\": pod kindnet-zfglg is already assigned to node \"ha-959539-m04\"" pod="kube-system/kindnet-zfglg"
	
	
	==> kubelet <==
	Sep 24 00:05:12 ha-959539 kubelet[1310]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 24 00:05:12 ha-959539 kubelet[1310]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 24 00:05:12 ha-959539 kubelet[1310]: E0924 00:05:12.631688    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136312631299697,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:05:12 ha-959539 kubelet[1310]: E0924 00:05:12.631721    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136312631299697,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:05:22 ha-959539 kubelet[1310]: E0924 00:05:22.633953    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136322633526599,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:05:22 ha-959539 kubelet[1310]: E0924 00:05:22.634395    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136322633526599,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:05:32 ha-959539 kubelet[1310]: E0924 00:05:32.636027    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136332635686531,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:05:32 ha-959539 kubelet[1310]: E0924 00:05:32.636067    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136332635686531,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:05:42 ha-959539 kubelet[1310]: E0924 00:05:42.638244    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136342637928063,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:05:42 ha-959539 kubelet[1310]: E0924 00:05:42.638707    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136342637928063,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:05:52 ha-959539 kubelet[1310]: E0924 00:05:52.640591    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136352640129305,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:05:52 ha-959539 kubelet[1310]: E0924 00:05:52.640630    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136352640129305,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:06:02 ha-959539 kubelet[1310]: E0924 00:06:02.642027    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136362641594633,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:06:02 ha-959539 kubelet[1310]: E0924 00:06:02.642364    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136362641594633,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:06:12 ha-959539 kubelet[1310]: E0924 00:06:12.540506    1310 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 24 00:06:12 ha-959539 kubelet[1310]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 24 00:06:12 ha-959539 kubelet[1310]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 24 00:06:12 ha-959539 kubelet[1310]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 24 00:06:12 ha-959539 kubelet[1310]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 24 00:06:12 ha-959539 kubelet[1310]: E0924 00:06:12.644146    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136372643846607,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:06:12 ha-959539 kubelet[1310]: E0924 00:06:12.644181    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136372643846607,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:06:22 ha-959539 kubelet[1310]: E0924 00:06:22.646770    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136382645975347,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:06:22 ha-959539 kubelet[1310]: E0924 00:06:22.647251    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136382645975347,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:06:32 ha-959539 kubelet[1310]: E0924 00:06:32.649495    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136392649118233,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:06:32 ha-959539 kubelet[1310]: E0924 00:06:32.649564    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136392649118233,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-959539 -n ha-959539
helpers_test.go:261: (dbg) Run:  kubectl --context ha-959539 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.58s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (6.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-amd64 -p ha-959539 status -v=7 --alsologtostderr: (4.034336046s)
ha_test.go:435: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-959539 status -v=7 --alsologtostderr": 
ha_test.go:438: status says not all four hosts are running: args "out/minikube-linux-amd64 -p ha-959539 status -v=7 --alsologtostderr": 
ha_test.go:441: status says not all four kubelets are running: args "out/minikube-linux-amd64 -p ha-959539 status -v=7 --alsologtostderr": 
ha_test.go:444: status says not all three apiservers are running: args "out/minikube-linux-amd64 -p ha-959539 status -v=7 --alsologtostderr": 
ha_test.go:448: (dbg) Run:  kubectl get nodes
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-959539 -n ha-959539
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-959539 logs -n 25: (1.36064262s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-959539 ssh -n                                                                 | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-959539 cp ha-959539-m03:/home/docker/cp-test.txt                              | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539:/home/docker/cp-test_ha-959539-m03_ha-959539.txt                       |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n                                                                 | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n ha-959539 sudo cat                                              | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | /home/docker/cp-test_ha-959539-m03_ha-959539.txt                                 |           |         |         |                     |                     |
	| cp      | ha-959539 cp ha-959539-m03:/home/docker/cp-test.txt                              | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m02:/home/docker/cp-test_ha-959539-m03_ha-959539-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n                                                                 | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n ha-959539-m02 sudo cat                                          | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | /home/docker/cp-test_ha-959539-m03_ha-959539-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-959539 cp ha-959539-m03:/home/docker/cp-test.txt                              | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m04:/home/docker/cp-test_ha-959539-m03_ha-959539-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n                                                                 | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n ha-959539-m04 sudo cat                                          | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | /home/docker/cp-test_ha-959539-m03_ha-959539-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-959539 cp testdata/cp-test.txt                                                | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n                                                                 | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-959539 cp ha-959539-m04:/home/docker/cp-test.txt                              | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4152452105/001/cp-test_ha-959539-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n                                                                 | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-959539 cp ha-959539-m04:/home/docker/cp-test.txt                              | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539:/home/docker/cp-test_ha-959539-m04_ha-959539.txt                       |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n                                                                 | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n ha-959539 sudo cat                                              | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | /home/docker/cp-test_ha-959539-m04_ha-959539.txt                                 |           |         |         |                     |                     |
	| cp      | ha-959539 cp ha-959539-m04:/home/docker/cp-test.txt                              | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m02:/home/docker/cp-test_ha-959539-m04_ha-959539-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n                                                                 | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n ha-959539-m02 sudo cat                                          | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | /home/docker/cp-test_ha-959539-m04_ha-959539-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-959539 cp ha-959539-m04:/home/docker/cp-test.txt                              | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m03:/home/docker/cp-test_ha-959539-m04_ha-959539-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n                                                                 | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n ha-959539-m03 sudo cat                                          | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | /home/docker/cp-test_ha-959539-m04_ha-959539-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-959539 node stop m02 -v=7                                                     | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-959539 node start m02 -v=7                                                    | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:06 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 23:59:26
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 23:59:26.807239   26218 out.go:345] Setting OutFile to fd 1 ...
	I0923 23:59:26.807515   26218 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 23:59:26.807525   26218 out.go:358] Setting ErrFile to fd 2...
	I0923 23:59:26.807529   26218 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 23:59:26.807708   26218 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7623/.minikube/bin
	I0923 23:59:26.808255   26218 out.go:352] Setting JSON to false
	I0923 23:59:26.809081   26218 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2511,"bootTime":1727133456,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 23:59:26.809190   26218 start.go:139] virtualization: kvm guest
	I0923 23:59:26.811490   26218 out.go:177] * [ha-959539] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 23:59:26.813253   26218 notify.go:220] Checking for updates...
	I0923 23:59:26.813308   26218 out.go:177]   - MINIKUBE_LOCATION=19696
	I0923 23:59:26.814742   26218 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 23:59:26.816098   26218 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0923 23:59:26.817558   26218 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7623/.minikube
	I0923 23:59:26.818772   26218 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 23:59:26.819994   26218 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 23:59:26.821406   26218 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 23:59:26.856627   26218 out.go:177] * Using the kvm2 driver based on user configuration
	I0923 23:59:26.857800   26218 start.go:297] selected driver: kvm2
	I0923 23:59:26.857813   26218 start.go:901] validating driver "kvm2" against <nil>
	I0923 23:59:26.857824   26218 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 23:59:26.858493   26218 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 23:59:26.858582   26218 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19696-7623/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0923 23:59:26.873962   26218 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0923 23:59:26.874005   26218 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 23:59:26.874238   26218 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 23:59:26.874272   26218 cni.go:84] Creating CNI manager for ""
	I0923 23:59:26.874317   26218 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0923 23:59:26.874326   26218 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0923 23:59:26.874369   26218 start.go:340] cluster config:
	{Name:ha-959539 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-959539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 23:59:26.874490   26218 iso.go:125] acquiring lock: {Name:mk8a983e49e920eac32e0f79c34143c3d478115e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 23:59:26.876392   26218 out.go:177] * Starting "ha-959539" primary control-plane node in "ha-959539" cluster
	I0923 23:59:26.877566   26218 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 23:59:26.877605   26218 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0923 23:59:26.877627   26218 cache.go:56] Caching tarball of preloaded images
	I0923 23:59:26.877724   26218 preload.go:172] Found /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0923 23:59:26.877737   26218 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0923 23:59:26.878058   26218 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/config.json ...
	I0923 23:59:26.878079   26218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/config.json: {Name:mkb5e645fc53383c85997a2cb75a196eaec42645 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:59:26.878228   26218 start.go:360] acquireMachinesLock for ha-959539: {Name:mkdd0eb053efc5a47fd01c8410d7f603dccb8d0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 23:59:26.878263   26218 start.go:364] duration metric: took 19.539µs to acquireMachinesLock for "ha-959539"
	I0923 23:59:26.878286   26218 start.go:93] Provisioning new machine with config: &{Name:ha-959539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-959539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 23:59:26.878346   26218 start.go:125] createHost starting for "" (driver="kvm2")
	I0923 23:59:26.879811   26218 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 23:59:26.879957   26218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:59:26.879996   26218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:59:26.894584   26218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39565
	I0923 23:59:26.895047   26218 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:59:26.895660   26218 main.go:141] libmachine: Using API Version  1
	I0923 23:59:26.895681   26218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:59:26.896020   26218 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:59:26.896226   26218 main.go:141] libmachine: (ha-959539) Calling .GetMachineName
	I0923 23:59:26.896388   26218 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0923 23:59:26.896534   26218 start.go:159] libmachine.API.Create for "ha-959539" (driver="kvm2")
	I0923 23:59:26.896578   26218 client.go:168] LocalClient.Create starting
	I0923 23:59:26.896605   26218 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem
	I0923 23:59:26.896637   26218 main.go:141] libmachine: Decoding PEM data...
	I0923 23:59:26.896658   26218 main.go:141] libmachine: Parsing certificate...
	I0923 23:59:26.896703   26218 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem
	I0923 23:59:26.896727   26218 main.go:141] libmachine: Decoding PEM data...
	I0923 23:59:26.896739   26218 main.go:141] libmachine: Parsing certificate...
	I0923 23:59:26.896757   26218 main.go:141] libmachine: Running pre-create checks...
	I0923 23:59:26.896765   26218 main.go:141] libmachine: (ha-959539) Calling .PreCreateCheck
	I0923 23:59:26.897146   26218 main.go:141] libmachine: (ha-959539) Calling .GetConfigRaw
	I0923 23:59:26.897553   26218 main.go:141] libmachine: Creating machine...
	I0923 23:59:26.897565   26218 main.go:141] libmachine: (ha-959539) Calling .Create
	I0923 23:59:26.897712   26218 main.go:141] libmachine: (ha-959539) Creating KVM machine...
	I0923 23:59:26.899261   26218 main.go:141] libmachine: (ha-959539) DBG | found existing default KVM network
	I0923 23:59:26.899973   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:26.899836   26241 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002111e0}
	I0923 23:59:26.900022   26218 main.go:141] libmachine: (ha-959539) DBG | created network xml: 
	I0923 23:59:26.900042   26218 main.go:141] libmachine: (ha-959539) DBG | <network>
	I0923 23:59:26.900051   26218 main.go:141] libmachine: (ha-959539) DBG |   <name>mk-ha-959539</name>
	I0923 23:59:26.900066   26218 main.go:141] libmachine: (ha-959539) DBG |   <dns enable='no'/>
	I0923 23:59:26.900077   26218 main.go:141] libmachine: (ha-959539) DBG |   
	I0923 23:59:26.900085   26218 main.go:141] libmachine: (ha-959539) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0923 23:59:26.900097   26218 main.go:141] libmachine: (ha-959539) DBG |     <dhcp>
	I0923 23:59:26.900105   26218 main.go:141] libmachine: (ha-959539) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0923 23:59:26.900116   26218 main.go:141] libmachine: (ha-959539) DBG |     </dhcp>
	I0923 23:59:26.900122   26218 main.go:141] libmachine: (ha-959539) DBG |   </ip>
	I0923 23:59:26.900132   26218 main.go:141] libmachine: (ha-959539) DBG |   
	I0923 23:59:26.900140   26218 main.go:141] libmachine: (ha-959539) DBG | </network>
	I0923 23:59:26.900211   26218 main.go:141] libmachine: (ha-959539) DBG | 
	I0923 23:59:26.905213   26218 main.go:141] libmachine: (ha-959539) DBG | trying to create private KVM network mk-ha-959539 192.168.39.0/24...
	I0923 23:59:26.977916   26218 main.go:141] libmachine: (ha-959539) Setting up store path in /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539 ...
	I0923 23:59:26.977955   26218 main.go:141] libmachine: (ha-959539) Building disk image from file:///home/jenkins/minikube-integration/19696-7623/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0923 23:59:26.977972   26218 main.go:141] libmachine: (ha-959539) DBG | private KVM network mk-ha-959539 192.168.39.0/24 created
	I0923 23:59:26.977988   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:26.977847   26241 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19696-7623/.minikube
	I0923 23:59:26.978009   26218 main.go:141] libmachine: (ha-959539) Downloading /home/jenkins/minikube-integration/19696-7623/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19696-7623/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0923 23:59:27.232339   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:27.232194   26241 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/id_rsa...
	I0923 23:59:27.673404   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:27.673251   26241 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/ha-959539.rawdisk...
	I0923 23:59:27.673433   26218 main.go:141] libmachine: (ha-959539) DBG | Writing magic tar header
	I0923 23:59:27.673445   26218 main.go:141] libmachine: (ha-959539) DBG | Writing SSH key tar header
	I0923 23:59:27.673465   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:27.673358   26241 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539 ...
	I0923 23:59:27.673485   26218 main.go:141] libmachine: (ha-959539) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539 (perms=drwx------)
	I0923 23:59:27.673503   26218 main.go:141] libmachine: (ha-959539) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539
	I0923 23:59:27.673514   26218 main.go:141] libmachine: (ha-959539) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623/.minikube/machines (perms=drwxr-xr-x)
	I0923 23:59:27.673524   26218 main.go:141] libmachine: (ha-959539) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623/.minikube (perms=drwxr-xr-x)
	I0923 23:59:27.673532   26218 main.go:141] libmachine: (ha-959539) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623 (perms=drwxrwxr-x)
	I0923 23:59:27.673541   26218 main.go:141] libmachine: (ha-959539) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0923 23:59:27.673551   26218 main.go:141] libmachine: (ha-959539) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623/.minikube/machines
	I0923 23:59:27.673563   26218 main.go:141] libmachine: (ha-959539) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623/.minikube
	I0923 23:59:27.673577   26218 main.go:141] libmachine: (ha-959539) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623
	I0923 23:59:27.673589   26218 main.go:141] libmachine: (ha-959539) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0923 23:59:27.673598   26218 main.go:141] libmachine: (ha-959539) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0923 23:59:27.673607   26218 main.go:141] libmachine: (ha-959539) Creating domain...
	I0923 23:59:27.673616   26218 main.go:141] libmachine: (ha-959539) DBG | Checking permissions on dir: /home/jenkins
	I0923 23:59:27.673623   26218 main.go:141] libmachine: (ha-959539) DBG | Checking permissions on dir: /home
	I0923 23:59:27.673640   26218 main.go:141] libmachine: (ha-959539) DBG | Skipping /home - not owner
	I0923 23:59:27.674680   26218 main.go:141] libmachine: (ha-959539) define libvirt domain using xml: 
	I0923 23:59:27.674695   26218 main.go:141] libmachine: (ha-959539) <domain type='kvm'>
	I0923 23:59:27.674701   26218 main.go:141] libmachine: (ha-959539)   <name>ha-959539</name>
	I0923 23:59:27.674705   26218 main.go:141] libmachine: (ha-959539)   <memory unit='MiB'>2200</memory>
	I0923 23:59:27.674740   26218 main.go:141] libmachine: (ha-959539)   <vcpu>2</vcpu>
	I0923 23:59:27.674764   26218 main.go:141] libmachine: (ha-959539)   <features>
	I0923 23:59:27.674777   26218 main.go:141] libmachine: (ha-959539)     <acpi/>
	I0923 23:59:27.674788   26218 main.go:141] libmachine: (ha-959539)     <apic/>
	I0923 23:59:27.674801   26218 main.go:141] libmachine: (ha-959539)     <pae/>
	I0923 23:59:27.674828   26218 main.go:141] libmachine: (ha-959539)     
	I0923 23:59:27.674851   26218 main.go:141] libmachine: (ha-959539)   </features>
	I0923 23:59:27.674870   26218 main.go:141] libmachine: (ha-959539)   <cpu mode='host-passthrough'>
	I0923 23:59:27.674879   26218 main.go:141] libmachine: (ha-959539)   
	I0923 23:59:27.674889   26218 main.go:141] libmachine: (ha-959539)   </cpu>
	I0923 23:59:27.674905   26218 main.go:141] libmachine: (ha-959539)   <os>
	I0923 23:59:27.674917   26218 main.go:141] libmachine: (ha-959539)     <type>hvm</type>
	I0923 23:59:27.674943   26218 main.go:141] libmachine: (ha-959539)     <boot dev='cdrom'/>
	I0923 23:59:27.674960   26218 main.go:141] libmachine: (ha-959539)     <boot dev='hd'/>
	I0923 23:59:27.674974   26218 main.go:141] libmachine: (ha-959539)     <bootmenu enable='no'/>
	I0923 23:59:27.674985   26218 main.go:141] libmachine: (ha-959539)   </os>
	I0923 23:59:27.674997   26218 main.go:141] libmachine: (ha-959539)   <devices>
	I0923 23:59:27.675009   26218 main.go:141] libmachine: (ha-959539)     <disk type='file' device='cdrom'>
	I0923 23:59:27.675024   26218 main.go:141] libmachine: (ha-959539)       <source file='/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/boot2docker.iso'/>
	I0923 23:59:27.675037   26218 main.go:141] libmachine: (ha-959539)       <target dev='hdc' bus='scsi'/>
	I0923 23:59:27.675049   26218 main.go:141] libmachine: (ha-959539)       <readonly/>
	I0923 23:59:27.675060   26218 main.go:141] libmachine: (ha-959539)     </disk>
	I0923 23:59:27.675075   26218 main.go:141] libmachine: (ha-959539)     <disk type='file' device='disk'>
	I0923 23:59:27.675088   26218 main.go:141] libmachine: (ha-959539)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0923 23:59:27.675111   26218 main.go:141] libmachine: (ha-959539)       <source file='/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/ha-959539.rawdisk'/>
	I0923 23:59:27.675127   26218 main.go:141] libmachine: (ha-959539)       <target dev='hda' bus='virtio'/>
	I0923 23:59:27.675141   26218 main.go:141] libmachine: (ha-959539)     </disk>
	I0923 23:59:27.675152   26218 main.go:141] libmachine: (ha-959539)     <interface type='network'>
	I0923 23:59:27.675165   26218 main.go:141] libmachine: (ha-959539)       <source network='mk-ha-959539'/>
	I0923 23:59:27.675175   26218 main.go:141] libmachine: (ha-959539)       <model type='virtio'/>
	I0923 23:59:27.675185   26218 main.go:141] libmachine: (ha-959539)     </interface>
	I0923 23:59:27.675192   26218 main.go:141] libmachine: (ha-959539)     <interface type='network'>
	I0923 23:59:27.675201   26218 main.go:141] libmachine: (ha-959539)       <source network='default'/>
	I0923 23:59:27.675206   26218 main.go:141] libmachine: (ha-959539)       <model type='virtio'/>
	I0923 23:59:27.675210   26218 main.go:141] libmachine: (ha-959539)     </interface>
	I0923 23:59:27.675217   26218 main.go:141] libmachine: (ha-959539)     <serial type='pty'>
	I0923 23:59:27.675222   26218 main.go:141] libmachine: (ha-959539)       <target port='0'/>
	I0923 23:59:27.675228   26218 main.go:141] libmachine: (ha-959539)     </serial>
	I0923 23:59:27.675247   26218 main.go:141] libmachine: (ha-959539)     <console type='pty'>
	I0923 23:59:27.675254   26218 main.go:141] libmachine: (ha-959539)       <target type='serial' port='0'/>
	I0923 23:59:27.675259   26218 main.go:141] libmachine: (ha-959539)     </console>
	I0923 23:59:27.675262   26218 main.go:141] libmachine: (ha-959539)     <rng model='virtio'>
	I0923 23:59:27.675273   26218 main.go:141] libmachine: (ha-959539)       <backend model='random'>/dev/random</backend>
	I0923 23:59:27.675279   26218 main.go:141] libmachine: (ha-959539)     </rng>
	I0923 23:59:27.675284   26218 main.go:141] libmachine: (ha-959539)     
	I0923 23:59:27.675289   26218 main.go:141] libmachine: (ha-959539)     
	I0923 23:59:27.675306   26218 main.go:141] libmachine: (ha-959539)   </devices>
	I0923 23:59:27.675324   26218 main.go:141] libmachine: (ha-959539) </domain>
	I0923 23:59:27.675341   26218 main.go:141] libmachine: (ha-959539) 
	I0923 23:59:27.679682   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:f8:7e:29 in network default
	I0923 23:59:27.680257   26218 main.go:141] libmachine: (ha-959539) Ensuring networks are active...
	I0923 23:59:27.680301   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:27.680992   26218 main.go:141] libmachine: (ha-959539) Ensuring network default is active
	I0923 23:59:27.681339   26218 main.go:141] libmachine: (ha-959539) Ensuring network mk-ha-959539 is active
	I0923 23:59:27.681827   26218 main.go:141] libmachine: (ha-959539) Getting domain xml...
	I0923 23:59:27.682529   26218 main.go:141] libmachine: (ha-959539) Creating domain...
	I0923 23:59:28.880638   26218 main.go:141] libmachine: (ha-959539) Waiting to get IP...
	I0923 23:59:28.881412   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:28.881793   26218 main.go:141] libmachine: (ha-959539) DBG | unable to find current IP address of domain ha-959539 in network mk-ha-959539
	I0923 23:59:28.881827   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:28.881764   26241 retry.go:31] will retry after 258.264646ms: waiting for machine to come up
	I0923 23:59:29.141441   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:29.141781   26218 main.go:141] libmachine: (ha-959539) DBG | unable to find current IP address of domain ha-959539 in network mk-ha-959539
	I0923 23:59:29.141818   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:29.141725   26241 retry.go:31] will retry after 275.827745ms: waiting for machine to come up
	I0923 23:59:29.419197   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:29.419582   26218 main.go:141] libmachine: (ha-959539) DBG | unable to find current IP address of domain ha-959539 in network mk-ha-959539
	I0923 23:59:29.419610   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:29.419535   26241 retry.go:31] will retry after 461.76652ms: waiting for machine to come up
	I0923 23:59:29.883216   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:29.883789   26218 main.go:141] libmachine: (ha-959539) DBG | unable to find current IP address of domain ha-959539 in network mk-ha-959539
	I0923 23:59:29.883811   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:29.883726   26241 retry.go:31] will retry after 445.570936ms: waiting for machine to come up
	I0923 23:59:30.331342   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:30.331760   26218 main.go:141] libmachine: (ha-959539) DBG | unable to find current IP address of domain ha-959539 in network mk-ha-959539
	I0923 23:59:30.331789   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:30.331719   26241 retry.go:31] will retry after 749.255419ms: waiting for machine to come up
	I0923 23:59:31.082478   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:31.082950   26218 main.go:141] libmachine: (ha-959539) DBG | unable to find current IP address of domain ha-959539 in network mk-ha-959539
	I0923 23:59:31.082971   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:31.082889   26241 retry.go:31] will retry after 773.348958ms: waiting for machine to come up
	I0923 23:59:31.857788   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:31.858274   26218 main.go:141] libmachine: (ha-959539) DBG | unable to find current IP address of domain ha-959539 in network mk-ha-959539
	I0923 23:59:31.858300   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:31.858204   26241 retry.go:31] will retry after 752.285326ms: waiting for machine to come up
	I0923 23:59:32.611583   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:32.612075   26218 main.go:141] libmachine: (ha-959539) DBG | unable to find current IP address of domain ha-959539 in network mk-ha-959539
	I0923 23:59:32.612098   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:32.612034   26241 retry.go:31] will retry after 1.137504115s: waiting for machine to come up
	I0923 23:59:33.751665   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:33.751976   26218 main.go:141] libmachine: (ha-959539) DBG | unable to find current IP address of domain ha-959539 in network mk-ha-959539
	I0923 23:59:33.752009   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:33.751932   26241 retry.go:31] will retry after 1.241947238s: waiting for machine to come up
	I0923 23:59:34.995017   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:34.995386   26218 main.go:141] libmachine: (ha-959539) DBG | unable to find current IP address of domain ha-959539 in network mk-ha-959539
	I0923 23:59:34.995400   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:34.995360   26241 retry.go:31] will retry after 1.449064591s: waiting for machine to come up
	I0923 23:59:36.446933   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:36.447337   26218 main.go:141] libmachine: (ha-959539) DBG | unable to find current IP address of domain ha-959539 in network mk-ha-959539
	I0923 23:59:36.447388   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:36.447302   26241 retry.go:31] will retry after 2.693587186s: waiting for machine to come up
	I0923 23:59:39.144265   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:39.144685   26218 main.go:141] libmachine: (ha-959539) DBG | unable to find current IP address of domain ha-959539 in network mk-ha-959539
	I0923 23:59:39.144701   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:39.144641   26241 retry.go:31] will retry after 2.637044367s: waiting for machine to come up
	I0923 23:59:41.785491   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:41.785902   26218 main.go:141] libmachine: (ha-959539) DBG | unable to find current IP address of domain ha-959539 in network mk-ha-959539
	I0923 23:59:41.785918   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:41.785859   26241 retry.go:31] will retry after 4.357362487s: waiting for machine to come up
	I0923 23:59:46.147970   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:46.148484   26218 main.go:141] libmachine: (ha-959539) DBG | unable to find current IP address of domain ha-959539 in network mk-ha-959539
	I0923 23:59:46.148509   26218 main.go:141] libmachine: (ha-959539) DBG | I0923 23:59:46.148440   26241 retry.go:31] will retry after 4.358423196s: waiting for machine to come up
	I0923 23:59:50.510236   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:50.510860   26218 main.go:141] libmachine: (ha-959539) Found IP for machine: 192.168.39.231
	I0923 23:59:50.510881   26218 main.go:141] libmachine: (ha-959539) Reserving static IP address...
	I0923 23:59:50.510893   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has current primary IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:50.511347   26218 main.go:141] libmachine: (ha-959539) DBG | unable to find host DHCP lease matching {name: "ha-959539", mac: "52:54:00:99:17:69", ip: "192.168.39.231"} in network mk-ha-959539
	I0923 23:59:50.583983   26218 main.go:141] libmachine: (ha-959539) DBG | Getting to WaitForSSH function...
	I0923 23:59:50.584012   26218 main.go:141] libmachine: (ha-959539) Reserved static IP address: 192.168.39.231
	I0923 23:59:50.584024   26218 main.go:141] libmachine: (ha-959539) Waiting for SSH to be available...
	I0923 23:59:50.587176   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:50.587581   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:minikube Clientid:01:52:54:00:99:17:69}
	I0923 23:59:50.587613   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:50.587727   26218 main.go:141] libmachine: (ha-959539) DBG | Using SSH client type: external
	I0923 23:59:50.587740   26218 main.go:141] libmachine: (ha-959539) DBG | Using SSH private key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/id_rsa (-rw-------)
	I0923 23:59:50.587808   26218 main.go:141] libmachine: (ha-959539) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.231 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0923 23:59:50.587835   26218 main.go:141] libmachine: (ha-959539) DBG | About to run SSH command:
	I0923 23:59:50.587849   26218 main.go:141] libmachine: (ha-959539) DBG | exit 0
	I0923 23:59:50.716142   26218 main.go:141] libmachine: (ha-959539) DBG | SSH cmd err, output: <nil>: 
	I0923 23:59:50.716469   26218 main.go:141] libmachine: (ha-959539) KVM machine creation complete!
	I0923 23:59:50.716772   26218 main.go:141] libmachine: (ha-959539) Calling .GetConfigRaw
	I0923 23:59:50.717437   26218 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0923 23:59:50.717627   26218 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0923 23:59:50.717783   26218 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0923 23:59:50.717794   26218 main.go:141] libmachine: (ha-959539) Calling .GetState
	I0923 23:59:50.719003   26218 main.go:141] libmachine: Detecting operating system of created instance...
	I0923 23:59:50.719017   26218 main.go:141] libmachine: Waiting for SSH to be available...
	I0923 23:59:50.719040   26218 main.go:141] libmachine: Getting to WaitForSSH function...
	I0923 23:59:50.719051   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0923 23:59:50.721609   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:50.721907   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0923 23:59:50.721928   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:50.722195   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0923 23:59:50.722412   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0923 23:59:50.722565   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0923 23:59:50.722658   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0923 23:59:50.722805   26218 main.go:141] libmachine: Using SSH client type: native
	I0923 23:59:50.723011   26218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0923 23:59:50.723021   26218 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0923 23:59:50.835498   26218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 23:59:50.835520   26218 main.go:141] libmachine: Detecting the provisioner...
	I0923 23:59:50.835527   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0923 23:59:50.838284   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:50.838621   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0923 23:59:50.838642   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:50.838906   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0923 23:59:50.839085   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0923 23:59:50.839257   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0923 23:59:50.839424   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0923 23:59:50.839565   26218 main.go:141] libmachine: Using SSH client type: native
	I0923 23:59:50.839743   26218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0923 23:59:50.839754   26218 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0923 23:59:50.953371   26218 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0923 23:59:50.953486   26218 main.go:141] libmachine: found compatible host: buildroot
	I0923 23:59:50.953499   26218 main.go:141] libmachine: Provisioning with buildroot...
	I0923 23:59:50.953509   26218 main.go:141] libmachine: (ha-959539) Calling .GetMachineName
	I0923 23:59:50.953724   26218 buildroot.go:166] provisioning hostname "ha-959539"
	I0923 23:59:50.953757   26218 main.go:141] libmachine: (ha-959539) Calling .GetMachineName
	I0923 23:59:50.953954   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0923 23:59:50.956724   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:50.957082   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0923 23:59:50.957105   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:50.957309   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0923 23:59:50.957497   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0923 23:59:50.957638   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0923 23:59:50.957763   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0923 23:59:50.957932   26218 main.go:141] libmachine: Using SSH client type: native
	I0923 23:59:50.958118   26218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0923 23:59:50.958139   26218 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-959539 && echo "ha-959539" | sudo tee /etc/hostname
	I0923 23:59:51.087322   26218 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-959539
	
	I0923 23:59:51.087357   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0923 23:59:51.090134   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:51.090488   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0923 23:59:51.090514   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:51.090720   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0923 23:59:51.090906   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0923 23:59:51.091125   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0923 23:59:51.091383   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0923 23:59:51.091616   26218 main.go:141] libmachine: Using SSH client type: native
	I0923 23:59:51.091783   26218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0923 23:59:51.091798   26218 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-959539' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-959539/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-959539' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 23:59:51.216710   26218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 23:59:51.216741   26218 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19696-7623/.minikube CaCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19696-7623/.minikube}
	I0923 23:59:51.216763   26218 buildroot.go:174] setting up certificates
	I0923 23:59:51.216772   26218 provision.go:84] configureAuth start
	I0923 23:59:51.216781   26218 main.go:141] libmachine: (ha-959539) Calling .GetMachineName
	I0923 23:59:51.217050   26218 main.go:141] libmachine: (ha-959539) Calling .GetIP
	I0923 23:59:51.219973   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:51.220311   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0923 23:59:51.220350   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:51.220472   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0923 23:59:51.223154   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:51.223541   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0923 23:59:51.223574   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:51.223732   26218 provision.go:143] copyHostCerts
	I0923 23:59:51.223760   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0923 23:59:51.223790   26218 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem, removing ...
	I0923 23:59:51.223807   26218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0923 23:59:51.223875   26218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem (1082 bytes)
	I0923 23:59:51.223951   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0923 23:59:51.223969   26218 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem, removing ...
	I0923 23:59:51.223976   26218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0923 23:59:51.223999   26218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem (1123 bytes)
	I0923 23:59:51.224038   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0923 23:59:51.224055   26218 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem, removing ...
	I0923 23:59:51.224060   26218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0923 23:59:51.224079   26218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem (1679 bytes)
	I0923 23:59:51.224140   26218 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem org=jenkins.ha-959539 san=[127.0.0.1 192.168.39.231 ha-959539 localhost minikube]
	I0923 23:59:51.458115   26218 provision.go:177] copyRemoteCerts
	I0923 23:59:51.458172   26218 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 23:59:51.458199   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0923 23:59:51.461001   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:51.461333   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0923 23:59:51.461358   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:51.461510   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0923 23:59:51.461701   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0923 23:59:51.461849   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0923 23:59:51.461970   26218 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/id_rsa Username:docker}
	I0923 23:59:51.550490   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0923 23:59:51.550562   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0923 23:59:51.574382   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0923 23:59:51.574471   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0923 23:59:51.597413   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0923 23:59:51.597507   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0923 23:59:51.620181   26218 provision.go:87] duration metric: took 403.395464ms to configureAuth
	I0923 23:59:51.620213   26218 buildroot.go:189] setting minikube options for container-runtime
	I0923 23:59:51.620452   26218 config.go:182] Loaded profile config "ha-959539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 23:59:51.620525   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0923 23:59:51.623330   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:51.623655   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0923 23:59:51.623683   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:51.623826   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0923 23:59:51.624031   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0923 23:59:51.624209   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0923 23:59:51.624360   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0923 23:59:51.624502   26218 main.go:141] libmachine: Using SSH client type: native
	I0923 23:59:51.624659   26218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0923 23:59:51.624677   26218 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0923 23:59:51.851847   26218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0923 23:59:51.851876   26218 main.go:141] libmachine: Checking connection to Docker...
	I0923 23:59:51.851883   26218 main.go:141] libmachine: (ha-959539) Calling .GetURL
	I0923 23:59:51.853119   26218 main.go:141] libmachine: (ha-959539) DBG | Using libvirt version 6000000
	I0923 23:59:51.855099   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:51.855420   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0923 23:59:51.855446   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:51.855586   26218 main.go:141] libmachine: Docker is up and running!
	I0923 23:59:51.855598   26218 main.go:141] libmachine: Reticulating splines...
	I0923 23:59:51.855605   26218 client.go:171] duration metric: took 24.959018357s to LocalClient.Create
	I0923 23:59:51.855625   26218 start.go:167] duration metric: took 24.959098074s to libmachine.API.Create "ha-959539"
	I0923 23:59:51.855634   26218 start.go:293] postStartSetup for "ha-959539" (driver="kvm2")
	I0923 23:59:51.855643   26218 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 23:59:51.855656   26218 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0923 23:59:51.855887   26218 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 23:59:51.855913   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0923 23:59:51.858133   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:51.858438   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0923 23:59:51.858461   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:51.858627   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0923 23:59:51.858801   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0923 23:59:51.858953   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0923 23:59:51.859096   26218 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/id_rsa Username:docker}
	I0923 23:59:51.946855   26218 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 23:59:51.950980   26218 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 23:59:51.951009   26218 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/addons for local assets ...
	I0923 23:59:51.951065   26218 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/files for local assets ...
	I0923 23:59:51.951158   26218 filesync.go:149] local asset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> 147932.pem in /etc/ssl/certs
	I0923 23:59:51.951168   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> /etc/ssl/certs/147932.pem
	I0923 23:59:51.951319   26218 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 23:59:51.960703   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /etc/ssl/certs/147932.pem (1708 bytes)
	I0923 23:59:51.984127   26218 start.go:296] duration metric: took 128.479072ms for postStartSetup
	I0923 23:59:51.984203   26218 main.go:141] libmachine: (ha-959539) Calling .GetConfigRaw
	I0923 23:59:51.984890   26218 main.go:141] libmachine: (ha-959539) Calling .GetIP
	I0923 23:59:51.987429   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:51.987719   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0923 23:59:51.987746   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:51.987964   26218 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/config.json ...
	I0923 23:59:51.988154   26218 start.go:128] duration metric: took 25.109799181s to createHost
	I0923 23:59:51.988175   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0923 23:59:51.990588   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:51.990906   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0923 23:59:51.990929   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:51.991056   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0923 23:59:51.991238   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0923 23:59:51.991353   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0923 23:59:51.991456   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0923 23:59:51.991563   26218 main.go:141] libmachine: Using SSH client type: native
	I0923 23:59:51.991778   26218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0923 23:59:51.991794   26218 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 23:59:52.105105   26218 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727135992.084651186
	
	I0923 23:59:52.105126   26218 fix.go:216] guest clock: 1727135992.084651186
	I0923 23:59:52.105133   26218 fix.go:229] Guest: 2024-09-23 23:59:52.084651186 +0000 UTC Remote: 2024-09-23 23:59:51.988165076 +0000 UTC m=+25.216110625 (delta=96.48611ms)
	I0923 23:59:52.105151   26218 fix.go:200] guest clock delta is within tolerance: 96.48611ms
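The fix.go lines above parse the guest's `date +%s.%N` output, compare it with the host clock, and accept the machine when the delta is small. A minimal Go sketch of that check, under the assumption of a one-second tolerance and a hypothetical helper name (neither is minikube's actual constant):

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// guestClockDelta parses the output of `date +%s.%N` run on the guest and
// returns how far the guest clock is from the given host reference time.
func guestClockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, fmt.Errorf("parsing guest clock %q: %w", guestOut, err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	// Values shaped like the log above; the reference time is illustrative.
	delta, err := guestClockDelta("1727135992.084651186", time.Unix(1727135991, 988165076))
	if err != nil {
		panic(err)
	}
	tolerance := 1 * time.Second // assumed tolerance, not minikube's value
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance\n", delta)
	}
}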
	I0923 23:59:52.105156   26218 start.go:83] releasing machines lock for "ha-959539", held for 25.226882318s
	I0923 23:59:52.105171   26218 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0923 23:59:52.105409   26218 main.go:141] libmachine: (ha-959539) Calling .GetIP
	I0923 23:59:52.108347   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:52.108704   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0923 23:59:52.108728   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:52.108925   26218 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0923 23:59:52.109448   26218 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0923 23:59:52.109621   26218 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0923 23:59:52.109725   26218 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 23:59:52.109775   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0923 23:59:52.109834   26218 ssh_runner.go:195] Run: cat /version.json
	I0923 23:59:52.109859   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0923 23:59:52.112538   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:52.112714   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:52.112781   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0923 23:59:52.112818   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:52.112933   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0923 23:59:52.113055   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0923 23:59:52.113086   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:52.113164   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0923 23:59:52.113281   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0923 23:59:52.113341   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0923 23:59:52.113438   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0923 23:59:52.113503   26218 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/id_rsa Username:docker}
	I0923 23:59:52.113559   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0923 23:59:52.113735   26218 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/id_rsa Username:docker}
	I0923 23:59:52.193560   26218 ssh_runner.go:195] Run: systemctl --version
	I0923 23:59:52.235438   26218 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0923 23:59:52.389606   26218 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 23:59:52.396083   26218 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 23:59:52.396147   26218 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 23:59:52.413066   26218 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 23:59:52.413095   26218 start.go:495] detecting cgroup driver to use...
	I0923 23:59:52.413158   26218 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 23:59:52.429335   26218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 23:59:52.443813   26218 docker.go:217] disabling cri-docker service (if available) ...
	I0923 23:59:52.443866   26218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 23:59:52.457675   26218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 23:59:52.471149   26218 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 23:59:52.585355   26218 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 23:59:52.737118   26218 docker.go:233] disabling docker service ...
	I0923 23:59:52.737174   26218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 23:59:52.752411   26218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 23:59:52.765194   26218 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 23:59:52.901170   26218 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 23:59:53.018250   26218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 23:59:53.031932   26218 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 23:59:53.049015   26218 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0923 23:59:53.049085   26218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 23:59:53.058948   26218 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0923 23:59:53.059015   26218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 23:59:53.069147   26218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 23:59:53.079197   26218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 23:59:53.089022   26218 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 23:59:53.100410   26218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 23:59:53.111370   26218 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 23:59:53.128755   26218 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 23:59:53.138944   26218 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 23:59:53.149267   26218 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 23:59:53.149363   26218 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 23:59:53.163279   26218 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 23:59:53.173965   26218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 23:59:53.305956   26218 ssh_runner.go:195] Run: sudo systemctl restart crio
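The sequence above rewrites the CRI-O drop-in config with a series of sed commands (pause image, cgroup manager, conmon cgroup, sysctls) and then restarts the service. A rough Go sketch of the same kind of in-place rewrite for two of those keys; the file name in main is hypothetical, whereas the real run edits /etc/crio/crio.conf.d/02-crio.conf over SSH:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// rewriteCrioConf applies the same style of edit as the sed commands in the
// log: point pause_image at a specific image and force a cgroup manager.
func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
	return os.WriteFile(path, out, 0644)
}

func main() {
	// Hypothetical local copy of the drop-in config.
	if err := rewriteCrioConf("02-crio.conf", "registry.k8s.io/pause:3.10", "cgroupfs"); err != nil {
		panic(err)
	}
	fmt.Println("rewrote cri-o drop-in config")
}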
	I0923 23:59:53.410170   26218 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0923 23:59:53.410232   26218 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0923 23:59:53.415034   26218 start.go:563] Will wait 60s for crictl version
	I0923 23:59:53.415112   26218 ssh_runner.go:195] Run: which crictl
	I0923 23:59:53.418927   26218 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 23:59:53.464205   26218 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0923 23:59:53.464285   26218 ssh_runner.go:195] Run: crio --version
	I0923 23:59:53.494495   26218 ssh_runner.go:195] Run: crio --version
	I0923 23:59:53.523488   26218 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0923 23:59:53.524781   26218 main.go:141] libmachine: (ha-959539) Calling .GetIP
	I0923 23:59:53.527608   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:53.527945   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0923 23:59:53.527972   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0923 23:59:53.528223   26218 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0923 23:59:53.532189   26218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
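The bash one-liner above keeps the /etc/hosts edit idempotent: drop any existing host.minikube.internal mapping, then append the current one. A small Go sketch of that pattern; the paths are illustrative and it writes to a scratch file rather than the real /etc/hosts:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry returns hosts-file content with exactly one line mapping
// hostname to ip, replacing any previous mapping for that hostname.
func ensureHostsEntry(content, ip, hostname string) string {
	var kept []string
	for _, line := range strings.Split(content, "\n") {
		if strings.HasSuffix(strings.TrimRight(line, " "), "\t"+hostname) {
			continue // drop the stale mapping, like the grep -v above
		}
		kept = append(kept, line)
	}
	for len(kept) > 0 && kept[len(kept)-1] == "" {
		kept = kept[:len(kept)-1] // avoid growing the file with blank lines
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname))
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	orig := "127.0.0.1\tlocalhost\n192.168.39.2\thost.minikube.internal\n"
	updated := ensureHostsEntry(orig, "192.168.39.1", "host.minikube.internal")
	// Written to a scratch file here instead of /etc/hosts.
	if err := os.WriteFile("/tmp/hosts.updated", []byte(updated), 0644); err != nil {
		panic(err)
	}
	fmt.Print(updated)
}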
	I0923 23:59:53.544235   26218 kubeadm.go:883] updating cluster {Name:ha-959539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-959539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 23:59:53.544347   26218 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 23:59:53.544395   26218 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 23:59:53.574815   26218 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0923 23:59:53.574879   26218 ssh_runner.go:195] Run: which lz4
	I0923 23:59:53.578616   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0923 23:59:53.578693   26218 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0923 23:59:53.582683   26218 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0923 23:59:53.582711   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0923 23:59:54.823072   26218 crio.go:462] duration metric: took 1.244398494s to copy over tarball
	I0923 23:59:54.823158   26218 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0923 23:59:56.834165   26218 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.010972437s)
	I0923 23:59:56.834200   26218 crio.go:469] duration metric: took 2.011094658s to extract the tarball
	I0923 23:59:56.834211   26218 ssh_runner.go:146] rm: /preloaded.tar.lz4
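The preload step above scps the .tar.lz4 image bundle onto the node, unpacks it with tar -I lz4, and times the operation for the "duration metric" lines. A minimal Go sketch of that extract-and-time step, assuming hypothetical local paths and tar/lz4 on PATH instead of an SSH session:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Mirrors the tar invocation in the log, but against local scratch paths.
	start := time.Now()
	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/tmp/preload-target", "-xf", "/tmp/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("extract failed: %v\n%s", err, out))
	}
	// The log's "duration metric" lines come from timing steps like this one.
	fmt.Printf("duration metric: took %s to extract the tarball\n", time.Since(start))
}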
	I0923 23:59:56.870476   26218 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 23:59:56.915807   26218 crio.go:514] all images are preloaded for cri-o runtime.
	I0923 23:59:56.915830   26218 cache_images.go:84] Images are preloaded, skipping loading
	I0923 23:59:56.915839   26218 kubeadm.go:934] updating node { 192.168.39.231 8443 v1.31.1 crio true true} ...
	I0923 23:59:56.915955   26218 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-959539 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.231
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-959539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 23:59:56.916032   26218 ssh_runner.go:195] Run: crio config
	I0923 23:59:56.959047   26218 cni.go:84] Creating CNI manager for ""
	I0923 23:59:56.959065   26218 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0923 23:59:56.959075   26218 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 23:59:56.959102   26218 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.231 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-959539 NodeName:ha-959539 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.231"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.231 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 23:59:56.959278   26218 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.231
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-959539"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.231
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.231"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
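The kubeadm config dumped above is a single file holding four API objects separated by "---": InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. A short sketch that walks such a multi-document file and reports each object's kind, using the third-party gopkg.in/yaml.v3 package; the kubeadm.yaml path is hypothetical:

package main

import (
	"bytes"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Hypothetical local copy of the generated config shown in the log.
	data, err := os.ReadFile("kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	dec := yaml.NewDecoder(bytes.NewReader(data))
	for {
		var meta struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&meta); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s (%s)\n", meta.Kind, meta.APIVersion)
	}
}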
	
	I0923 23:59:56.959306   26218 kube-vip.go:115] generating kube-vip config ...
	I0923 23:59:56.959355   26218 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0923 23:59:56.975413   26218 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0923 23:59:56.975538   26218 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
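Generating the kube-vip config above amounts to rendering a static Pod manifest with the VIP address, interface and lease settings filled in. A toy text/template rendering of that idea, trimmed to a few of the fields shown in the log; it is not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// Trimmed stand-in for the kube-vip static Pod template; only a few of the
// environment variables from the log are represented.
const kubeVipTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: {{ .Image }}
    args: ["manager"]
    env:
    - name: address
      value: "{{ .VIP }}"
    - name: vip_interface
      value: "{{ .Interface }}"
    - name: port
      value: "8443"
  hostNetwork: true
`

func main() {
	params := struct {
		Image, VIP, Interface string
	}{
		Image:     "ghcr.io/kube-vip/kube-vip:v0.8.0",
		VIP:       "192.168.39.254",
		Interface: "eth0",
	}
	t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
	if err := t.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}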
	I0923 23:59:56.975609   26218 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 23:59:56.985748   26218 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 23:59:56.985816   26218 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0923 23:59:56.994858   26218 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0923 23:59:57.011080   26218 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 23:59:57.026929   26218 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0923 23:59:57.042586   26218 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0923 23:59:57.058931   26218 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0923 23:59:57.062598   26218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 23:59:57.074372   26218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 23:59:57.199368   26218 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 23:59:57.215790   26218 certs.go:68] Setting up /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539 for IP: 192.168.39.231
	I0923 23:59:57.215808   26218 certs.go:194] generating shared ca certs ...
	I0923 23:59:57.215839   26218 certs.go:226] acquiring lock for ca certs: {Name:mk91de42c259e96c17f08dd58c70c00821bd0735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:59:57.215971   26218 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key
	I0923 23:59:57.216007   26218 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key
	I0923 23:59:57.216016   26218 certs.go:256] generating profile certs ...
	I0923 23:59:57.216061   26218 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/client.key
	I0923 23:59:57.216073   26218 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/client.crt with IP's: []
	I0923 23:59:57.346653   26218 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/client.crt ...
	I0923 23:59:57.346676   26218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/client.crt: {Name:mkab4515ea7168cda846b9bfb46262aeaac2bc0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:59:57.346833   26218 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/client.key ...
	I0923 23:59:57.346843   26218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/client.key: {Name:mke7708261b70539d80260dff7c5f1bd958774aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:59:57.346914   26218 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key.34659c7b
	I0923 23:59:57.346929   26218 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt.34659c7b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.231 192.168.39.254]
	I0923 23:59:57.635327   26218 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt.34659c7b ...
	I0923 23:59:57.635354   26218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt.34659c7b: {Name:mk5117d1a9a492c25c6b0e468e2bf78a6f60d1d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:59:57.635505   26218 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key.34659c7b ...
	I0923 23:59:57.635516   26218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key.34659c7b: {Name:mk3539984a0fdd5eeb79a51663bcd250a224ff95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:59:57.635580   26218 certs.go:381] copying /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt.34659c7b -> /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt
	I0923 23:59:57.635646   26218 certs.go:385] copying /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key.34659c7b -> /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key
	I0923 23:59:57.635698   26218 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.key
	I0923 23:59:57.635711   26218 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.crt with IP's: []
	I0923 23:59:57.894945   26218 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.crt ...
	I0923 23:59:57.894975   26218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.crt: {Name:mkc0621f207c72302b780ca13cb5032341f4b069 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:59:57.895138   26218 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.key ...
	I0923 23:59:57.895150   26218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.key: {Name:mkf18d3b3341960faadac2faed03cef051112574 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
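The certs.go steps above sign profile certificates against the shared minikubeCA, embedding the service VIP and node IPs as SANs. A self-contained crypto/x509 sketch of that flow; it generates a throwaway CA in memory (the real run reuses the CA on disk) and the DNS names are illustrative:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for minikubeCA.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		panic(err)
	}

	// Server certificate with the IP SANs from the log above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "ha-959539", "control-plane.minikube.internal"},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.231"),
			net.ParseIP("192.168.39.254"),
		},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}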
	I0923 23:59:57.895217   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0923 23:59:57.895235   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0923 23:59:57.895245   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 23:59:57.895265   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0923 23:59:57.895277   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0923 23:59:57.895287   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0923 23:59:57.895299   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0923 23:59:57.895310   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0923 23:59:57.895353   26218 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem (1338 bytes)
	W0923 23:59:57.895393   26218 certs.go:480] ignoring /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793_empty.pem, impossibly tiny 0 bytes
	I0923 23:59:57.895403   26218 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem (1675 bytes)
	I0923 23:59:57.895425   26218 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem (1082 bytes)
	I0923 23:59:57.895449   26218 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem (1123 bytes)
	I0923 23:59:57.895469   26218 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem (1679 bytes)
	I0923 23:59:57.895505   26218 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem (1708 bytes)
	I0923 23:59:57.895531   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem -> /usr/share/ca-certificates/14793.pem
	I0923 23:59:57.895542   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> /usr/share/ca-certificates/147932.pem
	I0923 23:59:57.895555   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0923 23:59:57.896068   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 23:59:57.920516   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 23:59:57.944180   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 23:59:57.973439   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0923 23:59:58.001892   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0923 23:59:58.026752   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0923 23:59:58.049022   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 23:59:58.071861   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0923 23:59:58.094850   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem --> /usr/share/ca-certificates/14793.pem (1338 bytes)
	I0923 23:59:58.120029   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /usr/share/ca-certificates/147932.pem (1708 bytes)
	I0923 23:59:58.144719   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 23:59:58.174622   26218 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 23:59:58.192664   26218 ssh_runner.go:195] Run: openssl version
	I0923 23:59:58.198435   26218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14793.pem && ln -fs /usr/share/ca-certificates/14793.pem /etc/ssl/certs/14793.pem"
	I0923 23:59:58.208675   26218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14793.pem
	I0923 23:59:58.212997   26218 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 23:55 /usr/share/ca-certificates/14793.pem
	I0923 23:59:58.213048   26218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14793.pem
	I0923 23:59:58.218554   26218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14793.pem /etc/ssl/certs/51391683.0"
	I0923 23:59:58.228984   26218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147932.pem && ln -fs /usr/share/ca-certificates/147932.pem /etc/ssl/certs/147932.pem"
	I0923 23:59:58.239539   26218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147932.pem
	I0923 23:59:58.244140   26218 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 23:55 /usr/share/ca-certificates/147932.pem
	I0923 23:59:58.244200   26218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147932.pem
	I0923 23:59:58.249770   26218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147932.pem /etc/ssl/certs/3ec20f2e.0"
	I0923 23:59:58.260444   26218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 23:59:58.271376   26218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 23:59:58.276012   26218 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0923 23:59:58.276066   26218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 23:59:58.281610   26218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
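The openssl x509 -hash / ln -fs pairs above create the <subject-hash>.0 symlinks that OpenSSL expects in a CA directory. A small Go sketch of the same operation, shelling out to openssl; the paths in main are hypothetical and openssl must be on PATH:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash creates the <hash>.0 symlink used for CA lookups in a
// certificate directory, mirroring the ln -fs calls in the log.
func linkBySubjectHash(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", fmt.Errorf("computing subject hash: %w", err)
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	// Replace any stale link, like `ln -fs` does.
	_ = os.Remove(link)
	return link, os.Symlink(certPath, link)
}

func main() {
	// Hypothetical paths; the log links certs under /etc/ssl/certs instead.
	link, err := linkBySubjectHash("/tmp/minikubeCA.pem", "/tmp/certs")
	if err != nil {
		panic(err)
	}
	fmt.Println("created", link)
}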
	I0923 23:59:58.291931   26218 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 23:59:58.295609   26218 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 23:59:58.295656   26218 kubeadm.go:392] StartCluster: {Name:ha-959539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-959539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 23:59:58.295736   26218 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0923 23:59:58.295803   26218 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0923 23:59:58.331462   26218 cri.go:89] found id: ""
	I0923 23:59:58.331531   26218 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 23:59:58.341582   26218 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 23:59:58.351079   26218 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 23:59:58.360870   26218 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 23:59:58.360891   26218 kubeadm.go:157] found existing configuration files:
	
	I0923 23:59:58.360931   26218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 23:59:58.370007   26218 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 23:59:58.370064   26218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 23:59:58.379658   26218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 23:59:58.388923   26218 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 23:59:58.388982   26218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 23:59:58.398781   26218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 23:59:58.407722   26218 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 23:59:58.407786   26218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 23:59:58.417271   26218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 23:59:58.426264   26218 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 23:59:58.426322   26218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
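The grep/rm pairs above drop any leftover kubeconfig that does not point at https://control-plane.minikube.internal:8443 before kubeadm init runs. A compact Go sketch of that cleanup; the scratch paths in main stand in for /etc/kubernetes:

package main

import (
	"fmt"
	"os"
	"strings"
)

// removeStaleKubeconfigs deletes any of the given kubeconfig files that do not
// reference the expected control-plane endpoint; missing files are skipped,
// matching the "No such file or directory" cases in the log.
func removeStaleKubeconfigs(endpoint string, paths []string) error {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if os.IsNotExist(err) {
			continue
		} else if err != nil {
			return err
		}
		if !strings.Contains(string(data), endpoint) {
			fmt.Println("removing stale config:", p)
			if err := os.Remove(p); err != nil {
				return err
			}
		}
	}
	return nil
}

func main() {
	// Hypothetical scratch directory; the log operates on /etc/kubernetes.
	paths := []string{
		"/tmp/k8s/admin.conf",
		"/tmp/k8s/kubelet.conf",
		"/tmp/k8s/controller-manager.conf",
		"/tmp/k8s/scheduler.conf",
	}
	if err := removeStaleKubeconfigs("https://control-plane.minikube.internal:8443", paths); err != nil {
		panic(err)
	}
}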
	I0923 23:59:58.435999   26218 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0923 23:59:58.546770   26218 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0923 23:59:58.546896   26218 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 23:59:58.658868   26218 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 23:59:58.659029   26218 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 23:59:58.659118   26218 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 23:59:58.667816   26218 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 23:59:58.762200   26218 out.go:235]   - Generating certificates and keys ...
	I0923 23:59:58.762295   26218 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 23:59:58.762371   26218 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 23:59:58.762428   26218 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0923 23:59:58.931425   26218 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0923 23:59:59.169435   26218 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0923 23:59:59.368885   26218 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0923 23:59:59.910983   26218 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0923 23:59:59.911147   26218 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-959539 localhost] and IPs [192.168.39.231 127.0.0.1 ::1]
	I0924 00:00:00.027247   26218 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0924 00:00:00.027385   26218 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-959539 localhost] and IPs [192.168.39.231 127.0.0.1 ::1]
	I0924 00:00:00.408901   26218 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0924 00:00:00.695628   26218 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0924 00:00:01.084765   26218 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0924 00:00:01.084831   26218 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 00:00:01.198400   26218 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 00:00:01.455815   26218 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0924 00:00:01.707214   26218 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 00:00:01.761069   26218 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 00:00:01.868085   26218 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 00:00:01.868536   26218 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 00:00:01.872192   26218 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 00:00:01.874381   26218 out.go:235]   - Booting up control plane ...
	I0924 00:00:01.874504   26218 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 00:00:01.874578   26218 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 00:00:01.874634   26218 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 00:00:01.890454   26218 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 00:00:01.897634   26218 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 00:00:01.897699   26218 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 00:00:02.038440   26218 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0924 00:00:02.038603   26218 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0924 00:00:02.541646   26218 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 503.471901ms
	I0924 00:00:02.541770   26218 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0924 00:00:11.738795   26218 kubeadm.go:310] [api-check] The API server is healthy after 9.198818169s
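The kubelet-check and api-check phases above poll a /healthz endpoint until it answers 200 OK or a deadline expires. A minimal Go sketch of that wait loop; the endpoint and the 4m0s budget match the log, while the poll interval is an assumption:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls a /healthz-style endpoint until it returns 200 OK or the
// deadline passes, roughly what the kubelet-check and api-check phases do.
func waitHealthy(url string, interval, timeout time.Duration) error {
	client := &http.Client{Timeout: 2 * time.Second}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("%s not healthy after %v", url, timeout)
}

func main() {
	if err := waitHealthy("http://127.0.0.1:10248/healthz", 500*time.Millisecond, 4*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kubelet is healthy")
}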
	I0924 00:00:11.752392   26218 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0924 00:00:11.768902   26218 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0924 00:00:11.811138   26218 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0924 00:00:11.811397   26218 kubeadm.go:310] [mark-control-plane] Marking the node ha-959539 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0924 00:00:11.828918   26218 kubeadm.go:310] [bootstrap-token] Using token: a2tynl.1ohol4x4auhbv6gq
	I0924 00:00:11.830685   26218 out.go:235]   - Configuring RBAC rules ...
	I0924 00:00:11.830831   26218 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0924 00:00:11.844590   26218 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0924 00:00:11.854514   26218 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0924 00:00:11.858483   26218 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0924 00:00:11.862691   26218 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0924 00:00:11.866723   26218 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0924 00:00:12.143692   26218 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0924 00:00:12.683818   26218 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0924 00:00:13.148491   26218 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0924 00:00:13.149475   26218 kubeadm.go:310] 
	I0924 00:00:13.149539   26218 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0924 00:00:13.149548   26218 kubeadm.go:310] 
	I0924 00:00:13.149650   26218 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0924 00:00:13.149658   26218 kubeadm.go:310] 
	I0924 00:00:13.149681   26218 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0924 00:00:13.149743   26218 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0924 00:00:13.149832   26218 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0924 00:00:13.149862   26218 kubeadm.go:310] 
	I0924 00:00:13.149949   26218 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0924 00:00:13.149959   26218 kubeadm.go:310] 
	I0924 00:00:13.150027   26218 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0924 00:00:13.150036   26218 kubeadm.go:310] 
	I0924 00:00:13.150112   26218 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0924 00:00:13.150219   26218 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0924 00:00:13.150313   26218 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0924 00:00:13.150324   26218 kubeadm.go:310] 
	I0924 00:00:13.150430   26218 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0924 00:00:13.150539   26218 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0924 00:00:13.150551   26218 kubeadm.go:310] 
	I0924 00:00:13.150661   26218 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token a2tynl.1ohol4x4auhbv6gq \
	I0924 00:00:13.150808   26218 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2d4dd9f37d73db6513005d0e16c5a35989d3b102eed8cac0bf67b360d3396aa8 \
	I0924 00:00:13.150846   26218 kubeadm.go:310] 	--control-plane 
	I0924 00:00:13.150856   26218 kubeadm.go:310] 
	I0924 00:00:13.150970   26218 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0924 00:00:13.150989   26218 kubeadm.go:310] 
	I0924 00:00:13.151100   26218 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token a2tynl.1ohol4x4auhbv6gq \
	I0924 00:00:13.151239   26218 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2d4dd9f37d73db6513005d0e16c5a35989d3b102eed8cac0bf67b360d3396aa8 
	I0924 00:00:13.152162   26218 kubeadm.go:310] W0923 23:59:58.529397     836 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 00:00:13.152583   26218 kubeadm.go:310] W0923 23:59:58.530304     836 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 00:00:13.152731   26218 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
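	(Annotation: the two API-deprecation warnings and the kubelet service warning above are non-fatal for this run. kubeadm's own migrate subcommand, named in the warning text, rewrites a v1beta3 config to the current API version, and enabling the kubelet unit silences the service warning; a sketch, with old.yaml/new.yaml kept as the placeholder paths from the warning itself.)
	    # rewrite the deprecated kubeadm.k8s.io/v1beta3 spec to the newer API version
	    $ kubeadm config migrate --old-config old.yaml --new-config new.yaml
	    # make the kubelet start on boot, as [WARNING Service-Kubelet] suggests
	    $ sudo systemctl enable kubelet.service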
	I0924 00:00:13.152765   26218 cni.go:84] Creating CNI manager for ""
	I0924 00:00:13.152776   26218 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0924 00:00:13.154438   26218 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0924 00:00:13.155646   26218 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0924 00:00:13.161171   26218 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0924 00:00:13.161193   26218 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0924 00:00:13.184460   26218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
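	(Annotation: the manifest applied above is minikube's kindnet CNI for multi-node clusters. A quick way to confirm it rolled out; the DaemonSet name and label are assumptions based on the stock kindnet manifest, and the kubeconfig context is assumed to match the profile name.)
	    $ kubectl --context ha-959539 -n kube-system get daemonset kindnet
	    $ kubectl --context ha-959539 -n kube-system get pods -l app=kindnet -o wide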
	I0924 00:00:13.668553   26218 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 00:00:13.668646   26218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 00:00:13.668716   26218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-959539 minikube.k8s.io/updated_at=2024_09_24T00_00_13_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c minikube.k8s.io/name=ha-959539 minikube.k8s.io/primary=true
	I0924 00:00:13.906100   26218 ops.go:34] apiserver oom_adj: -16
	I0924 00:00:13.906236   26218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 00:00:14.026723   26218 kubeadm.go:1113] duration metric: took 358.135167ms to wait for elevateKubeSystemPrivileges
	I0924 00:00:14.026757   26218 kubeadm.go:394] duration metric: took 15.731103406s to StartCluster
	I0924 00:00:14.026778   26218 settings.go:142] acquiring lock: {Name:mk2498060bfcc1414033a486e76a223de88ed325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:00:14.026862   26218 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 00:00:14.027452   26218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/kubeconfig: {Name:mk3b29105ccc2e600682c419fcfd45ce5d22b5fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:00:14.027658   26218 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 00:00:14.027668   26218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0924 00:00:14.027688   26218 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0924 00:00:14.027758   26218 addons.go:69] Setting storage-provisioner=true in profile "ha-959539"
	I0924 00:00:14.027782   26218 addons.go:234] Setting addon storage-provisioner=true in "ha-959539"
	I0924 00:00:14.027808   26218 host.go:66] Checking if "ha-959539" exists ...
	I0924 00:00:14.027677   26218 start.go:241] waiting for startup goroutines ...
	I0924 00:00:14.027850   26218 addons.go:69] Setting default-storageclass=true in profile "ha-959539"
	I0924 00:00:14.027872   26218 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-959539"
	I0924 00:00:14.027940   26218 config.go:182] Loaded profile config "ha-959539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 00:00:14.028248   26218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:00:14.028262   26218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:00:14.028289   26218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:00:14.028388   26218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:00:14.043826   26218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42945
	I0924 00:00:14.043826   26218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40157
	I0924 00:00:14.044412   26218 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:00:14.044444   26218 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:00:14.044897   26218 main.go:141] libmachine: Using API Version  1
	I0924 00:00:14.044921   26218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:00:14.045026   26218 main.go:141] libmachine: Using API Version  1
	I0924 00:00:14.045048   26218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:00:14.045272   26218 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:00:14.045342   26218 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:00:14.045440   26218 main.go:141] libmachine: (ha-959539) Calling .GetState
	I0924 00:00:14.045899   26218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:00:14.045941   26218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:00:14.047486   26218 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 00:00:14.047712   26218 kapi.go:59] client config for ha-959539: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/client.crt", KeyFile:"/home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/client.key", CAFile:"/home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}

	I0924 00:00:14.048174   26218 cert_rotation.go:140] Starting client certificate rotation controller
	I0924 00:00:14.048284   26218 addons.go:234] Setting addon default-storageclass=true in "ha-959539"
	I0924 00:00:14.048319   26218 host.go:66] Checking if "ha-959539" exists ...
	I0924 00:00:14.048595   26218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:00:14.048634   26218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:00:14.062043   26218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33531
	I0924 00:00:14.062493   26218 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:00:14.063046   26218 main.go:141] libmachine: Using API Version  1
	I0924 00:00:14.063070   26218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:00:14.063429   26218 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:00:14.063717   26218 main.go:141] libmachine: (ha-959539) Calling .GetState
	I0924 00:00:14.064022   26218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36859
	I0924 00:00:14.064526   26218 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:00:14.064977   26218 main.go:141] libmachine: Using API Version  1
	I0924 00:00:14.065001   26218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:00:14.065303   26218 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:00:14.065800   26218 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0924 00:00:14.065914   26218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:00:14.065960   26218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:00:14.067886   26218 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 00:00:14.069203   26218 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 00:00:14.069223   26218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 00:00:14.069245   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0924 00:00:14.072558   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:00:14.072961   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0924 00:00:14.072982   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:00:14.073163   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0924 00:00:14.073338   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0924 00:00:14.073491   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0924 00:00:14.073620   26218 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/id_rsa Username:docker}
	I0924 00:00:14.082767   26218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42827
	I0924 00:00:14.083265   26218 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:00:14.083864   26218 main.go:141] libmachine: Using API Version  1
	I0924 00:00:14.083889   26218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:00:14.084221   26218 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:00:14.084481   26218 main.go:141] libmachine: (ha-959539) Calling .GetState
	I0924 00:00:14.086186   26218 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0924 00:00:14.086413   26218 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 00:00:14.086430   26218 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 00:00:14.086447   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0924 00:00:14.089541   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:00:14.089980   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0924 00:00:14.090010   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:00:14.090151   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0924 00:00:14.090333   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0924 00:00:14.090551   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0924 00:00:14.090735   26218 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/id_rsa Username:docker}
	I0924 00:00:14.208938   26218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0924 00:00:14.243343   26218 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 00:00:14.328202   26218 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 00:00:14.719009   26218 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
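	(Annotation: the sed pipeline above injects a hosts{} stanza into the coredns ConfigMap so that host.minikube.internal resolves to 192.168.39.1 inside the cluster; the resulting Corefile can be inspected with the command below, assuming the kubeconfig context matches the profile name.)
	    $ kubectl --context ha-959539 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'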
	I0924 00:00:15.026630   26218 main.go:141] libmachine: Making call to close driver server
	I0924 00:00:15.026666   26218 main.go:141] libmachine: (ha-959539) Calling .Close
	I0924 00:00:15.026684   26218 main.go:141] libmachine: Making call to close driver server
	I0924 00:00:15.026706   26218 main.go:141] libmachine: (ha-959539) Calling .Close
	I0924 00:00:15.026978   26218 main.go:141] libmachine: Successfully made call to close driver server
	I0924 00:00:15.027033   26218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 00:00:15.027049   26218 main.go:141] libmachine: Making call to close driver server
	I0924 00:00:15.027059   26218 main.go:141] libmachine: (ha-959539) Calling .Close
	I0924 00:00:15.027104   26218 main.go:141] libmachine: (ha-959539) DBG | Closing plugin on server side
	I0924 00:00:15.027152   26218 main.go:141] libmachine: Successfully made call to close driver server
	I0924 00:00:15.027174   26218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 00:00:15.027183   26218 main.go:141] libmachine: Making call to close driver server
	I0924 00:00:15.027191   26218 main.go:141] libmachine: (ha-959539) Calling .Close
	I0924 00:00:15.027272   26218 main.go:141] libmachine: Successfully made call to close driver server
	I0924 00:00:15.027294   26218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 00:00:15.027390   26218 main.go:141] libmachine: Successfully made call to close driver server
	I0924 00:00:15.027404   26218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 00:00:15.027434   26218 main.go:141] libmachine: (ha-959539) DBG | Closing plugin on server side
	I0924 00:00:15.027454   26218 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0924 00:00:15.027470   26218 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0924 00:00:15.027568   26218 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0924 00:00:15.027574   26218 round_trippers.go:469] Request Headers:
	I0924 00:00:15.027581   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:00:15.027585   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:00:15.042627   26218 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0924 00:00:15.043249   26218 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0924 00:00:15.043266   26218 round_trippers.go:469] Request Headers:
	I0924 00:00:15.043284   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:00:15.043295   26218 round_trippers.go:473]     Content-Type: application/json
	I0924 00:00:15.043300   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:00:15.047076   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:00:15.047250   26218 main.go:141] libmachine: Making call to close driver server
	I0924 00:00:15.047265   26218 main.go:141] libmachine: (ha-959539) Calling .Close
	I0924 00:00:15.047499   26218 main.go:141] libmachine: Successfully made call to close driver server
	I0924 00:00:15.047522   26218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 00:00:15.049462   26218 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0924 00:00:15.050768   26218 addons.go:510] duration metric: took 1.023080124s for enable addons: enabled=[storage-provisioner default-storageclass]
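	(Annotation: the same addon state can be inspected or changed later from the CLI against this profile; example commands, not run by the test.)
	    $ minikube -p ha-959539 addons list                        # storage-provisioner and default-storageclass show as enabled
	    $ minikube -p ha-959539 addons disable storage-provisioner # example toggle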
	I0924 00:00:15.050804   26218 start.go:246] waiting for cluster config update ...
	I0924 00:00:15.050819   26218 start.go:255] writing updated cluster config ...
	I0924 00:00:15.052488   26218 out.go:201] 
	I0924 00:00:15.054069   26218 config.go:182] Loaded profile config "ha-959539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 00:00:15.054138   26218 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/config.json ...
	I0924 00:00:15.056020   26218 out.go:177] * Starting "ha-959539-m02" control-plane node in "ha-959539" cluster
	I0924 00:00:15.057275   26218 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 00:00:15.057294   26218 cache.go:56] Caching tarball of preloaded images
	I0924 00:00:15.057386   26218 preload.go:172] Found /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0924 00:00:15.057396   26218 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0924 00:00:15.057456   26218 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/config.json ...
	I0924 00:00:15.057614   26218 start.go:360] acquireMachinesLock for ha-959539-m02: {Name:mkdd0eb053efc5a47fd01c8410d7f603dccb8d0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 00:00:15.057654   26218 start.go:364] duration metric: took 22.109µs to acquireMachinesLock for "ha-959539-m02"
	I0924 00:00:15.057669   26218 start.go:93] Provisioning new machine with config: &{Name:ha-959539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-959539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 00:00:15.057726   26218 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0924 00:00:15.059302   26218 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0924 00:00:15.059377   26218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:00:15.059408   26218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:00:15.074812   26218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45463
	I0924 00:00:15.075196   26218 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:00:15.075683   26218 main.go:141] libmachine: Using API Version  1
	I0924 00:00:15.075703   26218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:00:15.076029   26218 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:00:15.076222   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetMachineName
	I0924 00:00:15.076403   26218 main.go:141] libmachine: (ha-959539-m02) Calling .DriverName
	I0924 00:00:15.076562   26218 start.go:159] libmachine.API.Create for "ha-959539" (driver="kvm2")
	I0924 00:00:15.076593   26218 client.go:168] LocalClient.Create starting
	I0924 00:00:15.076633   26218 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem
	I0924 00:00:15.076673   26218 main.go:141] libmachine: Decoding PEM data...
	I0924 00:00:15.076695   26218 main.go:141] libmachine: Parsing certificate...
	I0924 00:00:15.076755   26218 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem
	I0924 00:00:15.076782   26218 main.go:141] libmachine: Decoding PEM data...
	I0924 00:00:15.076796   26218 main.go:141] libmachine: Parsing certificate...
	I0924 00:00:15.076816   26218 main.go:141] libmachine: Running pre-create checks...
	I0924 00:00:15.076827   26218 main.go:141] libmachine: (ha-959539-m02) Calling .PreCreateCheck
	I0924 00:00:15.076957   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetConfigRaw
	I0924 00:00:15.077329   26218 main.go:141] libmachine: Creating machine...
	I0924 00:00:15.077346   26218 main.go:141] libmachine: (ha-959539-m02) Calling .Create
	I0924 00:00:15.077491   26218 main.go:141] libmachine: (ha-959539-m02) Creating KVM machine...
	I0924 00:00:15.078735   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found existing default KVM network
	I0924 00:00:15.078908   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found existing private KVM network mk-ha-959539
	I0924 00:00:15.079005   26218 main.go:141] libmachine: (ha-959539-m02) Setting up store path in /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m02 ...
	I0924 00:00:15.079050   26218 main.go:141] libmachine: (ha-959539-m02) Building disk image from file:///home/jenkins/minikube-integration/19696-7623/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0924 00:00:15.079067   26218 main.go:141] libmachine: (ha-959539-m02) DBG | I0924 00:00:15.078949   26566 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19696-7623/.minikube
	I0924 00:00:15.079117   26218 main.go:141] libmachine: (ha-959539-m02) Downloading /home/jenkins/minikube-integration/19696-7623/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19696-7623/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0924 00:00:15.323293   26218 main.go:141] libmachine: (ha-959539-m02) DBG | I0924 00:00:15.323139   26566 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m02/id_rsa...
	I0924 00:00:15.574063   26218 main.go:141] libmachine: (ha-959539-m02) DBG | I0924 00:00:15.573935   26566 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m02/ha-959539-m02.rawdisk...
	I0924 00:00:15.574096   26218 main.go:141] libmachine: (ha-959539-m02) DBG | Writing magic tar header
	I0924 00:00:15.574106   26218 main.go:141] libmachine: (ha-959539-m02) DBG | Writing SSH key tar header
	I0924 00:00:15.574114   26218 main.go:141] libmachine: (ha-959539-m02) DBG | I0924 00:00:15.574047   26566 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m02 ...
	I0924 00:00:15.574234   26218 main.go:141] libmachine: (ha-959539-m02) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m02 (perms=drwx------)
	I0924 00:00:15.574263   26218 main.go:141] libmachine: (ha-959539-m02) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623/.minikube/machines (perms=drwxr-xr-x)
	I0924 00:00:15.574274   26218 main.go:141] libmachine: (ha-959539-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m02
	I0924 00:00:15.574301   26218 main.go:141] libmachine: (ha-959539-m02) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623/.minikube (perms=drwxr-xr-x)
	I0924 00:00:15.574318   26218 main.go:141] libmachine: (ha-959539-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623/.minikube/machines
	I0924 00:00:15.574331   26218 main.go:141] libmachine: (ha-959539-m02) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623 (perms=drwxrwxr-x)
	I0924 00:00:15.574341   26218 main.go:141] libmachine: (ha-959539-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0924 00:00:15.574351   26218 main.go:141] libmachine: (ha-959539-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0924 00:00:15.574358   26218 main.go:141] libmachine: (ha-959539-m02) Creating domain...
	I0924 00:00:15.574368   26218 main.go:141] libmachine: (ha-959539-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623/.minikube
	I0924 00:00:15.574373   26218 main.go:141] libmachine: (ha-959539-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623
	I0924 00:00:15.574383   26218 main.go:141] libmachine: (ha-959539-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0924 00:00:15.574388   26218 main.go:141] libmachine: (ha-959539-m02) DBG | Checking permissions on dir: /home/jenkins
	I0924 00:00:15.574397   26218 main.go:141] libmachine: (ha-959539-m02) DBG | Checking permissions on dir: /home
	I0924 00:00:15.574402   26218 main.go:141] libmachine: (ha-959539-m02) DBG | Skipping /home - not owner
	I0924 00:00:15.575397   26218 main.go:141] libmachine: (ha-959539-m02) define libvirt domain using xml: 
	I0924 00:00:15.575418   26218 main.go:141] libmachine: (ha-959539-m02) <domain type='kvm'>
	I0924 00:00:15.575426   26218 main.go:141] libmachine: (ha-959539-m02)   <name>ha-959539-m02</name>
	I0924 00:00:15.575433   26218 main.go:141] libmachine: (ha-959539-m02)   <memory unit='MiB'>2200</memory>
	I0924 00:00:15.575441   26218 main.go:141] libmachine: (ha-959539-m02)   <vcpu>2</vcpu>
	I0924 00:00:15.575446   26218 main.go:141] libmachine: (ha-959539-m02)   <features>
	I0924 00:00:15.575454   26218 main.go:141] libmachine: (ha-959539-m02)     <acpi/>
	I0924 00:00:15.575461   26218 main.go:141] libmachine: (ha-959539-m02)     <apic/>
	I0924 00:00:15.575476   26218 main.go:141] libmachine: (ha-959539-m02)     <pae/>
	I0924 00:00:15.575486   26218 main.go:141] libmachine: (ha-959539-m02)     
	I0924 00:00:15.575497   26218 main.go:141] libmachine: (ha-959539-m02)   </features>
	I0924 00:00:15.575507   26218 main.go:141] libmachine: (ha-959539-m02)   <cpu mode='host-passthrough'>
	I0924 00:00:15.575514   26218 main.go:141] libmachine: (ha-959539-m02)   
	I0924 00:00:15.575526   26218 main.go:141] libmachine: (ha-959539-m02)   </cpu>
	I0924 00:00:15.575536   26218 main.go:141] libmachine: (ha-959539-m02)   <os>
	I0924 00:00:15.575543   26218 main.go:141] libmachine: (ha-959539-m02)     <type>hvm</type>
	I0924 00:00:15.575556   26218 main.go:141] libmachine: (ha-959539-m02)     <boot dev='cdrom'/>
	I0924 00:00:15.575573   26218 main.go:141] libmachine: (ha-959539-m02)     <boot dev='hd'/>
	I0924 00:00:15.575585   26218 main.go:141] libmachine: (ha-959539-m02)     <bootmenu enable='no'/>
	I0924 00:00:15.575595   26218 main.go:141] libmachine: (ha-959539-m02)   </os>
	I0924 00:00:15.575608   26218 main.go:141] libmachine: (ha-959539-m02)   <devices>
	I0924 00:00:15.575620   26218 main.go:141] libmachine: (ha-959539-m02)     <disk type='file' device='cdrom'>
	I0924 00:00:15.575642   26218 main.go:141] libmachine: (ha-959539-m02)       <source file='/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m02/boot2docker.iso'/>
	I0924 00:00:15.575655   26218 main.go:141] libmachine: (ha-959539-m02)       <target dev='hdc' bus='scsi'/>
	I0924 00:00:15.575665   26218 main.go:141] libmachine: (ha-959539-m02)       <readonly/>
	I0924 00:00:15.575675   26218 main.go:141] libmachine: (ha-959539-m02)     </disk>
	I0924 00:00:15.575691   26218 main.go:141] libmachine: (ha-959539-m02)     <disk type='file' device='disk'>
	I0924 00:00:15.575706   26218 main.go:141] libmachine: (ha-959539-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0924 00:00:15.575717   26218 main.go:141] libmachine: (ha-959539-m02)       <source file='/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m02/ha-959539-m02.rawdisk'/>
	I0924 00:00:15.575725   26218 main.go:141] libmachine: (ha-959539-m02)       <target dev='hda' bus='virtio'/>
	I0924 00:00:15.575732   26218 main.go:141] libmachine: (ha-959539-m02)     </disk>
	I0924 00:00:15.575744   26218 main.go:141] libmachine: (ha-959539-m02)     <interface type='network'>
	I0924 00:00:15.575752   26218 main.go:141] libmachine: (ha-959539-m02)       <source network='mk-ha-959539'/>
	I0924 00:00:15.575780   26218 main.go:141] libmachine: (ha-959539-m02)       <model type='virtio'/>
	I0924 00:00:15.575803   26218 main.go:141] libmachine: (ha-959539-m02)     </interface>
	I0924 00:00:15.575828   26218 main.go:141] libmachine: (ha-959539-m02)     <interface type='network'>
	I0924 00:00:15.575848   26218 main.go:141] libmachine: (ha-959539-m02)       <source network='default'/>
	I0924 00:00:15.575861   26218 main.go:141] libmachine: (ha-959539-m02)       <model type='virtio'/>
	I0924 00:00:15.575871   26218 main.go:141] libmachine: (ha-959539-m02)     </interface>
	I0924 00:00:15.575880   26218 main.go:141] libmachine: (ha-959539-m02)     <serial type='pty'>
	I0924 00:00:15.575890   26218 main.go:141] libmachine: (ha-959539-m02)       <target port='0'/>
	I0924 00:00:15.575898   26218 main.go:141] libmachine: (ha-959539-m02)     </serial>
	I0924 00:00:15.575907   26218 main.go:141] libmachine: (ha-959539-m02)     <console type='pty'>
	I0924 00:00:15.575916   26218 main.go:141] libmachine: (ha-959539-m02)       <target type='serial' port='0'/>
	I0924 00:00:15.575929   26218 main.go:141] libmachine: (ha-959539-m02)     </console>
	I0924 00:00:15.575941   26218 main.go:141] libmachine: (ha-959539-m02)     <rng model='virtio'>
	I0924 00:00:15.575953   26218 main.go:141] libmachine: (ha-959539-m02)       <backend model='random'>/dev/random</backend>
	I0924 00:00:15.575961   26218 main.go:141] libmachine: (ha-959539-m02)     </rng>
	I0924 00:00:15.575970   26218 main.go:141] libmachine: (ha-959539-m02)     
	I0924 00:00:15.575977   26218 main.go:141] libmachine: (ha-959539-m02)     
	I0924 00:00:15.575986   26218 main.go:141] libmachine: (ha-959539-m02)   </devices>
	I0924 00:00:15.575994   26218 main.go:141] libmachine: (ha-959539-m02) </domain>
	I0924 00:00:15.576006   26218 main.go:141] libmachine: (ha-959539-m02) 
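	(Annotation: the block above is the complete libvirt domain XML the kvm2 driver defines for the second control-plane VM: 2 vCPUs, 2200 MiB RAM, the boot2docker ISO as a boot CD-ROM, the raw disk, and two virtio NICs, one on the private mk-ha-959539 network and one on default. Once defined, the domain can be examined with virsh against the same URI the driver uses; a read-only spot-check, not part of the test.)
	    $ virsh --connect qemu:///system dumpxml ha-959539-m02
	    $ virsh --connect qemu:///system domifaddr ha-959539-m02   # the DHCP lease the driver polls for below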
	I0924 00:00:15.585706   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:4f:cb:25 in network default
	I0924 00:00:15.586358   26218 main.go:141] libmachine: (ha-959539-m02) Ensuring networks are active...
	I0924 00:00:15.586382   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:15.588682   26218 main.go:141] libmachine: (ha-959539-m02) Ensuring network default is active
	I0924 00:00:15.589090   26218 main.go:141] libmachine: (ha-959539-m02) Ensuring network mk-ha-959539 is active
	I0924 00:00:15.589485   26218 main.go:141] libmachine: (ha-959539-m02) Getting domain xml...
	I0924 00:00:15.590356   26218 main.go:141] libmachine: (ha-959539-m02) Creating domain...
	I0924 00:00:16.876850   26218 main.go:141] libmachine: (ha-959539-m02) Waiting to get IP...
	I0924 00:00:16.877600   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:16.878025   26218 main.go:141] libmachine: (ha-959539-m02) DBG | unable to find current IP address of domain ha-959539-m02 in network mk-ha-959539
	I0924 00:00:16.878048   26218 main.go:141] libmachine: (ha-959539-m02) DBG | I0924 00:00:16.878002   26566 retry.go:31] will retry after 206.511357ms: waiting for machine to come up
	I0924 00:00:17.086726   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:17.087176   26218 main.go:141] libmachine: (ha-959539-m02) DBG | unable to find current IP address of domain ha-959539-m02 in network mk-ha-959539
	I0924 00:00:17.087210   26218 main.go:141] libmachine: (ha-959539-m02) DBG | I0924 00:00:17.087160   26566 retry.go:31] will retry after 339.485484ms: waiting for machine to come up
	I0924 00:00:17.428879   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:17.429496   26218 main.go:141] libmachine: (ha-959539-m02) DBG | unable to find current IP address of domain ha-959539-m02 in network mk-ha-959539
	I0924 00:00:17.429530   26218 main.go:141] libmachine: (ha-959539-m02) DBG | I0924 00:00:17.429442   26566 retry.go:31] will retry after 355.763587ms: waiting for machine to come up
	I0924 00:00:17.787147   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:17.787637   26218 main.go:141] libmachine: (ha-959539-m02) DBG | unable to find current IP address of domain ha-959539-m02 in network mk-ha-959539
	I0924 00:00:17.787665   26218 main.go:141] libmachine: (ha-959539-m02) DBG | I0924 00:00:17.787594   26566 retry.go:31] will retry after 608.491101ms: waiting for machine to come up
	I0924 00:00:18.397336   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:18.397814   26218 main.go:141] libmachine: (ha-959539-m02) DBG | unable to find current IP address of domain ha-959539-m02 in network mk-ha-959539
	I0924 00:00:18.397840   26218 main.go:141] libmachine: (ha-959539-m02) DBG | I0924 00:00:18.397785   26566 retry.go:31] will retry after 502.478814ms: waiting for machine to come up
	I0924 00:00:18.901642   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:18.902265   26218 main.go:141] libmachine: (ha-959539-m02) DBG | unable to find current IP address of domain ha-959539-m02 in network mk-ha-959539
	I0924 00:00:18.902291   26218 main.go:141] libmachine: (ha-959539-m02) DBG | I0924 00:00:18.902211   26566 retry.go:31] will retry after 818.203447ms: waiting for machine to come up
	I0924 00:00:19.722162   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:19.722608   26218 main.go:141] libmachine: (ha-959539-m02) DBG | unable to find current IP address of domain ha-959539-m02 in network mk-ha-959539
	I0924 00:00:19.722629   26218 main.go:141] libmachine: (ha-959539-m02) DBG | I0924 00:00:19.722558   26566 retry.go:31] will retry after 929.046384ms: waiting for machine to come up
	I0924 00:00:20.653489   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:20.653984   26218 main.go:141] libmachine: (ha-959539-m02) DBG | unable to find current IP address of domain ha-959539-m02 in network mk-ha-959539
	I0924 00:00:20.654008   26218 main.go:141] libmachine: (ha-959539-m02) DBG | I0924 00:00:20.653948   26566 retry.go:31] will retry after 1.409190678s: waiting for machine to come up
	I0924 00:00:22.065332   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:22.065896   26218 main.go:141] libmachine: (ha-959539-m02) DBG | unable to find current IP address of domain ha-959539-m02 in network mk-ha-959539
	I0924 00:00:22.065920   26218 main.go:141] libmachine: (ha-959539-m02) DBG | I0924 00:00:22.065833   26566 retry.go:31] will retry after 1.614499189s: waiting for machine to come up
	I0924 00:00:23.681862   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:23.682319   26218 main.go:141] libmachine: (ha-959539-m02) DBG | unable to find current IP address of domain ha-959539-m02 in network mk-ha-959539
	I0924 00:00:23.682363   26218 main.go:141] libmachine: (ha-959539-m02) DBG | I0924 00:00:23.682234   26566 retry.go:31] will retry after 1.460062243s: waiting for machine to come up
	I0924 00:00:25.144293   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:25.144745   26218 main.go:141] libmachine: (ha-959539-m02) DBG | unable to find current IP address of domain ha-959539-m02 in network mk-ha-959539
	I0924 00:00:25.144767   26218 main.go:141] libmachine: (ha-959539-m02) DBG | I0924 00:00:25.144697   26566 retry.go:31] will retry after 1.777929722s: waiting for machine to come up
	I0924 00:00:26.924735   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:26.925200   26218 main.go:141] libmachine: (ha-959539-m02) DBG | unable to find current IP address of domain ha-959539-m02 in network mk-ha-959539
	I0924 00:00:26.925237   26218 main.go:141] libmachine: (ha-959539-m02) DBG | I0924 00:00:26.925162   26566 retry.go:31] will retry after 3.141763872s: waiting for machine to come up
	I0924 00:00:30.069494   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:30.070014   26218 main.go:141] libmachine: (ha-959539-m02) DBG | unable to find current IP address of domain ha-959539-m02 in network mk-ha-959539
	I0924 00:00:30.070036   26218 main.go:141] libmachine: (ha-959539-m02) DBG | I0924 00:00:30.069955   26566 retry.go:31] will retry after 3.647403595s: waiting for machine to come up
	I0924 00:00:33.721303   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:33.721786   26218 main.go:141] libmachine: (ha-959539-m02) DBG | unable to find current IP address of domain ha-959539-m02 in network mk-ha-959539
	I0924 00:00:33.721804   26218 main.go:141] libmachine: (ha-959539-m02) DBG | I0924 00:00:33.721753   26566 retry.go:31] will retry after 4.027076232s: waiting for machine to come up
	I0924 00:00:37.752592   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:37.753064   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has current primary IP address 192.168.39.71 and MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:37.753095   26218 main.go:141] libmachine: (ha-959539-m02) Found IP for machine: 192.168.39.71
	I0924 00:00:37.753104   26218 main.go:141] libmachine: (ha-959539-m02) Reserving static IP address...
	I0924 00:00:37.753574   26218 main.go:141] libmachine: (ha-959539-m02) DBG | unable to find host DHCP lease matching {name: "ha-959539-m02", mac: "52:54:00:7e:17:08", ip: "192.168.39.71"} in network mk-ha-959539
	I0924 00:00:37.827442   26218 main.go:141] libmachine: (ha-959539-m02) DBG | Getting to WaitForSSH function...
	I0924 00:00:37.827474   26218 main.go:141] libmachine: (ha-959539-m02) Reserved static IP address: 192.168.39.71
	I0924 00:00:37.827486   26218 main.go:141] libmachine: (ha-959539-m02) Waiting for SSH to be available...
	I0924 00:00:37.830110   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:37.830505   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:17:08", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:00:29 +0000 UTC Type:0 Mac:52:54:00:7e:17:08 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:minikube Clientid:01:52:54:00:7e:17:08}
	I0924 00:00:37.830530   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:37.830672   26218 main.go:141] libmachine: (ha-959539-m02) DBG | Using SSH client type: external
	I0924 00:00:37.830710   26218 main.go:141] libmachine: (ha-959539-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m02/id_rsa (-rw-------)
	I0924 00:00:37.830778   26218 main.go:141] libmachine: (ha-959539-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.71 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 00:00:37.830803   26218 main.go:141] libmachine: (ha-959539-m02) DBG | About to run SSH command:
	I0924 00:00:37.830826   26218 main.go:141] libmachine: (ha-959539-m02) DBG | exit 0
	I0924 00:00:37.960544   26218 main.go:141] libmachine: (ha-959539-m02) DBG | SSH cmd err, output: <nil>: 
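	(Annotation: WaitForSSH shells in with plain OpenSSH using the per-machine key generated earlier; the equivalent manual command, taken from the argument list logged above, is:)
	    $ ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	          -i /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m02/id_rsa \
	          docker@192.168.39.71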
	I0924 00:00:37.960821   26218 main.go:141] libmachine: (ha-959539-m02) KVM machine creation complete!
	I0924 00:00:37.961319   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetConfigRaw
	I0924 00:00:37.961983   26218 main.go:141] libmachine: (ha-959539-m02) Calling .DriverName
	I0924 00:00:37.962222   26218 main.go:141] libmachine: (ha-959539-m02) Calling .DriverName
	I0924 00:00:37.962419   26218 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0924 00:00:37.962460   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetState
	I0924 00:00:37.963697   26218 main.go:141] libmachine: Detecting operating system of created instance...
	I0924 00:00:37.963714   26218 main.go:141] libmachine: Waiting for SSH to be available...
	I0924 00:00:37.963734   26218 main.go:141] libmachine: Getting to WaitForSSH function...
	I0924 00:00:37.963742   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHHostname
	I0924 00:00:37.966078   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:37.966462   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:17:08", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:00:29 +0000 UTC Type:0 Mac:52:54:00:7e:17:08 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ha-959539-m02 Clientid:01:52:54:00:7e:17:08}
	I0924 00:00:37.966483   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:37.966660   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHPort
	I0924 00:00:37.966813   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHKeyPath
	I0924 00:00:37.966945   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHKeyPath
	I0924 00:00:37.967054   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHUsername
	I0924 00:00:37.967205   26218 main.go:141] libmachine: Using SSH client type: native
	I0924 00:00:37.967481   26218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0924 00:00:37.967492   26218 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0924 00:00:38.079589   26218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 00:00:38.079610   26218 main.go:141] libmachine: Detecting the provisioner...
	I0924 00:00:38.079617   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHHostname
	I0924 00:00:38.082503   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:38.082929   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:17:08", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:00:29 +0000 UTC Type:0 Mac:52:54:00:7e:17:08 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ha-959539-m02 Clientid:01:52:54:00:7e:17:08}
	I0924 00:00:38.082950   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:38.083140   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHPort
	I0924 00:00:38.083340   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHKeyPath
	I0924 00:00:38.083509   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHKeyPath
	I0924 00:00:38.083666   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHUsername
	I0924 00:00:38.083825   26218 main.go:141] libmachine: Using SSH client type: native
	I0924 00:00:38.083986   26218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0924 00:00:38.083997   26218 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0924 00:00:38.197000   26218 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0924 00:00:38.197103   26218 main.go:141] libmachine: found compatible host: buildroot
	I0924 00:00:38.197116   26218 main.go:141] libmachine: Provisioning with buildroot...
	I0924 00:00:38.197126   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetMachineName
	I0924 00:00:38.197376   26218 buildroot.go:166] provisioning hostname "ha-959539-m02"
	I0924 00:00:38.197411   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetMachineName
	I0924 00:00:38.197604   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHHostname
	I0924 00:00:38.200444   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:38.200771   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:17:08", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:00:29 +0000 UTC Type:0 Mac:52:54:00:7e:17:08 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ha-959539-m02 Clientid:01:52:54:00:7e:17:08}
	I0924 00:00:38.200795   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:38.200984   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHPort
	I0924 00:00:38.201176   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHKeyPath
	I0924 00:00:38.201357   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHKeyPath
	I0924 00:00:38.201493   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHUsername
	I0924 00:00:38.201648   26218 main.go:141] libmachine: Using SSH client type: native
	I0924 00:00:38.201800   26218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0924 00:00:38.201815   26218 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-959539-m02 && echo "ha-959539-m02" | sudo tee /etc/hostname
	I0924 00:00:38.325460   26218 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-959539-m02
	
	I0924 00:00:38.325485   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHHostname
	I0924 00:00:38.328105   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:38.328475   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:17:08", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:00:29 +0000 UTC Type:0 Mac:52:54:00:7e:17:08 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ha-959539-m02 Clientid:01:52:54:00:7e:17:08}
	I0924 00:00:38.328501   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:38.328664   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHPort
	I0924 00:00:38.328838   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHKeyPath
	I0924 00:00:38.329112   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHKeyPath
	I0924 00:00:38.329333   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHUsername
	I0924 00:00:38.329513   26218 main.go:141] libmachine: Using SSH client type: native
	I0924 00:00:38.329688   26218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0924 00:00:38.329704   26218 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-959539-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-959539-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-959539-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 00:00:38.449811   26218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 00:00:38.449850   26218 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19696-7623/.minikube CaCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19696-7623/.minikube}
	I0924 00:00:38.449870   26218 buildroot.go:174] setting up certificates
	I0924 00:00:38.449890   26218 provision.go:84] configureAuth start
	I0924 00:00:38.449902   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetMachineName
	I0924 00:00:38.450206   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetIP
	I0924 00:00:38.453211   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:38.453603   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:17:08", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:00:29 +0000 UTC Type:0 Mac:52:54:00:7e:17:08 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ha-959539-m02 Clientid:01:52:54:00:7e:17:08}
	I0924 00:00:38.453632   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:38.453799   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHHostname
	I0924 00:00:38.456450   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:38.456868   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:17:08", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:00:29 +0000 UTC Type:0 Mac:52:54:00:7e:17:08 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ha-959539-m02 Clientid:01:52:54:00:7e:17:08}
	I0924 00:00:38.456897   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:38.457045   26218 provision.go:143] copyHostCerts
	I0924 00:00:38.457081   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0924 00:00:38.457120   26218 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem, removing ...
	I0924 00:00:38.457131   26218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0924 00:00:38.457206   26218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem (1082 bytes)
	I0924 00:00:38.457299   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0924 00:00:38.457319   26218 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem, removing ...
	I0924 00:00:38.457327   26218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0924 00:00:38.457353   26218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem (1123 bytes)
	I0924 00:00:38.457401   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0924 00:00:38.457420   26218 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem, removing ...
	I0924 00:00:38.457427   26218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0924 00:00:38.457450   26218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem (1679 bytes)
	I0924 00:00:38.457543   26218 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem org=jenkins.ha-959539-m02 san=[127.0.0.1 192.168.39.71 ha-959539-m02 localhost minikube]
	I0924 00:00:38.700010   26218 provision.go:177] copyRemoteCerts
	I0924 00:00:38.700077   26218 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 00:00:38.700106   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHHostname
	I0924 00:00:38.703047   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:38.703677   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:17:08", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:00:29 +0000 UTC Type:0 Mac:52:54:00:7e:17:08 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ha-959539-m02 Clientid:01:52:54:00:7e:17:08}
	I0924 00:00:38.703706   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:38.703938   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHPort
	I0924 00:00:38.704136   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHKeyPath
	I0924 00:00:38.704273   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHUsername
	I0924 00:00:38.704412   26218 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m02/id_rsa Username:docker}
	I0924 00:00:38.790480   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0924 00:00:38.790557   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0924 00:00:38.814753   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0924 00:00:38.814837   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0924 00:00:38.838252   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0924 00:00:38.838325   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 00:00:38.861203   26218 provision.go:87] duration metric: took 411.299288ms to configureAuth
	I0924 00:00:38.861229   26218 buildroot.go:189] setting minikube options for container-runtime
	I0924 00:00:38.861474   26218 config.go:182] Loaded profile config "ha-959539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 00:00:38.861569   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHHostname
	I0924 00:00:38.864432   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:38.864889   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:17:08", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:00:29 +0000 UTC Type:0 Mac:52:54:00:7e:17:08 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ha-959539-m02 Clientid:01:52:54:00:7e:17:08}
	I0924 00:00:38.864918   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:38.865150   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHPort
	I0924 00:00:38.865356   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHKeyPath
	I0924 00:00:38.865560   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHKeyPath
	I0924 00:00:38.865731   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHUsername
	I0924 00:00:38.865903   26218 main.go:141] libmachine: Using SSH client type: native
	I0924 00:00:38.866055   26218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0924 00:00:38.866068   26218 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 00:00:39.108025   26218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 00:00:39.108048   26218 main.go:141] libmachine: Checking connection to Docker...
	I0924 00:00:39.108055   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetURL
	I0924 00:00:39.109415   26218 main.go:141] libmachine: (ha-959539-m02) DBG | Using libvirt version 6000000
	I0924 00:00:39.111778   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:39.112117   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:17:08", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:00:29 +0000 UTC Type:0 Mac:52:54:00:7e:17:08 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ha-959539-m02 Clientid:01:52:54:00:7e:17:08}
	I0924 00:00:39.112136   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:39.112442   26218 main.go:141] libmachine: Docker is up and running!
	I0924 00:00:39.112459   26218 main.go:141] libmachine: Reticulating splines...
	I0924 00:00:39.112465   26218 client.go:171] duration metric: took 24.035864378s to LocalClient.Create
	I0924 00:00:39.112488   26218 start.go:167] duration metric: took 24.035928123s to libmachine.API.Create "ha-959539"
	I0924 00:00:39.112505   26218 start.go:293] postStartSetup for "ha-959539-m02" (driver="kvm2")
	I0924 00:00:39.112530   26218 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 00:00:39.112552   26218 main.go:141] libmachine: (ha-959539-m02) Calling .DriverName
	I0924 00:00:39.112758   26218 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 00:00:39.112780   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHHostname
	I0924 00:00:39.115333   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:39.115725   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:17:08", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:00:29 +0000 UTC Type:0 Mac:52:54:00:7e:17:08 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ha-959539-m02 Clientid:01:52:54:00:7e:17:08}
	I0924 00:00:39.115753   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:39.115918   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHPort
	I0924 00:00:39.116088   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHKeyPath
	I0924 00:00:39.116213   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHUsername
	I0924 00:00:39.116357   26218 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m02/id_rsa Username:docker}
	I0924 00:00:39.202485   26218 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 00:00:39.206952   26218 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 00:00:39.206985   26218 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/addons for local assets ...
	I0924 00:00:39.207071   26218 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/files for local assets ...
	I0924 00:00:39.207148   26218 filesync.go:149] local asset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> 147932.pem in /etc/ssl/certs
	I0924 00:00:39.207163   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> /etc/ssl/certs/147932.pem
	I0924 00:00:39.207242   26218 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 00:00:39.216574   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /etc/ssl/certs/147932.pem (1708 bytes)
	I0924 00:00:39.239506   26218 start.go:296] duration metric: took 126.985038ms for postStartSetup
	I0924 00:00:39.239558   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetConfigRaw
	I0924 00:00:39.240153   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetIP
	I0924 00:00:39.242816   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:39.243178   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:17:08", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:00:29 +0000 UTC Type:0 Mac:52:54:00:7e:17:08 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ha-959539-m02 Clientid:01:52:54:00:7e:17:08}
	I0924 00:00:39.243207   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:39.243507   26218 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/config.json ...
	I0924 00:00:39.243767   26218 start.go:128] duration metric: took 24.186030679s to createHost
	I0924 00:00:39.243797   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHHostname
	I0924 00:00:39.246320   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:39.246794   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:17:08", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:00:29 +0000 UTC Type:0 Mac:52:54:00:7e:17:08 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ha-959539-m02 Clientid:01:52:54:00:7e:17:08}
	I0924 00:00:39.246819   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:39.246947   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHPort
	I0924 00:00:39.247124   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHKeyPath
	I0924 00:00:39.247283   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHKeyPath
	I0924 00:00:39.247416   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHUsername
	I0924 00:00:39.247561   26218 main.go:141] libmachine: Using SSH client type: native
	I0924 00:00:39.247714   26218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0924 00:00:39.247724   26218 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 00:00:39.360845   26218 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727136039.320054599
	
	I0924 00:00:39.360875   26218 fix.go:216] guest clock: 1727136039.320054599
	I0924 00:00:39.360884   26218 fix.go:229] Guest: 2024-09-24 00:00:39.320054599 +0000 UTC Remote: 2024-09-24 00:00:39.243782701 +0000 UTC m=+72.471728258 (delta=76.271898ms)
	I0924 00:00:39.360910   26218 fix.go:200] guest clock delta is within tolerance: 76.271898ms
	I0924 00:00:39.360916   26218 start.go:83] releasing machines lock for "ha-959539-m02", held for 24.303253954s
	I0924 00:00:39.360955   26218 main.go:141] libmachine: (ha-959539-m02) Calling .DriverName
	I0924 00:00:39.361201   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetIP
	I0924 00:00:39.363900   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:39.364402   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:17:08", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:00:29 +0000 UTC Type:0 Mac:52:54:00:7e:17:08 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ha-959539-m02 Clientid:01:52:54:00:7e:17:08}
	I0924 00:00:39.364444   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:39.366881   26218 out.go:177] * Found network options:
	I0924 00:00:39.368856   26218 out.go:177]   - NO_PROXY=192.168.39.231
	W0924 00:00:39.370661   26218 proxy.go:119] fail to check proxy env: Error ip not in block
	I0924 00:00:39.370699   26218 main.go:141] libmachine: (ha-959539-m02) Calling .DriverName
	I0924 00:00:39.371263   26218 main.go:141] libmachine: (ha-959539-m02) Calling .DriverName
	I0924 00:00:39.371455   26218 main.go:141] libmachine: (ha-959539-m02) Calling .DriverName
	I0924 00:00:39.371538   26218 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 00:00:39.371594   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHHostname
	W0924 00:00:39.371611   26218 proxy.go:119] fail to check proxy env: Error ip not in block
	I0924 00:00:39.371685   26218 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 00:00:39.371706   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHHostname
	I0924 00:00:39.374357   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:39.374663   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:39.374694   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:17:08", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:00:29 +0000 UTC Type:0 Mac:52:54:00:7e:17:08 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ha-959539-m02 Clientid:01:52:54:00:7e:17:08}
	I0924 00:00:39.374712   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:39.374850   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHPort
	I0924 00:00:39.375045   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHKeyPath
	I0924 00:00:39.375085   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:17:08", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:00:29 +0000 UTC Type:0 Mac:52:54:00:7e:17:08 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ha-959539-m02 Clientid:01:52:54:00:7e:17:08}
	I0924 00:00:39.375111   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:39.375202   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHUsername
	I0924 00:00:39.375362   26218 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m02/id_rsa Username:docker}
	I0924 00:00:39.375377   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHPort
	I0924 00:00:39.375561   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHKeyPath
	I0924 00:00:39.375696   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetSSHUsername
	I0924 00:00:39.375813   26218 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m02/id_rsa Username:docker}
	I0924 00:00:39.627921   26218 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 00:00:39.633495   26218 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 00:00:39.633553   26218 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 00:00:39.648951   26218 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 00:00:39.648983   26218 start.go:495] detecting cgroup driver to use...
	I0924 00:00:39.649040   26218 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 00:00:39.665083   26218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 00:00:39.679257   26218 docker.go:217] disabling cri-docker service (if available) ...
	I0924 00:00:39.679308   26218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 00:00:39.692687   26218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 00:00:39.705979   26218 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 00:00:39.817630   26218 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 00:00:39.947466   26218 docker.go:233] disabling docker service ...
	I0924 00:00:39.947532   26218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 00:00:39.969264   26218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 00:00:39.982704   26218 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 00:00:40.112775   26218 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 00:00:40.227163   26218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 00:00:40.240677   26218 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 00:00:40.258433   26218 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 00:00:40.258483   26218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:00:40.268957   26218 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 00:00:40.269028   26218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:00:40.279413   26218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:00:40.289512   26218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:00:40.299715   26218 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 00:00:40.310010   26218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:00:40.320219   26218 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:00:40.336748   26218 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:00:40.346864   26218 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 00:00:40.355761   26218 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 00:00:40.355825   26218 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 00:00:40.368724   26218 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 00:00:40.378522   26218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 00:00:40.486107   26218 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 00:00:40.577907   26218 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 00:00:40.577981   26218 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 00:00:40.582555   26218 start.go:563] Will wait 60s for crictl version
	I0924 00:00:40.582622   26218 ssh_runner.go:195] Run: which crictl
	I0924 00:00:40.586219   26218 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 00:00:40.622719   26218 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 00:00:40.622812   26218 ssh_runner.go:195] Run: crio --version
	I0924 00:00:40.650450   26218 ssh_runner.go:195] Run: crio --version
	I0924 00:00:40.681082   26218 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 00:00:40.682576   26218 out.go:177]   - env NO_PROXY=192.168.39.231
	I0924 00:00:40.683809   26218 main.go:141] libmachine: (ha-959539-m02) Calling .GetIP
	I0924 00:00:40.686666   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:40.687065   26218 main.go:141] libmachine: (ha-959539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:17:08", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:00:29 +0000 UTC Type:0 Mac:52:54:00:7e:17:08 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ha-959539-m02 Clientid:01:52:54:00:7e:17:08}
	I0924 00:00:40.687087   26218 main.go:141] libmachine: (ha-959539-m02) DBG | domain ha-959539-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:7e:17:08 in network mk-ha-959539
	I0924 00:00:40.687306   26218 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0924 00:00:40.691475   26218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 00:00:40.703474   26218 mustload.go:65] Loading cluster: ha-959539
	I0924 00:00:40.703695   26218 config.go:182] Loaded profile config "ha-959539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 00:00:40.703966   26218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:00:40.704003   26218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:00:40.718859   26218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40045
	I0924 00:00:40.719296   26218 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:00:40.719825   26218 main.go:141] libmachine: Using API Version  1
	I0924 00:00:40.719845   26218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:00:40.720145   26218 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:00:40.720370   26218 main.go:141] libmachine: (ha-959539) Calling .GetState
	I0924 00:00:40.721815   26218 host.go:66] Checking if "ha-959539" exists ...
	I0924 00:00:40.722094   26218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:00:40.722128   26218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:00:40.736945   26218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43023
	I0924 00:00:40.737421   26218 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:00:40.737905   26218 main.go:141] libmachine: Using API Version  1
	I0924 00:00:40.737924   26218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:00:40.738222   26218 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:00:40.738511   26218 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0924 00:00:40.738689   26218 certs.go:68] Setting up /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539 for IP: 192.168.39.71
	I0924 00:00:40.738704   26218 certs.go:194] generating shared ca certs ...
	I0924 00:00:40.738719   26218 certs.go:226] acquiring lock for ca certs: {Name:mk91de42c259e96c17f08dd58c70c00821bd0735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:00:40.738861   26218 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key
	I0924 00:00:40.738903   26218 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key
	I0924 00:00:40.738915   26218 certs.go:256] generating profile certs ...
	I0924 00:00:40.738991   26218 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/client.key
	I0924 00:00:40.739018   26218 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key.b2e74be0
	I0924 00:00:40.739035   26218 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt.b2e74be0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.231 192.168.39.71 192.168.39.254]
	I0924 00:00:41.143984   26218 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt.b2e74be0 ...
	I0924 00:00:41.144014   26218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt.b2e74be0: {Name:mk20b6843b0401b0c56e7890c984fa68d261314f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:00:41.144175   26218 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key.b2e74be0 ...
	I0924 00:00:41.144188   26218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key.b2e74be0: {Name:mk7575fb7ddfde936c86d46545e958478f16edb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:00:41.144260   26218 certs.go:381] copying /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt.b2e74be0 -> /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt
	I0924 00:00:41.144430   26218 certs.go:385] copying /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key.b2e74be0 -> /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key
	I0924 00:00:41.144555   26218 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.key
	I0924 00:00:41.144571   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0924 00:00:41.144584   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0924 00:00:41.144594   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0924 00:00:41.144605   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0924 00:00:41.144615   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0924 00:00:41.144625   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0924 00:00:41.144635   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0924 00:00:41.144645   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0924 00:00:41.144688   26218 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem (1338 bytes)
	W0924 00:00:41.144720   26218 certs.go:480] ignoring /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793_empty.pem, impossibly tiny 0 bytes
	I0924 00:00:41.144729   26218 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem (1675 bytes)
	I0924 00:00:41.144749   26218 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem (1082 bytes)
	I0924 00:00:41.144772   26218 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem (1123 bytes)
	I0924 00:00:41.144793   26218 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem (1679 bytes)
	I0924 00:00:41.144829   26218 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem (1708 bytes)
	I0924 00:00:41.144853   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0924 00:00:41.144868   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem -> /usr/share/ca-certificates/14793.pem
	I0924 00:00:41.144880   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> /usr/share/ca-certificates/147932.pem
	I0924 00:00:41.144915   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0924 00:00:41.148030   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:00:41.148427   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0924 00:00:41.148454   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:00:41.148614   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0924 00:00:41.148808   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0924 00:00:41.149000   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0924 00:00:41.149135   26218 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/id_rsa Username:docker}
	I0924 00:00:41.228803   26218 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0924 00:00:41.233988   26218 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0924 00:00:41.244943   26218 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0924 00:00:41.249126   26218 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0924 00:00:41.259697   26218 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0924 00:00:41.263836   26218 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0924 00:00:41.275144   26218 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0924 00:00:41.279454   26218 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0924 00:00:41.290396   26218 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0924 00:00:41.295094   26218 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0924 00:00:41.307082   26218 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0924 00:00:41.310877   26218 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0924 00:00:41.325438   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 00:00:41.350629   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 00:00:41.374907   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 00:00:41.399716   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0924 00:00:41.424061   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0924 00:00:41.447992   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 00:00:41.471662   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 00:00:41.494955   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0924 00:00:41.517872   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 00:00:41.540286   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem --> /usr/share/ca-certificates/14793.pem (1338 bytes)
	I0924 00:00:41.563177   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /usr/share/ca-certificates/147932.pem (1708 bytes)
	I0924 00:00:41.585906   26218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0924 00:00:41.601283   26218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0924 00:00:41.617635   26218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0924 00:00:41.633218   26218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0924 00:00:41.648995   26218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0924 00:00:41.664675   26218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0924 00:00:41.680596   26218 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0924 00:00:41.696250   26218 ssh_runner.go:195] Run: openssl version
	I0924 00:00:41.701694   26218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 00:00:41.711789   26218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 00:00:41.716030   26218 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0924 00:00:41.716101   26218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 00:00:41.721933   26218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 00:00:41.732158   26218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14793.pem && ln -fs /usr/share/ca-certificates/14793.pem /etc/ssl/certs/14793.pem"
	I0924 00:00:41.742443   26218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14793.pem
	I0924 00:00:41.746788   26218 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 23:55 /usr/share/ca-certificates/14793.pem
	I0924 00:00:41.746839   26218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14793.pem
	I0924 00:00:41.752121   26218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14793.pem /etc/ssl/certs/51391683.0"
	I0924 00:00:41.763012   26218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147932.pem && ln -fs /usr/share/ca-certificates/147932.pem /etc/ssl/certs/147932.pem"
	I0924 00:00:41.774793   26218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147932.pem
	I0924 00:00:41.779310   26218 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 23:55 /usr/share/ca-certificates/147932.pem
	I0924 00:00:41.779366   26218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147932.pem
	I0924 00:00:41.784990   26218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147932.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 00:00:41.795333   26218 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 00:00:41.799293   26218 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0924 00:00:41.799344   26218 kubeadm.go:934] updating node {m02 192.168.39.71 8443 v1.31.1 crio true true} ...
	I0924 00:00:41.799409   26218 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-959539-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.71
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-959539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 00:00:41.799432   26218 kube-vip.go:115] generating kube-vip config ...
	I0924 00:00:41.799464   26218 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0924 00:00:41.816587   26218 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0924 00:00:41.816663   26218 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0924 00:00:41.816743   26218 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 00:00:41.827548   26218 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0924 00:00:41.827613   26218 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0924 00:00:41.837289   26218 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0924 00:00:41.837325   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0924 00:00:41.837335   26218 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19696-7623/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0924 00:00:41.837374   26218 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0924 00:00:41.837335   26218 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19696-7623/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0924 00:00:41.841429   26218 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0924 00:00:41.841451   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0924 00:00:42.671785   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0924 00:00:42.671868   26218 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0924 00:00:42.676727   26218 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0924 00:00:42.676769   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0924 00:00:42.782086   26218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 00:00:42.829038   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0924 00:00:42.829147   26218 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0924 00:00:42.840769   26218 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0924 00:00:42.840809   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0924 00:00:43.263339   26218 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0924 00:00:43.276175   26218 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0924 00:00:43.295973   26218 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 00:00:43.314983   26218 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0924 00:00:43.331751   26218 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0924 00:00:43.335923   26218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 00:00:43.347682   26218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 00:00:43.465742   26218 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 00:00:43.485298   26218 host.go:66] Checking if "ha-959539" exists ...
	I0924 00:00:43.485784   26218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:00:43.485844   26218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:00:43.501576   26218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46265
	I0924 00:00:43.502143   26218 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:00:43.502637   26218 main.go:141] libmachine: Using API Version  1
	I0924 00:00:43.502661   26218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:00:43.502992   26218 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:00:43.503177   26218 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0924 00:00:43.503343   26218 start.go:317] joinCluster: &{Name:ha-959539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-959539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.71 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 00:00:43.503440   26218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0924 00:00:43.503454   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0924 00:00:43.506923   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:00:43.507450   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0924 00:00:43.507479   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:00:43.507654   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0924 00:00:43.507814   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0924 00:00:43.507940   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0924 00:00:43.508061   26218 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/id_rsa Username:docker}
	I0924 00:00:43.662724   26218 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.71 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 00:00:43.662763   26218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token pid2mx.knnb3pqsxosow7jx --discovery-token-ca-cert-hash sha256:2d4dd9f37d73db6513005d0e16c5a35989d3b102eed8cac0bf67b360d3396aa8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-959539-m02 --control-plane --apiserver-advertise-address=192.168.39.71 --apiserver-bind-port=8443"
	I0924 00:01:07.367829   26218 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token pid2mx.knnb3pqsxosow7jx --discovery-token-ca-cert-hash sha256:2d4dd9f37d73db6513005d0e16c5a35989d3b102eed8cac0bf67b360d3396aa8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-959539-m02 --control-plane --apiserver-advertise-address=192.168.39.71 --apiserver-bind-port=8443": (23.705046169s)
	I0924 00:01:07.367865   26218 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0924 00:01:07.953375   26218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-959539-m02 minikube.k8s.io/updated_at=2024_09_24T00_01_07_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c minikube.k8s.io/name=ha-959539 minikube.k8s.io/primary=false
	I0924 00:01:08.091888   26218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-959539-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0924 00:01:08.215534   26218 start.go:319] duration metric: took 24.71218473s to joinCluster
	I0924 00:01:08.215627   26218 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.71 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 00:01:08.215925   26218 config.go:182] Loaded profile config "ha-959539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 00:01:08.218104   26218 out.go:177] * Verifying Kubernetes components...
	I0924 00:01:08.219304   26218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 00:01:08.515326   26218 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 00:01:08.536625   26218 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 00:01:08.536894   26218 kapi.go:59] client config for ha-959539: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/client.crt", KeyFile:"/home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/client.key", CAFile:"/home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0924 00:01:08.536951   26218 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.231:8443
	I0924 00:01:08.537167   26218 node_ready.go:35] waiting up to 6m0s for node "ha-959539-m02" to be "Ready" ...
	I0924 00:01:08.537285   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:08.537301   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:08.537312   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:08.537318   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:08.545839   26218 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0924 00:01:09.037697   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:09.037724   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:09.037735   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:09.037744   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:09.045511   26218 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0924 00:01:09.538147   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:09.538175   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:09.538188   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:09.538195   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:09.545313   26218 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0924 00:01:10.038238   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:10.038262   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:10.038270   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:10.038274   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:10.041715   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:10.538175   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:10.538205   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:10.538219   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:10.538224   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:10.541872   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:10.542370   26218 node_ready.go:53] node "ha-959539-m02" has status "Ready":"False"
	I0924 00:01:11.037630   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:11.037679   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:11.037691   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:11.037696   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:11.041245   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:11.538259   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:11.538294   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:11.538302   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:11.538307   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:11.541611   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:12.038188   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:12.038209   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:12.038216   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:12.038221   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:12.041674   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:12.537618   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:12.537637   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:12.537645   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:12.537655   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:12.541319   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:13.037995   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:13.038016   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:13.038025   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:13.038028   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:13.041345   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:13.042019   26218 node_ready.go:53] node "ha-959539-m02" has status "Ready":"False"
	I0924 00:01:13.537769   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:13.537794   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:13.537805   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:13.537811   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:13.541685   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:14.037855   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:14.037878   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:14.037887   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:14.037891   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:14.288753   26218 round_trippers.go:574] Response Status: 200 OK in 250 milliseconds
	I0924 00:01:14.538102   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:14.538126   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:14.538137   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:14.538145   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:14.541469   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:15.037484   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:15.037516   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:15.037537   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:15.037541   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:15.040833   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:15.537646   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:15.537676   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:15.537694   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:15.537700   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:15.541088   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:15.541719   26218 node_ready.go:53] node "ha-959539-m02" has status "Ready":"False"
	I0924 00:01:16.037867   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:16.037898   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:16.037910   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:16.037916   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:16.041934   26218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 00:01:16.537983   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:16.538008   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:16.538018   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:16.538026   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:16.542888   26218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 00:01:17.037795   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:17.037815   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:17.037823   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:17.037826   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:17.040833   26218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 00:01:17.537691   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:17.537714   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:17.537721   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:17.537727   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:17.540858   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:18.037970   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:18.037995   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:18.038031   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:18.038036   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:18.041329   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:18.042104   26218 node_ready.go:53] node "ha-959539-m02" has status "Ready":"False"
	I0924 00:01:18.537909   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:18.537934   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:18.537947   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:18.537953   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:18.541524   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:19.037353   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:19.037406   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:19.037417   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:19.037421   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:19.040693   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:19.537691   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:19.537713   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:19.537721   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:19.537725   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:19.541362   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:20.038258   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:20.038281   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:20.038289   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:20.038293   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:20.041505   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:20.042205   26218 node_ready.go:53] node "ha-959539-m02" has status "Ready":"False"
	I0924 00:01:20.538173   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:20.538196   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:20.538204   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:20.538208   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:20.541444   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:21.038308   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:21.038332   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:21.038340   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:21.038345   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:21.041591   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:21.537466   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:21.537490   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:21.537498   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:21.537507   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:21.541243   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:22.037776   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:22.037798   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:22.037806   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:22.037809   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:22.041584   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:22.537387   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:22.537410   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:22.537419   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:22.537423   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:22.540436   26218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 00:01:22.540915   26218 node_ready.go:53] node "ha-959539-m02" has status "Ready":"False"
	I0924 00:01:23.038376   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:23.038396   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:23.038404   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:23.038408   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:23.042386   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:23.537841   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:23.537863   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:23.537871   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:23.537876   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:23.540735   26218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 00:01:24.037766   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:24.037791   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:24.037800   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:24.037805   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:24.041574   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:24.537636   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:24.537662   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:24.537674   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:24.537679   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:24.540714   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:24.541302   26218 node_ready.go:53] node "ha-959539-m02" has status "Ready":"False"
	I0924 00:01:25.037447   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:25.037470   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:25.037487   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:25.037491   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:25.040959   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:25.538316   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:25.538358   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:25.538366   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:25.538370   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:25.542089   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:26.037942   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:26.037965   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:26.037972   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:26.037977   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:26.041187   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:26.538316   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:26.538337   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:26.538344   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:26.538347   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:26.541682   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:26.542279   26218 node_ready.go:53] node "ha-959539-m02" has status "Ready":"False"
	I0924 00:01:27.037486   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:27.037511   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:27.037519   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:27.037523   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:27.040661   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:27.041287   26218 node_ready.go:49] node "ha-959539-m02" has status "Ready":"True"
	I0924 00:01:27.041311   26218 node_ready.go:38] duration metric: took 18.504110454s for node "ha-959539-m02" to be "Ready" ...
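Note on the block above: the repeated ~500ms GETs against /api/v1/nodes/ha-959539-m02 are minikube's node_ready wait, which polls the node object until its Ready condition reports True (about 18.5s here). Purely as an illustration of that check, and not minikube's actual implementation, the same loop written against client-go could look like the sketch below; the clientset parameter cs and the helper name waitNodeReady are assumptions.

    // Sketch: poll a node until its Ready condition is True.
    package example

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // tolerate transient API errors and keep polling
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }
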
	I0924 00:01:27.041320   26218 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 00:01:27.041412   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods
	I0924 00:01:27.041422   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:27.041429   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:27.041433   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:27.045587   26218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 00:01:27.053524   26218 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nkbzw" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:27.053610   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-nkbzw
	I0924 00:01:27.053618   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:27.053626   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:27.053630   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:27.056737   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:27.057414   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:01:27.057431   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:27.057440   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:27.057448   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:27.059974   26218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 00:01:27.060671   26218 pod_ready.go:93] pod "coredns-7c65d6cfc9-nkbzw" in "kube-system" namespace has status "Ready":"True"
	I0924 00:01:27.060693   26218 pod_ready.go:82] duration metric: took 7.143278ms for pod "coredns-7c65d6cfc9-nkbzw" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:27.060705   26218 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ss8lg" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:27.060770   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ss8lg
	I0924 00:01:27.060779   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:27.060786   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:27.060789   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:27.063296   26218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 00:01:27.064025   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:01:27.064042   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:27.064052   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:27.064057   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:27.066509   26218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 00:01:27.067043   26218 pod_ready.go:93] pod "coredns-7c65d6cfc9-ss8lg" in "kube-system" namespace has status "Ready":"True"
	I0924 00:01:27.067072   26218 pod_ready.go:82] duration metric: took 6.358417ms for pod "coredns-7c65d6cfc9-ss8lg" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:27.067085   26218 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-959539" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:27.067169   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/etcd-ha-959539
	I0924 00:01:27.067180   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:27.067191   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:27.067197   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:27.069632   26218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 00:01:27.070349   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:01:27.070365   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:27.070372   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:27.070376   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:27.072726   26218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 00:01:27.073202   26218 pod_ready.go:93] pod "etcd-ha-959539" in "kube-system" namespace has status "Ready":"True"
	I0924 00:01:27.073221   26218 pod_ready.go:82] duration metric: took 6.128232ms for pod "etcd-ha-959539" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:27.073233   26218 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-959539-m02" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:27.073304   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/etcd-ha-959539-m02
	I0924 00:01:27.073314   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:27.073325   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:27.073334   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:27.075606   26218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 00:01:27.076170   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:27.076186   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:27.076196   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:27.076203   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:27.078974   26218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 00:01:27.079404   26218 pod_ready.go:93] pod "etcd-ha-959539-m02" in "kube-system" namespace has status "Ready":"True"
	I0924 00:01:27.079423   26218 pod_ready.go:82] duration metric: took 6.178632ms for pod "etcd-ha-959539-m02" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:27.079441   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-959539" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:27.237846   26218 request.go:632] Waited for 158.344773ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-959539
	I0924 00:01:27.237906   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-959539
	I0924 00:01:27.237912   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:27.237919   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:27.237923   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:27.241325   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:27.438393   26218 request.go:632] Waited for 196.447833ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:01:27.438479   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:01:27.438489   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:27.438501   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:27.438509   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:27.447385   26218 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0924 00:01:27.447843   26218 pod_ready.go:93] pod "kube-apiserver-ha-959539" in "kube-system" namespace has status "Ready":"True"
	I0924 00:01:27.447861   26218 pod_ready.go:82] duration metric: took 368.411985ms for pod "kube-apiserver-ha-959539" in "kube-system" namespace to be "Ready" ...
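Note on the "Waited for ... due to client-side throttling, not priority and fairness" lines that begin here: they come from client-go's own rate limiter, not from the API server. The rest.Config dumped earlier shows QPS:0 and Burst:0, which means the client falls back to its defaults (roughly 5 requests/sec with a burst of 10), so the back-to-back pod and node GETs get spaced out by ~150-200ms each. As an illustration only (QPS and Burst are real rest.Config fields; the kubeconfig path and function name are assumptions), a client with a larger budget would be built like this:

    // Sketch: build a clientset with a higher client-side rate-limit budget.
    package example

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 50    // client-go defaults to ~5 when this is left at 0
        cfg.Burst = 100 // client-go defaults to ~10 when this is left at 0
        return kubernetes.NewForConfig(cfg)
    }
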
	I0924 00:01:27.447873   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-959539-m02" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:27.638213   26218 request.go:632] Waited for 190.264015ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-959539-m02
	I0924 00:01:27.638314   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-959539-m02
	I0924 00:01:27.638323   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:27.638331   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:27.638335   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:27.641724   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:27.837671   26218 request.go:632] Waited for 195.307183ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:27.837734   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:27.837741   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:27.837750   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:27.837755   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:27.841548   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:27.842107   26218 pod_ready.go:93] pod "kube-apiserver-ha-959539-m02" in "kube-system" namespace has status "Ready":"True"
	I0924 00:01:27.842125   26218 pod_ready.go:82] duration metric: took 394.244431ms for pod "kube-apiserver-ha-959539-m02" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:27.842138   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-959539" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:28.038308   26218 request.go:632] Waited for 196.100963ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-959539
	I0924 00:01:28.038387   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-959539
	I0924 00:01:28.038399   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:28.038408   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:28.038413   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:28.041906   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:28.238014   26218 request.go:632] Waited for 195.403449ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:01:28.238083   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:01:28.238090   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:28.238099   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:28.238104   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:28.241379   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:28.241947   26218 pod_ready.go:93] pod "kube-controller-manager-ha-959539" in "kube-system" namespace has status "Ready":"True"
	I0924 00:01:28.241968   26218 pod_ready.go:82] duration metric: took 399.822644ms for pod "kube-controller-manager-ha-959539" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:28.241981   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-959539-m02" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:28.438107   26218 request.go:632] Waited for 196.054162ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-959539-m02
	I0924 00:01:28.438177   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-959539-m02
	I0924 00:01:28.438183   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:28.438190   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:28.438194   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:28.441695   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:28.637747   26218 request.go:632] Waited for 195.402574ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:28.637812   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:28.637820   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:28.637829   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:28.637836   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:28.641728   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:28.642165   26218 pod_ready.go:93] pod "kube-controller-manager-ha-959539-m02" in "kube-system" namespace has status "Ready":"True"
	I0924 00:01:28.642185   26218 pod_ready.go:82] duration metric: took 400.196003ms for pod "kube-controller-manager-ha-959539-m02" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:28.642198   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2hlqx" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:28.838364   26218 request.go:632] Waited for 196.098536ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2hlqx
	I0924 00:01:28.838423   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2hlqx
	I0924 00:01:28.838429   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:28.838440   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:28.838445   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:28.842064   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:29.038288   26218 request.go:632] Waited for 195.408876ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:29.038362   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:29.038367   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:29.038375   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:29.038380   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:29.041612   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:29.042184   26218 pod_ready.go:93] pod "kube-proxy-2hlqx" in "kube-system" namespace has status "Ready":"True"
	I0924 00:01:29.042207   26218 pod_ready.go:82] duration metric: took 400.003061ms for pod "kube-proxy-2hlqx" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:29.042217   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qzklc" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:29.238379   26218 request.go:632] Waited for 196.098313ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qzklc
	I0924 00:01:29.238479   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qzklc
	I0924 00:01:29.238489   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:29.238500   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:29.238510   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:29.241789   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:29.437898   26218 request.go:632] Waited for 195.388277ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:01:29.437950   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:01:29.437962   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:29.437970   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:29.437982   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:29.441497   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:29.442152   26218 pod_ready.go:93] pod "kube-proxy-qzklc" in "kube-system" namespace has status "Ready":"True"
	I0924 00:01:29.442170   26218 pod_ready.go:82] duration metric: took 399.946814ms for pod "kube-proxy-qzklc" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:29.442179   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-959539" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:29.638206   26218 request.go:632] Waited for 195.95793ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-959539
	I0924 00:01:29.638276   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-959539
	I0924 00:01:29.638285   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:29.638295   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:29.638300   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:29.641784   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:29.837816   26218 request.go:632] Waited for 195.394257ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:01:29.837907   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:01:29.837916   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:29.837926   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:29.837932   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:29.841128   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:29.841709   26218 pod_ready.go:93] pod "kube-scheduler-ha-959539" in "kube-system" namespace has status "Ready":"True"
	I0924 00:01:29.841729   26218 pod_ready.go:82] duration metric: took 399.544232ms for pod "kube-scheduler-ha-959539" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:29.841739   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-959539-m02" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:30.037891   26218 request.go:632] Waited for 196.07048ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-959539-m02
	I0924 00:01:30.037962   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-959539-m02
	I0924 00:01:30.037970   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:30.037980   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:30.037987   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:30.041465   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:30.237753   26218 request.go:632] Waited for 195.552862ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:30.237806   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:01:30.237812   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:30.237819   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:30.237823   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:30.240960   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:30.241506   26218 pod_ready.go:93] pod "kube-scheduler-ha-959539-m02" in "kube-system" namespace has status "Ready":"True"
	I0924 00:01:30.241525   26218 pod_ready.go:82] duration metric: took 399.780224ms for pod "kube-scheduler-ha-959539-m02" in "kube-system" namespace to be "Ready" ...
	I0924 00:01:30.241536   26218 pod_ready.go:39] duration metric: took 3.200205293s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 00:01:30.241549   26218 api_server.go:52] waiting for apiserver process to appear ...
	I0924 00:01:30.241608   26218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 00:01:30.261278   26218 api_server.go:72] duration metric: took 22.045614649s to wait for apiserver process to appear ...
	I0924 00:01:30.261301   26218 api_server.go:88] waiting for apiserver healthz status ...
	I0924 00:01:30.261325   26218 api_server.go:253] Checking apiserver healthz at https://192.168.39.231:8443/healthz ...
	I0924 00:01:30.266130   26218 api_server.go:279] https://192.168.39.231:8443/healthz returned 200:
	ok
	I0924 00:01:30.266207   26218 round_trippers.go:463] GET https://192.168.39.231:8443/version
	I0924 00:01:30.266217   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:30.266227   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:30.266234   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:30.267131   26218 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0924 00:01:30.267273   26218 api_server.go:141] control plane version: v1.31.1
	I0924 00:01:30.267296   26218 api_server.go:131] duration metric: took 5.986583ms to wait for apiserver health ...
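At this point the harness has verified two things: the raw /healthz endpoint returned "ok", and /version reported the expected control plane version v1.31.1. A minimal sketch of that same pair of checks through client-go's discovery client is shown below; it is illustrative only, and cs is an assumed kubernetes.Interface built from the profile's kubeconfig.

    // Sketch: the healthz and version checks recorded above.
    package example

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
    )

    func checkAPIServer(ctx context.Context, cs kubernetes.Interface) error {
        // GET /healthz; the body should be the literal string "ok".
        raw, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
        if err != nil {
            return err
        }
        fmt.Println("healthz:", string(raw))

        // GET /version; GitVersion should be v1.31.1 for this run.
        ver, err := cs.Discovery().ServerVersion()
        if err != nil {
            return err
        }
        fmt.Println("control plane version:", ver.GitVersion)
        return nil
    }
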
	I0924 00:01:30.267305   26218 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 00:01:30.437651   26218 request.go:632] Waited for 170.278154ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods
	I0924 00:01:30.437728   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods
	I0924 00:01:30.437734   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:30.437752   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:30.437756   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:30.443228   26218 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0924 00:01:30.447360   26218 system_pods.go:59] 17 kube-system pods found
	I0924 00:01:30.447395   26218 system_pods.go:61] "coredns-7c65d6cfc9-nkbzw" [79bbcdf6-3ae9-4c2f-9d73-a990a069864f] Running
	I0924 00:01:30.447400   26218 system_pods.go:61] "coredns-7c65d6cfc9-ss8lg" [37bd392b-d364-4a64-8fa0-852bb245aedc] Running
	I0924 00:01:30.447404   26218 system_pods.go:61] "etcd-ha-959539" [ff55eab1-1a4f-4adf-85c4-1ed8fa3ad1ec] Running
	I0924 00:01:30.447407   26218 system_pods.go:61] "etcd-ha-959539-m02" [c2dcc425-5c60-4865-9b78-1f2352fd1729] Running
	I0924 00:01:30.447410   26218 system_pods.go:61] "kindnet-cbrj7" [ad74ea31-a1ca-4632-b960-45e6de0fc117] Running
	I0924 00:01:30.447413   26218 system_pods.go:61] "kindnet-qlqss" [365f0414-b74d-42a8-be37-b0c8e03291ac] Running
	I0924 00:01:30.447417   26218 system_pods.go:61] "kube-apiserver-ha-959539" [2e15b758-6534-4b13-be16-42a2fd437b69] Running
	I0924 00:01:30.447420   26218 system_pods.go:61] "kube-apiserver-ha-959539-m02" [0ea9778e-f241-4c0d-9ea7-7e87bd667e10] Running
	I0924 00:01:30.447422   26218 system_pods.go:61] "kube-controller-manager-ha-959539" [b7da7091-f063-4f1a-bd0b-9f7136cd64a0] Running
	I0924 00:01:30.447427   26218 system_pods.go:61] "kube-controller-manager-ha-959539-m02" [29421b14-f01c-42dc-8c7d-b80cb32b9b7c] Running
	I0924 00:01:30.447430   26218 system_pods.go:61] "kube-proxy-2hlqx" [c8e003fb-d3d0-425f-bc83-55122ed658ce] Running
	I0924 00:01:30.447433   26218 system_pods.go:61] "kube-proxy-qzklc" [19af917f-9661-4577-92ed-8fc44b573c64] Running
	I0924 00:01:30.447436   26218 system_pods.go:61] "kube-scheduler-ha-959539" [25a457b1-578e-4e53-8201-e99c001d80bd] Running
	I0924 00:01:30.447439   26218 system_pods.go:61] "kube-scheduler-ha-959539-m02" [716521cc-aa0c-4507-97e5-126dccc95359] Running
	I0924 00:01:30.447442   26218 system_pods.go:61] "kube-vip-ha-959539" [f80705df-80fe-48f0-a65c-b4e414523bdf] Running
	I0924 00:01:30.447445   26218 system_pods.go:61] "kube-vip-ha-959539-m02" [6d055131-a622-4398-8f2f-0146b867e8f8] Running
	I0924 00:01:30.447448   26218 system_pods.go:61] "storage-provisioner" [3b7e0f07-8db9-4473-b3d2-c245c19d655b] Running
	I0924 00:01:30.447453   26218 system_pods.go:74] duration metric: took 180.140131ms to wait for pod list to return data ...
	I0924 00:01:30.447461   26218 default_sa.go:34] waiting for default service account to be created ...
	I0924 00:01:30.637950   26218 request.go:632] Waited for 190.394034ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/default/serviceaccounts
	I0924 00:01:30.638006   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/default/serviceaccounts
	I0924 00:01:30.638012   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:30.638022   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:30.638028   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:30.642084   26218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 00:01:30.642345   26218 default_sa.go:45] found service account: "default"
	I0924 00:01:30.642362   26218 default_sa.go:55] duration metric: took 194.895557ms for default service account to be created ...
	I0924 00:01:30.642370   26218 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 00:01:30.838482   26218 request.go:632] Waited for 196.04318ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods
	I0924 00:01:30.838565   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods
	I0924 00:01:30.838573   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:30.838585   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:30.838597   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:30.842832   26218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 00:01:30.848939   26218 system_pods.go:86] 17 kube-system pods found
	I0924 00:01:30.848970   26218 system_pods.go:89] "coredns-7c65d6cfc9-nkbzw" [79bbcdf6-3ae9-4c2f-9d73-a990a069864f] Running
	I0924 00:01:30.848979   26218 system_pods.go:89] "coredns-7c65d6cfc9-ss8lg" [37bd392b-d364-4a64-8fa0-852bb245aedc] Running
	I0924 00:01:30.848983   26218 system_pods.go:89] "etcd-ha-959539" [ff55eab1-1a4f-4adf-85c4-1ed8fa3ad1ec] Running
	I0924 00:01:30.848988   26218 system_pods.go:89] "etcd-ha-959539-m02" [c2dcc425-5c60-4865-9b78-1f2352fd1729] Running
	I0924 00:01:30.848991   26218 system_pods.go:89] "kindnet-cbrj7" [ad74ea31-a1ca-4632-b960-45e6de0fc117] Running
	I0924 00:01:30.848995   26218 system_pods.go:89] "kindnet-qlqss" [365f0414-b74d-42a8-be37-b0c8e03291ac] Running
	I0924 00:01:30.848999   26218 system_pods.go:89] "kube-apiserver-ha-959539" [2e15b758-6534-4b13-be16-42a2fd437b69] Running
	I0924 00:01:30.849002   26218 system_pods.go:89] "kube-apiserver-ha-959539-m02" [0ea9778e-f241-4c0d-9ea7-7e87bd667e10] Running
	I0924 00:01:30.849006   26218 system_pods.go:89] "kube-controller-manager-ha-959539" [b7da7091-f063-4f1a-bd0b-9f7136cd64a0] Running
	I0924 00:01:30.849009   26218 system_pods.go:89] "kube-controller-manager-ha-959539-m02" [29421b14-f01c-42dc-8c7d-b80cb32b9b7c] Running
	I0924 00:01:30.849014   26218 system_pods.go:89] "kube-proxy-2hlqx" [c8e003fb-d3d0-425f-bc83-55122ed658ce] Running
	I0924 00:01:30.849019   26218 system_pods.go:89] "kube-proxy-qzklc" [19af917f-9661-4577-92ed-8fc44b573c64] Running
	I0924 00:01:30.849023   26218 system_pods.go:89] "kube-scheduler-ha-959539" [25a457b1-578e-4e53-8201-e99c001d80bd] Running
	I0924 00:01:30.849027   26218 system_pods.go:89] "kube-scheduler-ha-959539-m02" [716521cc-aa0c-4507-97e5-126dccc95359] Running
	I0924 00:01:30.849031   26218 system_pods.go:89] "kube-vip-ha-959539" [f80705df-80fe-48f0-a65c-b4e414523bdf] Running
	I0924 00:01:30.849034   26218 system_pods.go:89] "kube-vip-ha-959539-m02" [6d055131-a622-4398-8f2f-0146b867e8f8] Running
	I0924 00:01:30.849039   26218 system_pods.go:89] "storage-provisioner" [3b7e0f07-8db9-4473-b3d2-c245c19d655b] Running
	I0924 00:01:30.849049   26218 system_pods.go:126] duration metric: took 206.674401ms to wait for k8s-apps to be running ...
	I0924 00:01:30.849059   26218 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 00:01:30.849103   26218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 00:01:30.865711   26218 system_svc.go:56] duration metric: took 16.641461ms WaitForService to wait for kubelet
	I0924 00:01:30.865749   26218 kubeadm.go:582] duration metric: took 22.650087813s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 00:01:30.865771   26218 node_conditions.go:102] verifying NodePressure condition ...
	I0924 00:01:31.038193   26218 request.go:632] Waited for 172.328437ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes
	I0924 00:01:31.038258   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes
	I0924 00:01:31.038266   26218 round_trippers.go:469] Request Headers:
	I0924 00:01:31.038277   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:01:31.038283   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:01:31.042103   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:01:31.042950   26218 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 00:01:31.042977   26218 node_conditions.go:123] node cpu capacity is 2
	I0924 00:01:31.042995   26218 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 00:01:31.042998   26218 node_conditions.go:123] node cpu capacity is 2
	I0924 00:01:31.043002   26218 node_conditions.go:105] duration metric: took 177.226673ms to run NodePressure ...
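The NodePressure step above simply reads each node's reported capacity (two nodes, each with 17734596Ki of ephemeral storage and 2 CPUs) from the Nodes API. As a hedged illustration only, not minikube's own code, the same fields can be read with client-go; the kubeconfig path below is an assumption.

// nodecap.go - hedged sketch, not minikube source: list nodes and print the
// same capacity fields the log reports (ephemeral storage, CPU).
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; minikube manages its own under the profile dir.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, eph.String(), cpu.String())
	}
}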
	I0924 00:01:31.043015   26218 start.go:241] waiting for startup goroutines ...
	I0924 00:01:31.043037   26218 start.go:255] writing updated cluster config ...
	I0924 00:01:31.044981   26218 out.go:201] 
	I0924 00:01:31.046376   26218 config.go:182] Loaded profile config "ha-959539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 00:01:31.046461   26218 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/config.json ...
	I0924 00:01:31.048054   26218 out.go:177] * Starting "ha-959539-m03" control-plane node in "ha-959539" cluster
	I0924 00:01:31.049402   26218 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 00:01:31.049432   26218 cache.go:56] Caching tarball of preloaded images
	I0924 00:01:31.049548   26218 preload.go:172] Found /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0924 00:01:31.049578   26218 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0924 00:01:31.049684   26218 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/config.json ...
	I0924 00:01:31.049896   26218 start.go:360] acquireMachinesLock for ha-959539-m03: {Name:mkdd0eb053efc5a47fd01c8410d7f603dccb8d0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 00:01:31.049951   26218 start.go:364] duration metric: took 34.777µs to acquireMachinesLock for "ha-959539-m03"
	I0924 00:01:31.049975   26218 start.go:93] Provisioning new machine with config: &{Name:ha-959539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-959539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.71 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 00:01:31.050075   26218 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0924 00:01:31.051498   26218 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0924 00:01:31.051601   26218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:01:31.051641   26218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:01:31.066868   26218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36765
	I0924 00:01:31.067407   26218 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:01:31.067856   26218 main.go:141] libmachine: Using API Version  1
	I0924 00:01:31.067875   26218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:01:31.068226   26218 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:01:31.068427   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetMachineName
	I0924 00:01:31.068578   26218 main.go:141] libmachine: (ha-959539-m03) Calling .DriverName
	I0924 00:01:31.068733   26218 start.go:159] libmachine.API.Create for "ha-959539" (driver="kvm2")
	I0924 00:01:31.068760   26218 client.go:168] LocalClient.Create starting
	I0924 00:01:31.068788   26218 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem
	I0924 00:01:31.068825   26218 main.go:141] libmachine: Decoding PEM data...
	I0924 00:01:31.068839   26218 main.go:141] libmachine: Parsing certificate...
	I0924 00:01:31.068884   26218 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem
	I0924 00:01:31.068903   26218 main.go:141] libmachine: Decoding PEM data...
	I0924 00:01:31.068913   26218 main.go:141] libmachine: Parsing certificate...
	I0924 00:01:31.068925   26218 main.go:141] libmachine: Running pre-create checks...
	I0924 00:01:31.068932   26218 main.go:141] libmachine: (ha-959539-m03) Calling .PreCreateCheck
	I0924 00:01:31.069147   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetConfigRaw
	I0924 00:01:31.069509   26218 main.go:141] libmachine: Creating machine...
	I0924 00:01:31.069521   26218 main.go:141] libmachine: (ha-959539-m03) Calling .Create
	I0924 00:01:31.069666   26218 main.go:141] libmachine: (ha-959539-m03) Creating KVM machine...
	I0924 00:01:31.071131   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found existing default KVM network
	I0924 00:01:31.071307   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found existing private KVM network mk-ha-959539
	I0924 00:01:31.071526   26218 main.go:141] libmachine: (ha-959539-m03) Setting up store path in /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m03 ...
	I0924 00:01:31.071549   26218 main.go:141] libmachine: (ha-959539-m03) Building disk image from file:///home/jenkins/minikube-integration/19696-7623/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0924 00:01:31.071644   26218 main.go:141] libmachine: (ha-959539-m03) DBG | I0924 00:01:31.071506   26982 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19696-7623/.minikube
	I0924 00:01:31.071719   26218 main.go:141] libmachine: (ha-959539-m03) Downloading /home/jenkins/minikube-integration/19696-7623/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19696-7623/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0924 00:01:31.300380   26218 main.go:141] libmachine: (ha-959539-m03) DBG | I0924 00:01:31.300219   26982 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m03/id_rsa...
	I0924 00:01:31.604410   26218 main.go:141] libmachine: (ha-959539-m03) DBG | I0924 00:01:31.604272   26982 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m03/ha-959539-m03.rawdisk...
	I0924 00:01:31.604443   26218 main.go:141] libmachine: (ha-959539-m03) DBG | Writing magic tar header
	I0924 00:01:31.604464   26218 main.go:141] libmachine: (ha-959539-m03) DBG | Writing SSH key tar header
	I0924 00:01:31.604477   26218 main.go:141] libmachine: (ha-959539-m03) DBG | I0924 00:01:31.604403   26982 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m03 ...
	I0924 00:01:31.604563   26218 main.go:141] libmachine: (ha-959539-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m03
	I0924 00:01:31.604595   26218 main.go:141] libmachine: (ha-959539-m03) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m03 (perms=drwx------)
	I0924 00:01:31.604614   26218 main.go:141] libmachine: (ha-959539-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623/.minikube/machines
	I0924 00:01:31.604630   26218 main.go:141] libmachine: (ha-959539-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623/.minikube
	I0924 00:01:31.604641   26218 main.go:141] libmachine: (ha-959539-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623
	I0924 00:01:31.604654   26218 main.go:141] libmachine: (ha-959539-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0924 00:01:31.604668   26218 main.go:141] libmachine: (ha-959539-m03) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623/.minikube/machines (perms=drwxr-xr-x)
	I0924 00:01:31.604679   26218 main.go:141] libmachine: (ha-959539-m03) DBG | Checking permissions on dir: /home/jenkins
	I0924 00:01:31.604689   26218 main.go:141] libmachine: (ha-959539-m03) DBG | Checking permissions on dir: /home
	I0924 00:01:31.604701   26218 main.go:141] libmachine: (ha-959539-m03) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623/.minikube (perms=drwxr-xr-x)
	I0924 00:01:31.604718   26218 main.go:141] libmachine: (ha-959539-m03) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623 (perms=drwxrwxr-x)
	I0924 00:01:31.604730   26218 main.go:141] libmachine: (ha-959539-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0924 00:01:31.604746   26218 main.go:141] libmachine: (ha-959539-m03) DBG | Skipping /home - not owner
	I0924 00:01:31.604758   26218 main.go:141] libmachine: (ha-959539-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0924 00:01:31.604771   26218 main.go:141] libmachine: (ha-959539-m03) Creating domain...
	I0924 00:01:31.605736   26218 main.go:141] libmachine: (ha-959539-m03) define libvirt domain using xml: 
	I0924 00:01:31.605756   26218 main.go:141] libmachine: (ha-959539-m03) <domain type='kvm'>
	I0924 00:01:31.605766   26218 main.go:141] libmachine: (ha-959539-m03)   <name>ha-959539-m03</name>
	I0924 00:01:31.605777   26218 main.go:141] libmachine: (ha-959539-m03)   <memory unit='MiB'>2200</memory>
	I0924 00:01:31.605784   26218 main.go:141] libmachine: (ha-959539-m03)   <vcpu>2</vcpu>
	I0924 00:01:31.605794   26218 main.go:141] libmachine: (ha-959539-m03)   <features>
	I0924 00:01:31.605802   26218 main.go:141] libmachine: (ha-959539-m03)     <acpi/>
	I0924 00:01:31.605808   26218 main.go:141] libmachine: (ha-959539-m03)     <apic/>
	I0924 00:01:31.605816   26218 main.go:141] libmachine: (ha-959539-m03)     <pae/>
	I0924 00:01:31.605822   26218 main.go:141] libmachine: (ha-959539-m03)     
	I0924 00:01:31.605829   26218 main.go:141] libmachine: (ha-959539-m03)   </features>
	I0924 00:01:31.605840   26218 main.go:141] libmachine: (ha-959539-m03)   <cpu mode='host-passthrough'>
	I0924 00:01:31.605848   26218 main.go:141] libmachine: (ha-959539-m03)   
	I0924 00:01:31.605857   26218 main.go:141] libmachine: (ha-959539-m03)   </cpu>
	I0924 00:01:31.605887   26218 main.go:141] libmachine: (ha-959539-m03)   <os>
	I0924 00:01:31.605911   26218 main.go:141] libmachine: (ha-959539-m03)     <type>hvm</type>
	I0924 00:01:31.605921   26218 main.go:141] libmachine: (ha-959539-m03)     <boot dev='cdrom'/>
	I0924 00:01:31.605928   26218 main.go:141] libmachine: (ha-959539-m03)     <boot dev='hd'/>
	I0924 00:01:31.605940   26218 main.go:141] libmachine: (ha-959539-m03)     <bootmenu enable='no'/>
	I0924 00:01:31.605950   26218 main.go:141] libmachine: (ha-959539-m03)   </os>
	I0924 00:01:31.605957   26218 main.go:141] libmachine: (ha-959539-m03)   <devices>
	I0924 00:01:31.605968   26218 main.go:141] libmachine: (ha-959539-m03)     <disk type='file' device='cdrom'>
	I0924 00:01:31.605980   26218 main.go:141] libmachine: (ha-959539-m03)       <source file='/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m03/boot2docker.iso'/>
	I0924 00:01:31.606000   26218 main.go:141] libmachine: (ha-959539-m03)       <target dev='hdc' bus='scsi'/>
	I0924 00:01:31.606012   26218 main.go:141] libmachine: (ha-959539-m03)       <readonly/>
	I0924 00:01:31.606020   26218 main.go:141] libmachine: (ha-959539-m03)     </disk>
	I0924 00:01:31.606029   26218 main.go:141] libmachine: (ha-959539-m03)     <disk type='file' device='disk'>
	I0924 00:01:31.606038   26218 main.go:141] libmachine: (ha-959539-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0924 00:01:31.606049   26218 main.go:141] libmachine: (ha-959539-m03)       <source file='/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m03/ha-959539-m03.rawdisk'/>
	I0924 00:01:31.606056   26218 main.go:141] libmachine: (ha-959539-m03)       <target dev='hda' bus='virtio'/>
	I0924 00:01:31.606063   26218 main.go:141] libmachine: (ha-959539-m03)     </disk>
	I0924 00:01:31.606074   26218 main.go:141] libmachine: (ha-959539-m03)     <interface type='network'>
	I0924 00:01:31.606086   26218 main.go:141] libmachine: (ha-959539-m03)       <source network='mk-ha-959539'/>
	I0924 00:01:31.606092   26218 main.go:141] libmachine: (ha-959539-m03)       <model type='virtio'/>
	I0924 00:01:31.606103   26218 main.go:141] libmachine: (ha-959539-m03)     </interface>
	I0924 00:01:31.606118   26218 main.go:141] libmachine: (ha-959539-m03)     <interface type='network'>
	I0924 00:01:31.606130   26218 main.go:141] libmachine: (ha-959539-m03)       <source network='default'/>
	I0924 00:01:31.606140   26218 main.go:141] libmachine: (ha-959539-m03)       <model type='virtio'/>
	I0924 00:01:31.606179   26218 main.go:141] libmachine: (ha-959539-m03)     </interface>
	I0924 00:01:31.606200   26218 main.go:141] libmachine: (ha-959539-m03)     <serial type='pty'>
	I0924 00:01:31.606212   26218 main.go:141] libmachine: (ha-959539-m03)       <target port='0'/>
	I0924 00:01:31.606222   26218 main.go:141] libmachine: (ha-959539-m03)     </serial>
	I0924 00:01:31.606234   26218 main.go:141] libmachine: (ha-959539-m03)     <console type='pty'>
	I0924 00:01:31.606244   26218 main.go:141] libmachine: (ha-959539-m03)       <target type='serial' port='0'/>
	I0924 00:01:31.606252   26218 main.go:141] libmachine: (ha-959539-m03)     </console>
	I0924 00:01:31.606259   26218 main.go:141] libmachine: (ha-959539-m03)     <rng model='virtio'>
	I0924 00:01:31.606268   26218 main.go:141] libmachine: (ha-959539-m03)       <backend model='random'>/dev/random</backend>
	I0924 00:01:31.606286   26218 main.go:141] libmachine: (ha-959539-m03)     </rng>
	I0924 00:01:31.606292   26218 main.go:141] libmachine: (ha-959539-m03)     
	I0924 00:01:31.606297   26218 main.go:141] libmachine: (ha-959539-m03)     
	I0924 00:01:31.606304   26218 main.go:141] libmachine: (ha-959539-m03)   </devices>
	I0924 00:01:31.606310   26218 main.go:141] libmachine: (ha-959539-m03) </domain>
	I0924 00:01:31.606319   26218 main.go:141] libmachine: (ha-959539-m03) 
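The XML dumped above is handed to libvirt to define and boot the guest. Purely as a hedged sketch, using the libvirt.org/go/libvirt bindings directly rather than minikube's kvm2 docker-machine driver, the define-and-create step looks roughly like this; the XML filename is an assumption.

// definedomain.go - hedged sketch; minikube's kvm2 driver wraps equivalent calls.
package main

import (
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// Assumed filename holding the <domain> XML printed in the log above.
	xml, err := os.ReadFile("ha-959539-m03.xml")
	if err != nil {
		log.Fatal(err)
	}
	conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(string(xml)) // "define libvirt domain using xml"
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // "Creating domain..."
		log.Fatal(err)
	}
	log.Println("domain started; next step is waiting for a DHCP lease")
}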
	I0924 00:01:31.613294   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:e5:53:3a in network default
	I0924 00:01:31.613858   26218 main.go:141] libmachine: (ha-959539-m03) Ensuring networks are active...
	I0924 00:01:31.613884   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:31.614594   26218 main.go:141] libmachine: (ha-959539-m03) Ensuring network default is active
	I0924 00:01:31.614852   26218 main.go:141] libmachine: (ha-959539-m03) Ensuring network mk-ha-959539 is active
	I0924 00:01:31.615281   26218 main.go:141] libmachine: (ha-959539-m03) Getting domain xml...
	I0924 00:01:31.616154   26218 main.go:141] libmachine: (ha-959539-m03) Creating domain...
	I0924 00:01:32.869701   26218 main.go:141] libmachine: (ha-959539-m03) Waiting to get IP...
	I0924 00:01:32.870597   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:32.871006   26218 main.go:141] libmachine: (ha-959539-m03) DBG | unable to find current IP address of domain ha-959539-m03 in network mk-ha-959539
	I0924 00:01:32.871035   26218 main.go:141] libmachine: (ha-959539-m03) DBG | I0924 00:01:32.870993   26982 retry.go:31] will retry after 233.012319ms: waiting for machine to come up
	I0924 00:01:33.105550   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:33.105977   26218 main.go:141] libmachine: (ha-959539-m03) DBG | unable to find current IP address of domain ha-959539-m03 in network mk-ha-959539
	I0924 00:01:33.106051   26218 main.go:141] libmachine: (ha-959539-m03) DBG | I0924 00:01:33.105911   26982 retry.go:31] will retry after 379.213431ms: waiting for machine to come up
	I0924 00:01:33.486484   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:33.487004   26218 main.go:141] libmachine: (ha-959539-m03) DBG | unable to find current IP address of domain ha-959539-m03 in network mk-ha-959539
	I0924 00:01:33.487032   26218 main.go:141] libmachine: (ha-959539-m03) DBG | I0924 00:01:33.486952   26982 retry.go:31] will retry after 425.287824ms: waiting for machine to come up
	I0924 00:01:33.913409   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:33.913794   26218 main.go:141] libmachine: (ha-959539-m03) DBG | unable to find current IP address of domain ha-959539-m03 in network mk-ha-959539
	I0924 00:01:33.913822   26218 main.go:141] libmachine: (ha-959539-m03) DBG | I0924 00:01:33.913744   26982 retry.go:31] will retry after 517.327433ms: waiting for machine to come up
	I0924 00:01:34.432365   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:34.432967   26218 main.go:141] libmachine: (ha-959539-m03) DBG | unable to find current IP address of domain ha-959539-m03 in network mk-ha-959539
	I0924 00:01:34.432990   26218 main.go:141] libmachine: (ha-959539-m03) DBG | I0924 00:01:34.432933   26982 retry.go:31] will retry after 602.673221ms: waiting for machine to come up
	I0924 00:01:35.036831   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:35.037345   26218 main.go:141] libmachine: (ha-959539-m03) DBG | unable to find current IP address of domain ha-959539-m03 in network mk-ha-959539
	I0924 00:01:35.037375   26218 main.go:141] libmachine: (ha-959539-m03) DBG | I0924 00:01:35.037323   26982 retry.go:31] will retry after 797.600229ms: waiting for machine to come up
	I0924 00:01:35.836744   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:35.837147   26218 main.go:141] libmachine: (ha-959539-m03) DBG | unable to find current IP address of domain ha-959539-m03 in network mk-ha-959539
	I0924 00:01:35.837167   26218 main.go:141] libmachine: (ha-959539-m03) DBG | I0924 00:01:35.837118   26982 retry.go:31] will retry after 961.577188ms: waiting for machine to come up
	I0924 00:01:36.800289   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:36.800667   26218 main.go:141] libmachine: (ha-959539-m03) DBG | unable to find current IP address of domain ha-959539-m03 in network mk-ha-959539
	I0924 00:01:36.800730   26218 main.go:141] libmachine: (ha-959539-m03) DBG | I0924 00:01:36.800639   26982 retry.go:31] will retry after 936.999629ms: waiting for machine to come up
	I0924 00:01:37.740480   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:37.740978   26218 main.go:141] libmachine: (ha-959539-m03) DBG | unable to find current IP address of domain ha-959539-m03 in network mk-ha-959539
	I0924 00:01:37.741002   26218 main.go:141] libmachine: (ha-959539-m03) DBG | I0924 00:01:37.740949   26982 retry.go:31] will retry after 1.346163433s: waiting for machine to come up
	I0924 00:01:39.089423   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:39.089867   26218 main.go:141] libmachine: (ha-959539-m03) DBG | unable to find current IP address of domain ha-959539-m03 in network mk-ha-959539
	I0924 00:01:39.089892   26218 main.go:141] libmachine: (ha-959539-m03) DBG | I0924 00:01:39.089852   26982 retry.go:31] will retry after 1.874406909s: waiting for machine to come up
	I0924 00:01:40.965400   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:40.965872   26218 main.go:141] libmachine: (ha-959539-m03) DBG | unable to find current IP address of domain ha-959539-m03 in network mk-ha-959539
	I0924 00:01:40.965892   26218 main.go:141] libmachine: (ha-959539-m03) DBG | I0924 00:01:40.965827   26982 retry.go:31] will retry after 2.811212351s: waiting for machine to come up
	I0924 00:01:43.780398   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:43.780984   26218 main.go:141] libmachine: (ha-959539-m03) DBG | unable to find current IP address of domain ha-959539-m03 in network mk-ha-959539
	I0924 00:01:43.781006   26218 main.go:141] libmachine: (ha-959539-m03) DBG | I0924 00:01:43.780942   26982 retry.go:31] will retry after 2.831259444s: waiting for machine to come up
	I0924 00:01:46.613330   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:46.613716   26218 main.go:141] libmachine: (ha-959539-m03) DBG | unable to find current IP address of domain ha-959539-m03 in network mk-ha-959539
	I0924 00:01:46.613743   26218 main.go:141] libmachine: (ha-959539-m03) DBG | I0924 00:01:46.613670   26982 retry.go:31] will retry after 4.008768327s: waiting for machine to come up
	I0924 00:01:50.626829   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:50.627309   26218 main.go:141] libmachine: (ha-959539-m03) DBG | unable to find current IP address of domain ha-959539-m03 in network mk-ha-959539
	I0924 00:01:50.627329   26218 main.go:141] libmachine: (ha-959539-m03) DBG | I0924 00:01:50.627284   26982 retry.go:31] will retry after 5.442842747s: waiting for machine to come up
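Note how the retry intervals above grow from roughly 233ms to several seconds: the driver polls for a DHCP lease on the guest's MAC with an increasing, jittered backoff until the address appears. A minimal generic sketch of that pattern (not the actual retry.go helper; lookupIP is a hypothetical callback) is:

// leasewait.go - hedged sketch of the "will retry after ..." loop above.
// lookupIP is a hypothetical callback that returns "" until libvirt reports
// a DHCP lease for the given MAC address.
package kvmutil

import (
	"fmt"
	"math/rand"
	"time"
)

func waitForLease(lookupIP func(mac string) string, mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	wait := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip := lookupIP(mac); ip != "" {
			return ip, nil
		}
		// Sleep with a little jitter, then grow the interval, roughly matching
		// the cadence in the log (hundreds of ms up to several seconds).
		time.Sleep(wait + time.Duration(rand.Int63n(int64(wait/2))))
		if wait < 8*time.Second {
			wait *= 2
		}
	}
	return "", fmt.Errorf("no DHCP lease for %s within %s", mac, timeout)
}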
	I0924 00:01:56.073321   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:56.073934   26218 main.go:141] libmachine: (ha-959539-m03) Found IP for machine: 192.168.39.244
	I0924 00:01:56.073959   26218 main.go:141] libmachine: (ha-959539-m03) Reserving static IP address...
	I0924 00:01:56.073972   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has current primary IP address 192.168.39.244 and MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:56.074620   26218 main.go:141] libmachine: (ha-959539-m03) DBG | unable to find host DHCP lease matching {name: "ha-959539-m03", mac: "52:54:00:b3:b3:10", ip: "192.168.39.244"} in network mk-ha-959539
	I0924 00:01:56.148126   26218 main.go:141] libmachine: (ha-959539-m03) DBG | Getting to WaitForSSH function...
	I0924 00:01:56.148154   26218 main.go:141] libmachine: (ha-959539-m03) Reserved static IP address: 192.168.39.244
	I0924 00:01:56.148166   26218 main.go:141] libmachine: (ha-959539-m03) Waiting for SSH to be available...
	I0924 00:01:56.150613   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:56.150941   26218 main.go:141] libmachine: (ha-959539-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:b3:b3:10", ip: ""} in network mk-ha-959539
	I0924 00:01:56.150968   26218 main.go:141] libmachine: (ha-959539-m03) DBG | unable to find defined IP address of network mk-ha-959539 interface with MAC address 52:54:00:b3:b3:10
	I0924 00:01:56.151093   26218 main.go:141] libmachine: (ha-959539-m03) DBG | Using SSH client type: external
	I0924 00:01:56.151120   26218 main.go:141] libmachine: (ha-959539-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m03/id_rsa (-rw-------)
	I0924 00:01:56.151154   26218 main.go:141] libmachine: (ha-959539-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 00:01:56.151177   26218 main.go:141] libmachine: (ha-959539-m03) DBG | About to run SSH command:
	I0924 00:01:56.151208   26218 main.go:141] libmachine: (ha-959539-m03) DBG | exit 0
	I0924 00:01:56.154778   26218 main.go:141] libmachine: (ha-959539-m03) DBG | SSH cmd err, output: exit status 255: 
	I0924 00:01:56.154798   26218 main.go:141] libmachine: (ha-959539-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0924 00:01:56.154804   26218 main.go:141] libmachine: (ha-959539-m03) DBG | command : exit 0
	I0924 00:01:56.154809   26218 main.go:141] libmachine: (ha-959539-m03) DBG | err     : exit status 255
	I0924 00:01:56.154815   26218 main.go:141] libmachine: (ha-959539-m03) DBG | output  : 
	I0924 00:01:59.156489   26218 main.go:141] libmachine: (ha-959539-m03) DBG | Getting to WaitForSSH function...
	I0924 00:01:59.159051   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:59.159534   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:b3:10", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:01:45 +0000 UTC Type:0 Mac:52:54:00:b3:b3:10 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-959539-m03 Clientid:01:52:54:00:b3:b3:10}
	I0924 00:01:59.159562   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:59.159701   26218 main.go:141] libmachine: (ha-959539-m03) DBG | Using SSH client type: external
	I0924 00:01:59.159729   26218 main.go:141] libmachine: (ha-959539-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m03/id_rsa (-rw-------)
	I0924 00:01:59.159765   26218 main.go:141] libmachine: (ha-959539-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.244 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 00:01:59.159777   26218 main.go:141] libmachine: (ha-959539-m03) DBG | About to run SSH command:
	I0924 00:01:59.159792   26218 main.go:141] libmachine: (ha-959539-m03) DBG | exit 0
	I0924 00:01:59.281025   26218 main.go:141] libmachine: (ha-959539-m03) DBG | SSH cmd err, output: <nil>: 
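WaitForSSH simply retries `exit 0` over SSH until the command succeeds, which is why the log shows one attempt failing with exit status 255 before the clean run above. As a hedged sketch (the log shows minikube shelling out to the external /usr/bin/ssh client; this version uses golang.org/x/crypto/ssh instead, with the address and key path taken from the log):

// sshprobe.go - hedged sketch of the WaitForSSH probe.
package main

import (
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m03/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no above
		Timeout:         10 * time.Second,
	}
	for {
		client, err := ssh.Dial("tcp", "192.168.39.244:22", cfg)
		if err == nil {
			sess, serr := client.NewSession()
			if serr == nil {
				runErr := sess.Run("exit 0") // the same probe command as in the log
				sess.Close()
				if runErr == nil {
					client.Close()
					log.Println("SSH is available")
					return
				}
			}
			client.Close()
		}
		time.Sleep(3 * time.Second) // retry until the guest's sshd answers
	}
}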
	I0924 00:01:59.281279   26218 main.go:141] libmachine: (ha-959539-m03) KVM machine creation complete!
	I0924 00:01:59.281741   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetConfigRaw
	I0924 00:01:59.282322   26218 main.go:141] libmachine: (ha-959539-m03) Calling .DriverName
	I0924 00:01:59.282554   26218 main.go:141] libmachine: (ha-959539-m03) Calling .DriverName
	I0924 00:01:59.282757   26218 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0924 00:01:59.282778   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetState
	I0924 00:01:59.284086   26218 main.go:141] libmachine: Detecting operating system of created instance...
	I0924 00:01:59.284107   26218 main.go:141] libmachine: Waiting for SSH to be available...
	I0924 00:01:59.284112   26218 main.go:141] libmachine: Getting to WaitForSSH function...
	I0924 00:01:59.284118   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHHostname
	I0924 00:01:59.286743   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:59.287263   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:b3:10", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:01:45 +0000 UTC Type:0 Mac:52:54:00:b3:b3:10 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-959539-m03 Clientid:01:52:54:00:b3:b3:10}
	I0924 00:01:59.287293   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:59.287431   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHPort
	I0924 00:01:59.287597   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHKeyPath
	I0924 00:01:59.287746   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHKeyPath
	I0924 00:01:59.287899   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHUsername
	I0924 00:01:59.288060   26218 main.go:141] libmachine: Using SSH client type: native
	I0924 00:01:59.288359   26218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0924 00:01:59.288379   26218 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0924 00:01:59.383651   26218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 00:01:59.383678   26218 main.go:141] libmachine: Detecting the provisioner...
	I0924 00:01:59.383688   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHHostname
	I0924 00:01:59.386650   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:59.387045   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:b3:10", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:01:45 +0000 UTC Type:0 Mac:52:54:00:b3:b3:10 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-959539-m03 Clientid:01:52:54:00:b3:b3:10}
	I0924 00:01:59.387065   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:59.387209   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHPort
	I0924 00:01:59.387419   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHKeyPath
	I0924 00:01:59.387618   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHKeyPath
	I0924 00:01:59.387773   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHUsername
	I0924 00:01:59.387925   26218 main.go:141] libmachine: Using SSH client type: native
	I0924 00:01:59.388113   26218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0924 00:01:59.388127   26218 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0924 00:01:59.485025   26218 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0924 00:01:59.485108   26218 main.go:141] libmachine: found compatible host: buildroot
	I0924 00:01:59.485117   26218 main.go:141] libmachine: Provisioning with buildroot...
	I0924 00:01:59.485124   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetMachineName
	I0924 00:01:59.485390   26218 buildroot.go:166] provisioning hostname "ha-959539-m03"
	I0924 00:01:59.485417   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetMachineName
	I0924 00:01:59.485578   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHHostname
	I0924 00:01:59.487705   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:59.488135   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:b3:10", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:01:45 +0000 UTC Type:0 Mac:52:54:00:b3:b3:10 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-959539-m03 Clientid:01:52:54:00:b3:b3:10}
	I0924 00:01:59.488163   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:59.488390   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHPort
	I0924 00:01:59.488541   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHKeyPath
	I0924 00:01:59.488687   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHKeyPath
	I0924 00:01:59.488842   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHUsername
	I0924 00:01:59.489001   26218 main.go:141] libmachine: Using SSH client type: native
	I0924 00:01:59.489173   26218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0924 00:01:59.489184   26218 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-959539-m03 && echo "ha-959539-m03" | sudo tee /etc/hostname
	I0924 00:01:59.598289   26218 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-959539-m03
	
	I0924 00:01:59.598334   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHHostname
	I0924 00:01:59.601336   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:59.601720   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:b3:10", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:01:45 +0000 UTC Type:0 Mac:52:54:00:b3:b3:10 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-959539-m03 Clientid:01:52:54:00:b3:b3:10}
	I0924 00:01:59.601752   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:59.601887   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHPort
	I0924 00:01:59.602080   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHKeyPath
	I0924 00:01:59.602282   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHKeyPath
	I0924 00:01:59.602440   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHUsername
	I0924 00:01:59.602632   26218 main.go:141] libmachine: Using SSH client type: native
	I0924 00:01:59.602835   26218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0924 00:01:59.602851   26218 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-959539-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-959539-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-959539-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 00:01:59.709318   26218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 00:01:59.709354   26218 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19696-7623/.minikube CaCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19696-7623/.minikube}
	I0924 00:01:59.709368   26218 buildroot.go:174] setting up certificates
	I0924 00:01:59.709376   26218 provision.go:84] configureAuth start
	I0924 00:01:59.709384   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetMachineName
	I0924 00:01:59.709684   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetIP
	I0924 00:01:59.712295   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:59.712675   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:b3:10", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:01:45 +0000 UTC Type:0 Mac:52:54:00:b3:b3:10 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-959539-m03 Clientid:01:52:54:00:b3:b3:10}
	I0924 00:01:59.712707   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:59.712820   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHHostname
	I0924 00:01:59.715173   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:59.715598   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:b3:10", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:01:45 +0000 UTC Type:0 Mac:52:54:00:b3:b3:10 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-959539-m03 Clientid:01:52:54:00:b3:b3:10}
	I0924 00:01:59.715627   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:01:59.715766   26218 provision.go:143] copyHostCerts
	I0924 00:01:59.715804   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0924 00:01:59.715840   26218 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem, removing ...
	I0924 00:01:59.715850   26218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0924 00:01:59.715947   26218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem (1082 bytes)
	I0924 00:01:59.716026   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0924 00:01:59.716046   26218 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem, removing ...
	I0924 00:01:59.716054   26218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0924 00:01:59.716080   26218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem (1123 bytes)
	I0924 00:01:59.716129   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0924 00:01:59.716149   26218 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem, removing ...
	I0924 00:01:59.716156   26218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0924 00:01:59.716181   26218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem (1679 bytes)
	I0924 00:01:59.716234   26218 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem org=jenkins.ha-959539-m03 san=[127.0.0.1 192.168.39.244 ha-959539-m03 localhost minikube]
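The server certificate generated above is a standard x509 leaf signed by the profile CA, with the node IP and hostnames from the log line as SANs. The following crypto/x509 sketch is hedged: it is not minikube's provision code, it assumes a PEM-wrapped PKCS#1 RSA CA key, and the cert paths are abbreviated.

// servercert.go - hedged sketch of "generating server cert ... san=[...]".
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func mustRead(path string) []byte {
	b, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	return b
}

func main() {
	caBlock, _ := pem.Decode(mustRead(".minikube/certs/ca.pem")) // paths abbreviated
	caKeyBlock, _ := pem.Decode(mustRead(".minikube/certs/ca-key.pem"))
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(caKeyBlock.Bytes) // assumed key format
	if err != nil {
		log.Fatal(err)
	}

	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-959539-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration:26280h0m0s above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs exactly as listed in the log line.
		DNSNames:    []string{"ha-959539-m03", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.244")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	// A caller would PEM-encode der into server.pem and the key into server-key.pem.
	pemBytes := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	log.Printf("generated %d bytes of PEM", len(pemBytes))
}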
	I0924 00:02:00.004700   26218 provision.go:177] copyRemoteCerts
	I0924 00:02:00.004758   26218 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 00:02:00.004780   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHHostname
	I0924 00:02:00.008103   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:00.008547   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:b3:10", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:01:45 +0000 UTC Type:0 Mac:52:54:00:b3:b3:10 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-959539-m03 Clientid:01:52:54:00:b3:b3:10}
	I0924 00:02:00.008578   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:00.008786   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHPort
	I0924 00:02:00.008992   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHKeyPath
	I0924 00:02:00.009141   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHUsername
	I0924 00:02:00.009273   26218 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m03/id_rsa Username:docker}
	I0924 00:02:00.090471   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0924 00:02:00.090557   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0924 00:02:00.113842   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0924 00:02:00.113915   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0924 00:02:00.136379   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0924 00:02:00.136447   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0924 00:02:00.158911   26218 provision.go:87] duration metric: took 449.525192ms to configureAuth
	I0924 00:02:00.158938   26218 buildroot.go:189] setting minikube options for container-runtime
	I0924 00:02:00.159116   26218 config.go:182] Loaded profile config "ha-959539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 00:02:00.159181   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHHostname
	I0924 00:02:00.161958   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:00.162260   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:b3:10", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:01:45 +0000 UTC Type:0 Mac:52:54:00:b3:b3:10 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-959539-m03 Clientid:01:52:54:00:b3:b3:10}
	I0924 00:02:00.162300   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:00.162497   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHPort
	I0924 00:02:00.162693   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHKeyPath
	I0924 00:02:00.162991   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHKeyPath
	I0924 00:02:00.163119   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHUsername
	I0924 00:02:00.163316   26218 main.go:141] libmachine: Using SSH client type: native
	I0924 00:02:00.163504   26218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0924 00:02:00.163521   26218 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 00:02:00.384084   26218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 00:02:00.384116   26218 main.go:141] libmachine: Checking connection to Docker...
	I0924 00:02:00.384137   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetURL
	I0924 00:02:00.385753   26218 main.go:141] libmachine: (ha-959539-m03) DBG | Using libvirt version 6000000
	I0924 00:02:00.388406   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:00.388802   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:b3:10", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:01:45 +0000 UTC Type:0 Mac:52:54:00:b3:b3:10 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-959539-m03 Clientid:01:52:54:00:b3:b3:10}
	I0924 00:02:00.388830   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:00.388972   26218 main.go:141] libmachine: Docker is up and running!
	I0924 00:02:00.389000   26218 main.go:141] libmachine: Reticulating splines...
	I0924 00:02:00.389008   26218 client.go:171] duration metric: took 29.320240775s to LocalClient.Create
	I0924 00:02:00.389034   26218 start.go:167] duration metric: took 29.320301121s to libmachine.API.Create "ha-959539"
	I0924 00:02:00.389045   26218 start.go:293] postStartSetup for "ha-959539-m03" (driver="kvm2")
	I0924 00:02:00.389059   26218 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 00:02:00.389086   26218 main.go:141] libmachine: (ha-959539-m03) Calling .DriverName
	I0924 00:02:00.389316   26218 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 00:02:00.389337   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHHostname
	I0924 00:02:00.391543   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:00.391908   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:b3:10", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:01:45 +0000 UTC Type:0 Mac:52:54:00:b3:b3:10 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-959539-m03 Clientid:01:52:54:00:b3:b3:10}
	I0924 00:02:00.391935   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:00.392055   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHPort
	I0924 00:02:00.392242   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHKeyPath
	I0924 00:02:00.392417   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHUsername
	I0924 00:02:00.392594   26218 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m03/id_rsa Username:docker}
	I0924 00:02:00.471592   26218 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 00:02:00.475678   26218 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 00:02:00.475711   26218 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/addons for local assets ...
	I0924 00:02:00.475777   26218 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/files for local assets ...
	I0924 00:02:00.475847   26218 filesync.go:149] local asset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> 147932.pem in /etc/ssl/certs
	I0924 00:02:00.475857   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> /etc/ssl/certs/147932.pem
	I0924 00:02:00.475939   26218 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 00:02:00.485700   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /etc/ssl/certs/147932.pem (1708 bytes)
	I0924 00:02:00.510312   26218 start.go:296] duration metric: took 121.25155ms for postStartSetup
	I0924 00:02:00.510378   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetConfigRaw
	I0924 00:02:00.511011   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetIP
	I0924 00:02:00.513590   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:00.513900   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:b3:10", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:01:45 +0000 UTC Type:0 Mac:52:54:00:b3:b3:10 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-959539-m03 Clientid:01:52:54:00:b3:b3:10}
	I0924 00:02:00.513916   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:00.514236   26218 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/config.json ...
	I0924 00:02:00.514445   26218 start.go:128] duration metric: took 29.464359711s to createHost
	I0924 00:02:00.514478   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHHostname
	I0924 00:02:00.517098   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:00.517491   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:b3:10", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:01:45 +0000 UTC Type:0 Mac:52:54:00:b3:b3:10 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-959539-m03 Clientid:01:52:54:00:b3:b3:10}
	I0924 00:02:00.517528   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:00.517742   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHPort
	I0924 00:02:00.517933   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHKeyPath
	I0924 00:02:00.518100   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHKeyPath
	I0924 00:02:00.518211   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHUsername
	I0924 00:02:00.518412   26218 main.go:141] libmachine: Using SSH client type: native
	I0924 00:02:00.518622   26218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0924 00:02:00.518636   26218 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 00:02:00.621293   26218 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727136120.603612543
	
	I0924 00:02:00.621339   26218 fix.go:216] guest clock: 1727136120.603612543
	I0924 00:02:00.621351   26218 fix.go:229] Guest: 2024-09-24 00:02:00.603612543 +0000 UTC Remote: 2024-09-24 00:02:00.514464327 +0000 UTC m=+153.742409876 (delta=89.148216ms)
	I0924 00:02:00.621377   26218 fix.go:200] guest clock delta is within tolerance: 89.148216ms
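The clock check parses the guest's `date +%s.%N` output and compares it with the host's timestamp, accepting the machine only if the delta stays inside a tolerance. A minimal sketch of that comparison follows; the tolerance value is an assumption, since the log only says "within tolerance".

// clockdelta.go - hedged sketch of the guest-clock comparison above.
package main

import (
	"fmt"
	"strconv"
	"time"
)

func guestClockDelta(guestOutput string, local time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOutput, 64) // e.g. "1727136120.603612543"
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second))) // float64 keeps ~µs precision here
	return local.Sub(guest), nil
}

func main() {
	local := time.Date(2024, 9, 24, 0, 2, 0, 514464327, time.UTC) // the "Remote" timestamp from the log
	delta, err := guestClockDelta("1727136120.603612543", local)
	if err != nil {
		panic(err)
	}
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta < tolerance)
}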
	I0924 00:02:00.621387   26218 start.go:83] releasing machines lock for "ha-959539-m03", held for 29.571423777s
	I0924 00:02:00.621417   26218 main.go:141] libmachine: (ha-959539-m03) Calling .DriverName
	I0924 00:02:00.621673   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetIP
	I0924 00:02:00.624743   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:00.625239   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:b3:10", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:01:45 +0000 UTC Type:0 Mac:52:54:00:b3:b3:10 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-959539-m03 Clientid:01:52:54:00:b3:b3:10}
	I0924 00:02:00.625273   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:00.627860   26218 out.go:177] * Found network options:
	I0924 00:02:00.629759   26218 out.go:177]   - NO_PROXY=192.168.39.231,192.168.39.71
	W0924 00:02:00.631173   26218 proxy.go:119] fail to check proxy env: Error ip not in block
	W0924 00:02:00.631197   26218 proxy.go:119] fail to check proxy env: Error ip not in block
	I0924 00:02:00.631218   26218 main.go:141] libmachine: (ha-959539-m03) Calling .DriverName
	I0924 00:02:00.631908   26218 main.go:141] libmachine: (ha-959539-m03) Calling .DriverName
	I0924 00:02:00.632117   26218 main.go:141] libmachine: (ha-959539-m03) Calling .DriverName
	I0924 00:02:00.632197   26218 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 00:02:00.632234   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHHostname
	W0924 00:02:00.632352   26218 proxy.go:119] fail to check proxy env: Error ip not in block
	W0924 00:02:00.632378   26218 proxy.go:119] fail to check proxy env: Error ip not in block
	I0924 00:02:00.632447   26218 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 00:02:00.632470   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHHostname
	I0924 00:02:00.635213   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:00.635463   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:00.635655   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:b3:10", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:01:45 +0000 UTC Type:0 Mac:52:54:00:b3:b3:10 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-959539-m03 Clientid:01:52:54:00:b3:b3:10}
	I0924 00:02:00.635679   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:00.635817   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHPort
	I0924 00:02:00.635945   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:b3:10", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:01:45 +0000 UTC Type:0 Mac:52:54:00:b3:b3:10 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-959539-m03 Clientid:01:52:54:00:b3:b3:10}
	I0924 00:02:00.635972   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:00.635973   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHKeyPath
	I0924 00:02:00.636112   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHPort
	I0924 00:02:00.636177   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHUsername
	I0924 00:02:00.636243   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHKeyPath
	I0924 00:02:00.636375   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHUsername
	I0924 00:02:00.636384   26218 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m03/id_rsa Username:docker}
	I0924 00:02:00.636482   26218 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m03/id_rsa Username:docker}
	I0924 00:02:00.872674   26218 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 00:02:00.879244   26218 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 00:02:00.879303   26218 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 00:02:00.896008   26218 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 00:02:00.896041   26218 start.go:495] detecting cgroup driver to use...
	I0924 00:02:00.896119   26218 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 00:02:00.912126   26218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 00:02:00.928181   26218 docker.go:217] disabling cri-docker service (if available) ...
	I0924 00:02:00.928242   26218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 00:02:00.942640   26218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 00:02:00.957462   26218 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 00:02:01.095902   26218 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 00:02:01.244902   26218 docker.go:233] disabling docker service ...
	I0924 00:02:01.244972   26218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 00:02:01.260549   26218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 00:02:01.273803   26218 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 00:02:01.412634   26218 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 00:02:01.527287   26218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 00:02:01.541205   26218 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 00:02:01.559624   26218 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 00:02:01.559693   26218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:02:01.569832   26218 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 00:02:01.569892   26218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:02:01.580172   26218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:02:01.590239   26218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:02:01.600013   26218 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 00:02:01.610683   26218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:02:01.622051   26218 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:02:01.639348   26218 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
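	Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with the pause image, cgroup driver and unprivileged-port sysctl set. A quick way to confirm on the node (expected values reconstructed from the commands, not a dump of the real file):
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	    # pause_image = "registry.k8s.io/pause:3.10"
	    # cgroup_manager = "cgroupfs"
	    # conmon_cgroup = "pod"
	    #   "net.ipv4.ip_unprivileged_port_start=0",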
	I0924 00:02:01.649043   26218 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 00:02:01.659584   26218 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 00:02:01.659633   26218 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 00:02:01.673533   26218 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
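	The modprobe/echo pair above is the usual remedy when the bridge-netfilter sysctl is missing; the state kubeadm expects can be confirmed on the node with standard commands (nothing minikube-specific):
	    lsmod | grep br_netfilter
	    sudo sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
	    # both sysctls should report 1 before kubeadm join runs on this node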
	I0924 00:02:01.683341   26218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 00:02:01.799476   26218 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 00:02:01.894369   26218 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 00:02:01.894448   26218 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 00:02:01.898980   26218 start.go:563] Will wait 60s for crictl version
	I0924 00:02:01.899028   26218 ssh_runner.go:195] Run: which crictl
	I0924 00:02:01.902610   26218 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 00:02:01.942080   26218 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 00:02:01.942167   26218 ssh_runner.go:195] Run: crio --version
	I0924 00:02:01.973094   26218 ssh_runner.go:195] Run: crio --version
	I0924 00:02:02.006636   26218 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 00:02:02.008088   26218 out.go:177]   - env NO_PROXY=192.168.39.231
	I0924 00:02:02.009670   26218 out.go:177]   - env NO_PROXY=192.168.39.231,192.168.39.71
	I0924 00:02:02.011150   26218 main.go:141] libmachine: (ha-959539-m03) Calling .GetIP
	I0924 00:02:02.014303   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:02.014787   26218 main.go:141] libmachine: (ha-959539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:b3:10", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:01:45 +0000 UTC Type:0 Mac:52:54:00:b3:b3:10 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-959539-m03 Clientid:01:52:54:00:b3:b3:10}
	I0924 00:02:02.014816   26218 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:02:02.015031   26218 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0924 00:02:02.019245   26218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 00:02:02.031619   26218 mustload.go:65] Loading cluster: ha-959539
	I0924 00:02:02.031867   26218 config.go:182] Loaded profile config "ha-959539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 00:02:02.032216   26218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:02:02.032262   26218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:02:02.047774   26218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41359
	I0924 00:02:02.048245   26218 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:02:02.048817   26218 main.go:141] libmachine: Using API Version  1
	I0924 00:02:02.048840   26218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:02:02.049178   26218 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:02:02.049404   26218 main.go:141] libmachine: (ha-959539) Calling .GetState
	I0924 00:02:02.051028   26218 host.go:66] Checking if "ha-959539" exists ...
	I0924 00:02:02.051346   26218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:02:02.051384   26218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:02:02.067177   26218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43029
	I0924 00:02:02.067626   26218 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:02:02.068120   26218 main.go:141] libmachine: Using API Version  1
	I0924 00:02:02.068147   26218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:02:02.068561   26218 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:02:02.068767   26218 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0924 00:02:02.069023   26218 certs.go:68] Setting up /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539 for IP: 192.168.39.244
	I0924 00:02:02.069035   26218 certs.go:194] generating shared ca certs ...
	I0924 00:02:02.069051   26218 certs.go:226] acquiring lock for ca certs: {Name:mk91de42c259e96c17f08dd58c70c00821bd0735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:02:02.069225   26218 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key
	I0924 00:02:02.069324   26218 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key
	I0924 00:02:02.069337   26218 certs.go:256] generating profile certs ...
	I0924 00:02:02.069432   26218 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/client.key
	I0924 00:02:02.069461   26218 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key.bedc055e
	I0924 00:02:02.069482   26218 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt.bedc055e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.231 192.168.39.71 192.168.39.244 192.168.39.254]
	I0924 00:02:02.200792   26218 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt.bedc055e ...
	I0924 00:02:02.200824   26218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt.bedc055e: {Name:mk0815e5ce107bafe277776d87408434b1fc0844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:02:02.200990   26218 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key.bedc055e ...
	I0924 00:02:02.201002   26218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key.bedc055e: {Name:mk2b87933cd0413159c4371c2a1af112dc0ae1a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:02:02.201076   26218 certs.go:381] copying /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt.bedc055e -> /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt
	I0924 00:02:02.201200   26218 certs.go:385] copying /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key.bedc055e -> /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key
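	The apiserver certificate generated above carries IP SANs for the service IP, localhost, all three control-plane IPs and the VIP 192.168.39.254. The list can be inspected with openssl (path from this log, command standard):
	    openssl x509 -noout -text \
	      -in /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt \
	      | grep -A1 'Subject Alternative Name'
	    # expect 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.231, 192.168.39.71, 192.168.39.244, 192.168.39.254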
	I0924 00:02:02.201326   26218 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.key
	I0924 00:02:02.201341   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0924 00:02:02.201362   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0924 00:02:02.201373   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0924 00:02:02.201386   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0924 00:02:02.201398   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0924 00:02:02.201412   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0924 00:02:02.201424   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0924 00:02:02.216460   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0924 00:02:02.216561   26218 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem (1338 bytes)
	W0924 00:02:02.216595   26218 certs.go:480] ignoring /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793_empty.pem, impossibly tiny 0 bytes
	I0924 00:02:02.216607   26218 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem (1675 bytes)
	I0924 00:02:02.216644   26218 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem (1082 bytes)
	I0924 00:02:02.216668   26218 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem (1123 bytes)
	I0924 00:02:02.216690   26218 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem (1679 bytes)
	I0924 00:02:02.216728   26218 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem (1708 bytes)
	I0924 00:02:02.216755   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem -> /usr/share/ca-certificates/14793.pem
	I0924 00:02:02.216774   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> /usr/share/ca-certificates/147932.pem
	I0924 00:02:02.216787   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0924 00:02:02.216818   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0924 00:02:02.220023   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:02:02.220522   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0924 00:02:02.220546   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:02:02.220674   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0924 00:02:02.220912   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0924 00:02:02.221115   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0924 00:02:02.221280   26218 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/id_rsa Username:docker}
	I0924 00:02:02.300781   26218 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0924 00:02:02.306919   26218 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0924 00:02:02.318700   26218 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0924 00:02:02.322783   26218 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0924 00:02:02.333789   26218 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0924 00:02:02.337697   26218 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0924 00:02:02.347574   26218 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0924 00:02:02.351556   26218 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0924 00:02:02.362821   26218 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0924 00:02:02.367302   26218 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0924 00:02:02.379143   26218 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0924 00:02:02.383718   26218 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0924 00:02:02.395777   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 00:02:02.422519   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 00:02:02.448222   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 00:02:02.473922   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0924 00:02:02.496975   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0924 00:02:02.519778   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 00:02:02.544839   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 00:02:02.567771   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0924 00:02:02.594776   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem --> /usr/share/ca-certificates/14793.pem (1338 bytes)
	I0924 00:02:02.622998   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /usr/share/ca-certificates/147932.pem (1708 bytes)
	I0924 00:02:02.646945   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 00:02:02.670094   26218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0924 00:02:02.688636   26218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0924 00:02:02.706041   26218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0924 00:02:02.723591   26218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0924 00:02:02.740289   26218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0924 00:02:02.757088   26218 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0924 00:02:02.774564   26218 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0924 00:02:02.791730   26218 ssh_runner.go:195] Run: openssl version
	I0924 00:02:02.797731   26218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147932.pem && ln -fs /usr/share/ca-certificates/147932.pem /etc/ssl/certs/147932.pem"
	I0924 00:02:02.810316   26218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147932.pem
	I0924 00:02:02.815033   26218 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 23:55 /usr/share/ca-certificates/147932.pem
	I0924 00:02:02.815102   26218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147932.pem
	I0924 00:02:02.820784   26218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147932.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 00:02:02.831910   26218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 00:02:02.842883   26218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 00:02:02.847291   26218 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0924 00:02:02.847354   26218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 00:02:02.852958   26218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 00:02:02.863626   26218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14793.pem && ln -fs /usr/share/ca-certificates/14793.pem /etc/ssl/certs/14793.pem"
	I0924 00:02:02.874113   26218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14793.pem
	I0924 00:02:02.878537   26218 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 23:55 /usr/share/ca-certificates/14793.pem
	I0924 00:02:02.878606   26218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14793.pem
	I0924 00:02:02.884346   26218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14793.pem /etc/ssl/certs/51391683.0"
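	The `openssl x509 -hash` / `ln -fs` pairs above produce the subject-hash symlink names (51391683.0, b5213941.0, 3ec20f2e.0) that OpenSSL uses to look CAs up in /etc/ssl/certs. A manual spot check on the node, using the minikube CA as an example:
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	    ls -l /etc/ssl/certs/b5213941.0                                           # -> minikubeCA.pem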
	I0924 00:02:02.896403   26218 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 00:02:02.900556   26218 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0924 00:02:02.900623   26218 kubeadm.go:934] updating node {m03 192.168.39.244 8443 v1.31.1 crio true true} ...
	I0924 00:02:02.900726   26218 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-959539-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.244
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-959539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 00:02:02.900760   26218 kube-vip.go:115] generating kube-vip config ...
	I0924 00:02:02.900809   26218 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0924 00:02:02.915515   26218 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0924 00:02:02.915610   26218 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
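	Once kubelet picks up this static-pod manifest, the elected kube-vip leader should answer for the VIP. Two quick checks on a control-plane node (standard commands; VIP and interface values come from the config above):
	    ip addr show eth0 | grep 192.168.39.254     # VIP bound on the current leader's eth0
	    sudo crictl ps --name kube-vip              # kube-vip container running under cri-o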
	I0924 00:02:02.915676   26218 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 00:02:02.926273   26218 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0924 00:02:02.926342   26218 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0924 00:02:02.935889   26218 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0924 00:02:02.935892   26218 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0924 00:02:02.935939   26218 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
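	The three downloads above are the stock Kubernetes release binaries together with their published SHA-256 sums; fetching and verifying one of them by hand looks like this (URLs taken from the log):
	    curl -LO https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm
	    curl -LO https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	    echo "$(cat kubeadm.sha256)  kubeadm" | sha256sum --check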
	I0924 00:02:02.935957   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0924 00:02:02.935965   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0924 00:02:02.935958   26218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 00:02:02.936030   26218 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0924 00:02:02.936043   26218 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0924 00:02:02.951235   26218 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0924 00:02:02.951306   26218 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0924 00:02:02.951337   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0924 00:02:02.951357   26218 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0924 00:02:02.951363   26218 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0924 00:02:02.951385   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0924 00:02:02.982567   26218 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0924 00:02:02.982613   26218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0924 00:02:03.832975   26218 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0924 00:02:03.844045   26218 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0924 00:02:03.862702   26218 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 00:02:03.880776   26218 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0924 00:02:03.898729   26218 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0924 00:02:03.902596   26218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
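	Both /etc/hosts edits in this log (host.minikube.internal earlier, control-plane.minikube.internal here) use the same pattern: strip any stale entry, append the new one, then copy the temp file back. The end state on the node should contain lines like:
	    grep minikube.internal /etc/hosts
	    # 192.168.39.1      host.minikube.internal
	    # 192.168.39.254    control-plane.minikube.internal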
	I0924 00:02:03.914924   26218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 00:02:04.053085   26218 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 00:02:04.070074   26218 host.go:66] Checking if "ha-959539" exists ...
	I0924 00:02:04.070579   26218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:02:04.070643   26218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:02:04.087474   26218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40559
	I0924 00:02:04.087999   26218 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:02:04.088599   26218 main.go:141] libmachine: Using API Version  1
	I0924 00:02:04.088620   26218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:02:04.089029   26218 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:02:04.089257   26218 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0924 00:02:04.089416   26218 start.go:317] joinCluster: &{Name:ha-959539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-959539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.71 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.244 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 00:02:04.089542   26218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0924 00:02:04.089559   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0924 00:02:04.092876   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:02:04.093495   26218 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0924 00:02:04.093522   26218 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:02:04.093697   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0924 00:02:04.093959   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0924 00:02:04.094120   26218 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0924 00:02:04.094269   26218 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/id_rsa Username:docker}
	I0924 00:02:04.268135   26218 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.244 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 00:02:04.268198   26218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b4ctl0.w5qwixeo1tvb3095 --discovery-token-ca-cert-hash sha256:2d4dd9f37d73db6513005d0e16c5a35989d3b102eed8cac0bf67b360d3396aa8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-959539-m03 --control-plane --apiserver-advertise-address=192.168.39.244 --apiserver-bind-port=8443"
	I0924 00:02:27.863528   26218 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b4ctl0.w5qwixeo1tvb3095 --discovery-token-ca-cert-hash sha256:2d4dd9f37d73db6513005d0e16c5a35989d3b102eed8cac0bf67b360d3396aa8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-959539-m03 --control-plane --apiserver-advertise-address=192.168.39.244 --apiserver-bind-port=8443": (23.595296768s)
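	Stripped of the wrapping, this is the standard two-step kubeadm flow for adding a control-plane node; the shared CA and apiserver certs were already copied to the machine (see the scp lines above), which is why no --certificate-key is passed. A sketch using values from the log (minikube additionally passes --cri-socket, --node-name and --ignore-preflight-errors=all, as shown):
	    # on the existing control plane:
	    sudo kubeadm token create --print-join-command --ttl=0
	    # on ha-959539-m03, with the printed token and CA-cert hash:
	    sudo kubeadm join control-plane.minikube.internal:8443 \
	      --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
	      --control-plane --apiserver-advertise-address=192.168.39.244 --apiserver-bind-port=8443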
	I0924 00:02:27.863572   26218 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0924 00:02:28.487060   26218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-959539-m03 minikube.k8s.io/updated_at=2024_09_24T00_02_28_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c minikube.k8s.io/name=ha-959539 minikube.k8s.io/primary=false
	I0924 00:02:28.628940   26218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-959539-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0924 00:02:28.748648   26218 start.go:319] duration metric: took 24.659226615s to joinCluster
	I0924 00:02:28.748728   26218 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.244 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 00:02:28.749108   26218 config.go:182] Loaded profile config "ha-959539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 00:02:28.750104   26218 out.go:177] * Verifying Kubernetes components...
	I0924 00:02:28.751646   26218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 00:02:29.019967   26218 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 00:02:29.061460   26218 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 00:02:29.061682   26218 kapi.go:59] client config for ha-959539: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/client.crt", KeyFile:"/home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/client.key", CAFile:"/home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0924 00:02:29.061736   26218 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.231:8443
	I0924 00:02:29.061979   26218 node_ready.go:35] waiting up to 6m0s for node "ha-959539-m03" to be "Ready" ...
	I0924 00:02:29.062051   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:29.062060   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:29.062068   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:29.062074   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:29.066072   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
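	This polling loop (repeated GETs against /api/v1/nodes/ha-959539-m03) is minikube waiting for the node's Ready condition to flip to True. The same wait can be expressed with kubectl directly (node name from the log; the context name is assumed to follow the profile name):
	    kubectl --context ha-959539 wait --for=condition=Ready node/ha-959539-m03 --timeout=6m
	    # or poll the condition by hand:
	    kubectl --context ha-959539 get node ha-959539-m03 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'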
	I0924 00:02:29.562533   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:29.562554   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:29.562560   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:29.562570   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:29.567739   26218 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0924 00:02:30.062212   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:30.062237   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:30.062245   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:30.062250   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:30.065711   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:30.562367   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:30.562402   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:30.562414   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:30.562419   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:30.565510   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:31.062523   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:31.062552   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:31.062564   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:31.062571   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:31.066499   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:31.067388   26218 node_ready.go:53] node "ha-959539-m03" has status "Ready":"False"
	I0924 00:02:31.562731   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:31.562756   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:31.562771   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:31.562776   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:31.566512   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:32.062420   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:32.062441   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:32.062449   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:32.062454   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:32.065609   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:32.563014   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:32.563034   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:32.563042   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:32.563047   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:32.566443   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:33.062951   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:33.062980   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:33.062991   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:33.062996   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:33.067213   26218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 00:02:33.067831   26218 node_ready.go:53] node "ha-959539-m03" has status "Ready":"False"
	I0924 00:02:33.562180   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:33.562210   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:33.562222   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:33.562229   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:33.565119   26218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 00:02:34.062360   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:34.062379   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:34.062387   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:34.062394   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:34.065867   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:34.562470   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:34.562494   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:34.562503   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:34.562508   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:34.566075   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:35.063097   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:35.063122   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:35.063133   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:35.063139   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:35.067536   26218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 00:02:35.068167   26218 node_ready.go:53] node "ha-959539-m03" has status "Ready":"False"
	I0924 00:02:35.563171   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:35.563192   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:35.563200   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:35.563204   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:35.566347   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:36.062231   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:36.062252   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:36.062259   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:36.062263   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:36.068635   26218 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0924 00:02:36.562318   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:36.562352   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:36.562360   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:36.562366   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:36.565945   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:37.062441   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:37.062465   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:37.062473   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:37.062477   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:37.065788   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:37.562611   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:37.562633   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:37.562641   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:37.562646   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:37.565850   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:37.566272   26218 node_ready.go:53] node "ha-959539-m03" has status "Ready":"False"
	I0924 00:02:38.062661   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:38.062683   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:38.062691   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:38.062696   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:38.066483   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:38.562638   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:38.562660   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:38.562667   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:38.562671   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:38.566169   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:39.062729   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:39.062750   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:39.062759   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:39.062763   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:39.066557   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:39.562877   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:39.562899   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:39.562907   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:39.562912   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:39.566233   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:39.566763   26218 node_ready.go:53] node "ha-959539-m03" has status "Ready":"False"
	I0924 00:02:40.063206   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:40.063226   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:40.063234   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:40.063239   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:40.066817   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:40.562132   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:40.562155   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:40.562165   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:40.562173   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:40.565811   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:41.062663   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:41.062683   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:41.062692   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:41.062696   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:41.066042   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:41.563040   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:41.563066   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:41.563078   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:41.563084   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:41.566187   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:42.063050   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:42.063071   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:42.063079   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:42.063082   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:42.066449   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:42.067262   26218 node_ready.go:53] node "ha-959539-m03" has status "Ready":"False"
	I0924 00:02:42.563040   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:42.563066   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:42.563077   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:42.563082   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:42.566476   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:43.062431   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:43.062452   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:43.062458   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:43.062461   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:43.065607   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:43.563123   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:43.563144   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:43.563152   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:43.563155   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:43.566312   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:44.062448   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:44.062472   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:44.062480   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:44.062484   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:44.065777   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:44.562484   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:44.562506   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:44.562518   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:44.562527   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:44.565803   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:44.566407   26218 node_ready.go:53] node "ha-959539-m03" has status "Ready":"False"
	I0924 00:02:45.062747   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:45.062780   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:45.062787   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:45.062792   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:45.066101   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:45.562696   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:45.562717   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:45.562726   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:45.562732   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:45.566877   26218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 00:02:45.567306   26218 node_ready.go:49] node "ha-959539-m03" has status "Ready":"True"
	I0924 00:02:45.567324   26218 node_ready.go:38] duration metric: took 16.505330859s for node "ha-959539-m03" to be "Ready" ...
	I0924 00:02:45.567334   26218 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 00:02:45.567399   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods
	I0924 00:02:45.567411   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:45.567421   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:45.567435   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:45.576236   26218 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0924 00:02:45.582315   26218 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nkbzw" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:45.582415   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-nkbzw
	I0924 00:02:45.582426   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:45.582437   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:45.582444   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:45.586563   26218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 00:02:45.587529   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:02:45.587551   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:45.587561   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:45.587566   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:45.590549   26218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 00:02:45.591073   26218 pod_ready.go:93] pod "coredns-7c65d6cfc9-nkbzw" in "kube-system" namespace has status "Ready":"True"
	I0924 00:02:45.591094   26218 pod_ready.go:82] duration metric: took 8.751789ms for pod "coredns-7c65d6cfc9-nkbzw" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:45.591106   26218 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ss8lg" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:45.591177   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ss8lg
	I0924 00:02:45.591186   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:45.591196   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:45.591204   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:45.594507   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:45.595092   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:02:45.595107   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:45.595115   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:45.595119   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:45.597906   26218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 00:02:45.598405   26218 pod_ready.go:93] pod "coredns-7c65d6cfc9-ss8lg" in "kube-system" namespace has status "Ready":"True"
	I0924 00:02:45.598421   26218 pod_ready.go:82] duration metric: took 7.307084ms for pod "coredns-7c65d6cfc9-ss8lg" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:45.598432   26218 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-959539" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:45.598497   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/etcd-ha-959539
	I0924 00:02:45.598508   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:45.598517   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:45.598534   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:45.601102   26218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 00:02:45.601629   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:02:45.601643   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:45.601652   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:45.601657   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:45.604411   26218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 00:02:45.604921   26218 pod_ready.go:93] pod "etcd-ha-959539" in "kube-system" namespace has status "Ready":"True"
	I0924 00:02:45.604936   26218 pod_ready.go:82] duration metric: took 6.498124ms for pod "etcd-ha-959539" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:45.604943   26218 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-959539-m02" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:45.604986   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/etcd-ha-959539-m02
	I0924 00:02:45.604994   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:45.605000   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:45.605003   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:45.607711   26218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 00:02:45.608182   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:02:45.608195   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:45.608202   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:45.608205   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:45.611102   26218 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0924 00:02:45.611468   26218 pod_ready.go:93] pod "etcd-ha-959539-m02" in "kube-system" namespace has status "Ready":"True"
	I0924 00:02:45.611482   26218 pod_ready.go:82] duration metric: took 6.534228ms for pod "etcd-ha-959539-m02" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:45.611489   26218 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-959539-m03" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:45.762986   26218 request.go:632] Waited for 151.426917ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/etcd-ha-959539-m03
	I0924 00:02:45.763060   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/etcd-ha-959539-m03
	I0924 00:02:45.763072   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:45.763082   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:45.763093   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:45.768790   26218 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0924 00:02:45.963102   26218 request.go:632] Waited for 193.344337ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:45.963164   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:45.963169   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:45.963175   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:45.963178   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:45.966765   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:45.967332   26218 pod_ready.go:93] pod "etcd-ha-959539-m03" in "kube-system" namespace has status "Ready":"True"
	I0924 00:02:45.967348   26218 pod_ready.go:82] duration metric: took 355.853201ms for pod "etcd-ha-959539-m03" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:45.967370   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-959539" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:46.162735   26218 request.go:632] Waited for 195.29099ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-959539
	I0924 00:02:46.162798   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-959539
	I0924 00:02:46.162806   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:46.162816   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:46.162825   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:46.166290   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:46.363412   26218 request.go:632] Waited for 196.338649ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:02:46.363479   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:02:46.363488   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:46.363500   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:46.363522   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:46.368828   26218 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0924 00:02:46.369452   26218 pod_ready.go:93] pod "kube-apiserver-ha-959539" in "kube-system" namespace has status "Ready":"True"
	I0924 00:02:46.369475   26218 pod_ready.go:82] duration metric: took 402.09395ms for pod "kube-apiserver-ha-959539" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:46.369488   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-959539-m02" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:46.563510   26218 request.go:632] Waited for 193.954572ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-959539-m02
	I0924 00:02:46.563593   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-959539-m02
	I0924 00:02:46.563601   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:46.563612   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:46.563620   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:46.567229   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:46.763581   26218 request.go:632] Waited for 195.391711ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:02:46.763651   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:02:46.763658   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:46.763669   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:46.763676   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:46.766915   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:46.767439   26218 pod_ready.go:93] pod "kube-apiserver-ha-959539-m02" in "kube-system" namespace has status "Ready":"True"
	I0924 00:02:46.767461   26218 pod_ready.go:82] duration metric: took 397.964383ms for pod "kube-apiserver-ha-959539-m02" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:46.767475   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-959539-m03" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:46.963610   26218 request.go:632] Waited for 196.063114ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-959539-m03
	I0924 00:02:46.963694   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-959539-m03
	I0924 00:02:46.963703   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:46.963712   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:46.963719   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:46.967275   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:47.162752   26218 request.go:632] Waited for 194.876064ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:47.162830   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:47.162838   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:47.162844   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:47.162847   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:47.166156   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:47.166699   26218 pod_ready.go:93] pod "kube-apiserver-ha-959539-m03" in "kube-system" namespace has status "Ready":"True"
	I0924 00:02:47.166716   26218 pod_ready.go:82] duration metric: took 399.234813ms for pod "kube-apiserver-ha-959539-m03" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:47.166725   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-959539" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:47.362729   26218 request.go:632] Waited for 195.941337ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-959539
	I0924 00:02:47.362789   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-959539
	I0924 00:02:47.362795   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:47.362802   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:47.362806   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:47.365942   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:47.562904   26218 request.go:632] Waited for 196.303098ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:02:47.562966   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:02:47.562973   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:47.562982   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:47.562987   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:47.566192   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:47.566827   26218 pod_ready.go:93] pod "kube-controller-manager-ha-959539" in "kube-system" namespace has status "Ready":"True"
	I0924 00:02:47.566845   26218 pod_ready.go:82] duration metric: took 400.114045ms for pod "kube-controller-manager-ha-959539" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:47.566855   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-959539-m02" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:47.762958   26218 request.go:632] Waited for 196.048732ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-959539-m02
	I0924 00:02:47.763034   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-959539-m02
	I0924 00:02:47.763042   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:47.763049   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:47.763058   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:47.766336   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:47.963363   26218 request.go:632] Waited for 196.287822ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:02:47.963455   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:02:47.963462   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:47.963470   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:47.963474   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:47.967146   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:47.967827   26218 pod_ready.go:93] pod "kube-controller-manager-ha-959539-m02" in "kube-system" namespace has status "Ready":"True"
	I0924 00:02:47.967850   26218 pod_ready.go:82] duration metric: took 400.989142ms for pod "kube-controller-manager-ha-959539-m02" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:47.967860   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-959539-m03" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:48.162800   26218 request.go:632] Waited for 194.858732ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-959539-m03
	I0924 00:02:48.162862   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-959539-m03
	I0924 00:02:48.162869   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:48.162880   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:48.162886   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:48.166955   26218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 00:02:48.362915   26218 request.go:632] Waited for 195.291486ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:48.363004   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:48.363015   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:48.363023   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:48.363027   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:48.366536   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:48.367263   26218 pod_ready.go:93] pod "kube-controller-manager-ha-959539-m03" in "kube-system" namespace has status "Ready":"True"
	I0924 00:02:48.367282   26218 pod_ready.go:82] duration metric: took 399.415546ms for pod "kube-controller-manager-ha-959539-m03" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:48.367292   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2hlqx" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:48.563765   26218 request.go:632] Waited for 196.416841ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2hlqx
	I0924 00:02:48.563839   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2hlqx
	I0924 00:02:48.563844   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:48.563852   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:48.563858   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:48.567525   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:48.763756   26218 request.go:632] Waited for 195.286657ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:02:48.763808   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:02:48.763813   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:48.763823   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:48.763827   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:48.768008   26218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 00:02:48.768461   26218 pod_ready.go:93] pod "kube-proxy-2hlqx" in "kube-system" namespace has status "Ready":"True"
	I0924 00:02:48.768523   26218 pod_ready.go:82] duration metric: took 401.181266ms for pod "kube-proxy-2hlqx" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:48.768542   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-b82ch" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:48.963586   26218 request.go:632] Waited for 194.968745ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b82ch
	I0924 00:02:48.963672   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b82ch
	I0924 00:02:48.963682   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:48.963698   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:48.963706   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:48.967156   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:49.163098   26218 request.go:632] Waited for 195.427645ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:49.163160   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:49.163165   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:49.163172   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:49.163175   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:49.168664   26218 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0924 00:02:49.169191   26218 pod_ready.go:93] pod "kube-proxy-b82ch" in "kube-system" namespace has status "Ready":"True"
	I0924 00:02:49.169212   26218 pod_ready.go:82] duration metric: took 400.661599ms for pod "kube-proxy-b82ch" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:49.169224   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qzklc" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:49.363274   26218 request.go:632] Waited for 193.975466ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qzklc
	I0924 00:02:49.363332   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qzklc
	I0924 00:02:49.363337   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:49.363345   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:49.363348   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:49.367061   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:49.563180   26218 request.go:632] Waited for 195.372048ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:02:49.563241   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:02:49.563246   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:49.563253   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:49.563260   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:49.566761   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:49.567465   26218 pod_ready.go:93] pod "kube-proxy-qzklc" in "kube-system" namespace has status "Ready":"True"
	I0924 00:02:49.567481   26218 pod_ready.go:82] duration metric: took 398.249897ms for pod "kube-proxy-qzklc" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:49.567490   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-959539" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:49.763615   26218 request.go:632] Waited for 196.0486ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-959539
	I0924 00:02:49.763668   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-959539
	I0924 00:02:49.763673   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:49.763681   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:49.763685   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:49.767108   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:49.963188   26218 request.go:632] Waited for 195.362713ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:02:49.963255   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539
	I0924 00:02:49.963261   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:49.963268   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:49.963273   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:49.966872   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:49.967707   26218 pod_ready.go:93] pod "kube-scheduler-ha-959539" in "kube-system" namespace has status "Ready":"True"
	I0924 00:02:49.967726   26218 pod_ready.go:82] duration metric: took 400.230299ms for pod "kube-scheduler-ha-959539" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:49.967774   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-959539-m02" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:50.163358   26218 request.go:632] Waited for 195.519311ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-959539-m02
	I0924 00:02:50.163411   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-959539-m02
	I0924 00:02:50.163416   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:50.163424   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:50.163428   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:50.167399   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:50.363362   26218 request.go:632] Waited for 195.429658ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:02:50.363431   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m02
	I0924 00:02:50.363438   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:50.363448   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:50.363453   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:50.366812   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:50.367292   26218 pod_ready.go:93] pod "kube-scheduler-ha-959539-m02" in "kube-system" namespace has status "Ready":"True"
	I0924 00:02:50.367315   26218 pod_ready.go:82] duration metric: took 399.528577ms for pod "kube-scheduler-ha-959539-m02" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:50.367328   26218 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-959539-m03" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:50.563431   26218 request.go:632] Waited for 196.035117ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-959539-m03
	I0924 00:02:50.563517   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-959539-m03
	I0924 00:02:50.563525   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:50.563533   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:50.563536   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:50.567039   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:50.763077   26218 request.go:632] Waited for 195.355137ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:50.763142   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes/ha-959539-m03
	I0924 00:02:50.763148   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:50.763155   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:50.763160   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:50.766779   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:50.767385   26218 pod_ready.go:93] pod "kube-scheduler-ha-959539-m03" in "kube-system" namespace has status "Ready":"True"
	I0924 00:02:50.767402   26218 pod_ready.go:82] duration metric: took 400.066903ms for pod "kube-scheduler-ha-959539-m03" in "kube-system" namespace to be "Ready" ...
	I0924 00:02:50.767413   26218 pod_ready.go:39] duration metric: took 5.200066315s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
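
The node_ready/pod_ready loops recorded above are plain polling of the apiserver: re-fetch the object, inspect its Ready condition, repeat until it flips to True or the timeout expires. The following is a minimal sketch of that pattern with client-go, not minikube's own helper code; the 500ms interval, 6m timeout, kubeconfig path, and label selector are illustrative assumptions, while the node name is taken from the log above.

    // Sketch only: poll a node's Ready condition, then check the Ready
    // condition of label-selected system pods, mirroring the loops above.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func nodeReady(n *corev1.Node) bool {
    	for _, c := range n.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func podReady(p *corev1.Pod) bool {
    	for _, c := range p.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx := context.Background()

    	// Re-fetch the node every 500ms, as the log above does, until Ready.
    	err = wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			n, err := cs.CoreV1().Nodes().Get(ctx, "ha-959539-m03", metav1.GetOptions{})
    			if err != nil {
    				return false, nil // keep polling on transient errors
    			}
    			return nodeReady(n), nil
    		})
    	if err != nil {
    		panic(err)
    	}

    	// Then verify system-critical pods, e.g. the CoreDNS pods.
    	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
    	if err != nil {
    		panic(err)
    	}
    	for _, p := range pods.Items {
    		fmt.Printf("%s ready=%v\n", p.Name, podReady(&p))
    	}
    }
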
	I0924 00:02:50.767425   26218 api_server.go:52] waiting for apiserver process to appear ...
	I0924 00:02:50.767482   26218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 00:02:50.783606   26218 api_server.go:72] duration metric: took 22.034845457s to wait for apiserver process to appear ...
	I0924 00:02:50.783631   26218 api_server.go:88] waiting for apiserver healthz status ...
	I0924 00:02:50.783650   26218 api_server.go:253] Checking apiserver healthz at https://192.168.39.231:8443/healthz ...
	I0924 00:02:50.788103   26218 api_server.go:279] https://192.168.39.231:8443/healthz returned 200:
	ok
	I0924 00:02:50.788220   26218 round_trippers.go:463] GET https://192.168.39.231:8443/version
	I0924 00:02:50.788231   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:50.788241   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:50.788247   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:50.789134   26218 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0924 00:02:50.789199   26218 api_server.go:141] control plane version: v1.31.1
	I0924 00:02:50.789217   26218 api_server.go:131] duration metric: took 5.578933ms to wait for apiserver health ...
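
The healthz and /version probes above can be reproduced with the same clientset. This is an illustrative sketch of that check, not minikube's api_server.go; it assumes a reachable kubeconfig at the default location.

    // Sketch: GET /healthz via the core REST client (the apiserver answers with
    // the literal body "ok" when healthy), then read the server version, as the
    // log does right after the health check succeeds.
    package main

    import (
    	"context"
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	body, err := cs.CoreV1().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("healthz: %s\n", body)

    	v, err := cs.Discovery().ServerVersion()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("control plane version: %s\n", v.GitVersion)
    }
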
	I0924 00:02:50.789227   26218 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 00:02:50.963536   26218 request.go:632] Waited for 174.232731ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods
	I0924 00:02:50.963617   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods
	I0924 00:02:50.963624   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:50.963635   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:50.963649   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:50.969906   26218 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0924 00:02:50.976880   26218 system_pods.go:59] 24 kube-system pods found
	I0924 00:02:50.976914   26218 system_pods.go:61] "coredns-7c65d6cfc9-nkbzw" [79bbcdf6-3ae9-4c2f-9d73-a990a069864f] Running
	I0924 00:02:50.976919   26218 system_pods.go:61] "coredns-7c65d6cfc9-ss8lg" [37bd392b-d364-4a64-8fa0-852bb245aedc] Running
	I0924 00:02:50.976923   26218 system_pods.go:61] "etcd-ha-959539" [ff55eab1-1a4f-4adf-85c4-1ed8fa3ad1ec] Running
	I0924 00:02:50.976928   26218 system_pods.go:61] "etcd-ha-959539-m02" [c2dcc425-5c60-4865-9b78-1f2352fd1729] Running
	I0924 00:02:50.976933   26218 system_pods.go:61] "etcd-ha-959539-m03" [a71adb46-5bbc-43ce-8ef0-2b03bf75da03] Running
	I0924 00:02:50.976938   26218 system_pods.go:61] "kindnet-cbrj7" [ad74ea31-a1ca-4632-b960-45e6de0fc117] Running
	I0924 00:02:50.976943   26218 system_pods.go:61] "kindnet-g4nkw" [32f2f545-b1a1-4f2b-8ee7-7fdb6409bc5f] Running
	I0924 00:02:50.976948   26218 system_pods.go:61] "kindnet-qlqss" [365f0414-b74d-42a8-be37-b0c8e03291ac] Running
	I0924 00:02:50.976953   26218 system_pods.go:61] "kube-apiserver-ha-959539" [2e15b758-6534-4b13-be16-42a2fd437b69] Running
	I0924 00:02:50.976958   26218 system_pods.go:61] "kube-apiserver-ha-959539-m02" [0ea9778e-f241-4c0d-9ea7-7e87bd667e10] Running
	I0924 00:02:50.976968   26218 system_pods.go:61] "kube-apiserver-ha-959539-m03" [7a54eb39-3ff9-4eb8-a5df-4333e1416899] Running
	I0924 00:02:50.976977   26218 system_pods.go:61] "kube-controller-manager-ha-959539" [b7da7091-f063-4f1a-bd0b-9f7136cd64a0] Running
	I0924 00:02:50.976985   26218 system_pods.go:61] "kube-controller-manager-ha-959539-m02" [29421b14-f01c-42dc-8c7d-b80cb32b9b7c] Running
	I0924 00:02:50.976991   26218 system_pods.go:61] "kube-controller-manager-ha-959539-m03" [bc95be18-c320-4981-8155-18432f08883e] Running
	I0924 00:02:50.976999   26218 system_pods.go:61] "kube-proxy-2hlqx" [c8e003fb-d3d0-425f-bc83-55122ed658ce] Running
	I0924 00:02:50.977007   26218 system_pods.go:61] "kube-proxy-b82ch" [5bf376fc-8dbe-4817-874c-506f5dc4d2e7] Running
	I0924 00:02:50.977015   26218 system_pods.go:61] "kube-proxy-qzklc" [19af917f-9661-4577-92ed-8fc44b573c64] Running
	I0924 00:02:50.977020   26218 system_pods.go:61] "kube-scheduler-ha-959539" [25a457b1-578e-4e53-8201-e99c001d80bd] Running
	I0924 00:02:50.977027   26218 system_pods.go:61] "kube-scheduler-ha-959539-m02" [716521cc-aa0c-4507-97e5-126dccc95359] Running
	I0924 00:02:50.977031   26218 system_pods.go:61] "kube-scheduler-ha-959539-m03" [e39eb1d7-90f3-4af9-9356-45ae9c23828d] Running
	I0924 00:02:50.977036   26218 system_pods.go:61] "kube-vip-ha-959539" [f80705df-80fe-48f0-a65c-b4e414523bdf] Running
	I0924 00:02:50.977044   26218 system_pods.go:61] "kube-vip-ha-959539-m02" [6d055131-a622-4398-8f2f-0146b867e8f8] Running
	I0924 00:02:50.977049   26218 system_pods.go:61] "kube-vip-ha-959539-m03" [3c5fd7f2-aec4-42d8-9331-ba59a4d76539] Running
	I0924 00:02:50.977058   26218 system_pods.go:61] "storage-provisioner" [3b7e0f07-8db9-4473-b3d2-c245c19d655b] Running
	I0924 00:02:50.977069   26218 system_pods.go:74] duration metric: took 187.832664ms to wait for pod list to return data ...
	I0924 00:02:50.977080   26218 default_sa.go:34] waiting for default service account to be created ...
	I0924 00:02:51.162900   26218 request.go:632] Waited for 185.733558ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/default/serviceaccounts
	I0924 00:02:51.162976   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/default/serviceaccounts
	I0924 00:02:51.162988   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:51.162995   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:51.163003   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:51.166765   26218 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0924 00:02:51.166900   26218 default_sa.go:45] found service account: "default"
	I0924 00:02:51.166916   26218 default_sa.go:55] duration metric: took 189.8293ms for default service account to be created ...
	I0924 00:02:51.166927   26218 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 00:02:51.363374   26218 request.go:632] Waited for 196.378603ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods
	I0924 00:02:51.363436   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/namespaces/kube-system/pods
	I0924 00:02:51.363443   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:51.363453   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:51.363458   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:51.370348   26218 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0924 00:02:51.376926   26218 system_pods.go:86] 24 kube-system pods found
	I0924 00:02:51.376957   26218 system_pods.go:89] "coredns-7c65d6cfc9-nkbzw" [79bbcdf6-3ae9-4c2f-9d73-a990a069864f] Running
	I0924 00:02:51.376966   26218 system_pods.go:89] "coredns-7c65d6cfc9-ss8lg" [37bd392b-d364-4a64-8fa0-852bb245aedc] Running
	I0924 00:02:51.376972   26218 system_pods.go:89] "etcd-ha-959539" [ff55eab1-1a4f-4adf-85c4-1ed8fa3ad1ec] Running
	I0924 00:02:51.376977   26218 system_pods.go:89] "etcd-ha-959539-m02" [c2dcc425-5c60-4865-9b78-1f2352fd1729] Running
	I0924 00:02:51.376984   26218 system_pods.go:89] "etcd-ha-959539-m03" [a71adb46-5bbc-43ce-8ef0-2b03bf75da03] Running
	I0924 00:02:51.376989   26218 system_pods.go:89] "kindnet-cbrj7" [ad74ea31-a1ca-4632-b960-45e6de0fc117] Running
	I0924 00:02:51.376994   26218 system_pods.go:89] "kindnet-g4nkw" [32f2f545-b1a1-4f2b-8ee7-7fdb6409bc5f] Running
	I0924 00:02:51.377000   26218 system_pods.go:89] "kindnet-qlqss" [365f0414-b74d-42a8-be37-b0c8e03291ac] Running
	I0924 00:02:51.377006   26218 system_pods.go:89] "kube-apiserver-ha-959539" [2e15b758-6534-4b13-be16-42a2fd437b69] Running
	I0924 00:02:51.377012   26218 system_pods.go:89] "kube-apiserver-ha-959539-m02" [0ea9778e-f241-4c0d-9ea7-7e87bd667e10] Running
	I0924 00:02:51.377018   26218 system_pods.go:89] "kube-apiserver-ha-959539-m03" [7a54eb39-3ff9-4eb8-a5df-4333e1416899] Running
	I0924 00:02:51.377026   26218 system_pods.go:89] "kube-controller-manager-ha-959539" [b7da7091-f063-4f1a-bd0b-9f7136cd64a0] Running
	I0924 00:02:51.377036   26218 system_pods.go:89] "kube-controller-manager-ha-959539-m02" [29421b14-f01c-42dc-8c7d-b80cb32b9b7c] Running
	I0924 00:02:51.377042   26218 system_pods.go:89] "kube-controller-manager-ha-959539-m03" [bc95be18-c320-4981-8155-18432f08883e] Running
	I0924 00:02:51.377051   26218 system_pods.go:89] "kube-proxy-2hlqx" [c8e003fb-d3d0-425f-bc83-55122ed658ce] Running
	I0924 00:02:51.377057   26218 system_pods.go:89] "kube-proxy-b82ch" [5bf376fc-8dbe-4817-874c-506f5dc4d2e7] Running
	I0924 00:02:51.377066   26218 system_pods.go:89] "kube-proxy-qzklc" [19af917f-9661-4577-92ed-8fc44b573c64] Running
	I0924 00:02:51.377072   26218 system_pods.go:89] "kube-scheduler-ha-959539" [25a457b1-578e-4e53-8201-e99c001d80bd] Running
	I0924 00:02:51.377080   26218 system_pods.go:89] "kube-scheduler-ha-959539-m02" [716521cc-aa0c-4507-97e5-126dccc95359] Running
	I0924 00:02:51.377086   26218 system_pods.go:89] "kube-scheduler-ha-959539-m03" [e39eb1d7-90f3-4af9-9356-45ae9c23828d] Running
	I0924 00:02:51.377094   26218 system_pods.go:89] "kube-vip-ha-959539" [f80705df-80fe-48f0-a65c-b4e414523bdf] Running
	I0924 00:02:51.377100   26218 system_pods.go:89] "kube-vip-ha-959539-m02" [6d055131-a622-4398-8f2f-0146b867e8f8] Running
	I0924 00:02:51.377105   26218 system_pods.go:89] "kube-vip-ha-959539-m03" [3c5fd7f2-aec4-42d8-9331-ba59a4d76539] Running
	I0924 00:02:51.377111   26218 system_pods.go:89] "storage-provisioner" [3b7e0f07-8db9-4473-b3d2-c245c19d655b] Running
	I0924 00:02:51.377123   26218 system_pods.go:126] duration metric: took 210.186327ms to wait for k8s-apps to be running ...
	I0924 00:02:51.377134   26218 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 00:02:51.377189   26218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 00:02:51.392588   26218 system_svc.go:56] duration metric: took 15.444721ms WaitForService to wait for kubelet
	I0924 00:02:51.392618   26218 kubeadm.go:582] duration metric: took 22.64385975s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 00:02:51.392638   26218 node_conditions.go:102] verifying NodePressure condition ...
	I0924 00:02:51.563072   26218 request.go:632] Waited for 170.361096ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.231:8443/api/v1/nodes
	I0924 00:02:51.563121   26218 round_trippers.go:463] GET https://192.168.39.231:8443/api/v1/nodes
	I0924 00:02:51.563126   26218 round_trippers.go:469] Request Headers:
	I0924 00:02:51.563134   26218 round_trippers.go:473]     Accept: application/json, */*
	I0924 00:02:51.563139   26218 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0924 00:02:51.567517   26218 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0924 00:02:51.569246   26218 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 00:02:51.569269   26218 node_conditions.go:123] node cpu capacity is 2
	I0924 00:02:51.569282   26218 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 00:02:51.569287   26218 node_conditions.go:123] node cpu capacity is 2
	I0924 00:02:51.569293   26218 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 00:02:51.569298   26218 node_conditions.go:123] node cpu capacity is 2
	I0924 00:02:51.569305   26218 node_conditions.go:105] duration metric: took 176.660035ms to run NodePressure ...
	I0924 00:02:51.569328   26218 start.go:241] waiting for startup goroutines ...
	I0924 00:02:51.569355   26218 start.go:255] writing updated cluster config ...
	I0924 00:02:51.569656   26218 ssh_runner.go:195] Run: rm -f paused
	I0924 00:02:51.621645   26218 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0924 00:02:51.623613   26218 out.go:177] * Done! kubectl is now configured to use "ha-959539" cluster and "default" namespace by default
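
The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines in this run come from client-go's built-in token-bucket rate limiter, which by default allows 5 requests/second with a burst of 10; the per-pod and per-node GETs above arrive faster than that, so each waits roughly 200ms. A minimal sketch of where those knobs live (illustrative values, not a change minikube makes):

    // Sketch: raising QPS/Burst on the rest.Config reduces the client-side
    // throttling waits seen in the log for bursts of sequential GETs.
    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	// Defaults are QPS=5, Burst=10; these higher values are illustrative only.
    	cfg.QPS = 50
    	cfg.Burst = 100

    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("kube-system has %d pods\n", len(pods.Items))
    }
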
	
	
	==> CRI-O <==
	Sep 24 00:06:45 ha-959539 crio[665]: time="2024-09-24 00:06:45.127480728Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136405127453599,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1f2c573a-b75d-4744-9b78-6f8e256cc7c4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:06:45 ha-959539 crio[665]: time="2024-09-24 00:06:45.128146002Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7b2126f1-9f27-4c71-89f3-2a016f3bbd12 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:06:45 ha-959539 crio[665]: time="2024-09-24 00:06:45.128226793Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7b2126f1-9f27-4c71-89f3-2a016f3bbd12 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:06:45 ha-959539 crio[665]: time="2024-09-24 00:06:45.128546436Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ae8646f943f6d158d9cb6123ee395d7f02fe8f4194ea968bf904f9d60ac4c8d1,PodSandboxId:4b5dbf2a2189385e09c02ad65761e1007bbf4b930164894bc8f1b76217964067,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727136176666029632,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7q7xr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ee31bb5-0c3d-4d9e-9c9e-32d3411bc68a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05d43a4d133008f80d44c870878a45ceeb7e0e1adc3b08d1d47ce8e2edb36137,PodSandboxId:a91a16106518aeb7290ee145c6ebba24fbaf0ab1b928eb6005c2982202d15f69,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727136026589850568,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nkbzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79bbcdf6-3ae9-4c2f-9d73-a990a069864f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7a1a19a83d492cf0fda7b3bbf01fb7a12a73aac4c564ad4c2805be1c00817f0,PodSandboxId:1a4ee0160fc1d9dd6258f8fde766345d31e45e3e0d6790d4d9d5bd708cbcb206,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727136026542529982,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ss8lg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
37bd392b-d364-4a64-8fa0-852bb245aedc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eb114bb7775dcb227b0e90d5b566479bcd948dc40610c14af59f316412ffabf,PodSandboxId:2ffb51384d9a50b5162ea3a6190770d5887aab9dcc4b470a8939a98ed67ffa04,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1727136026450686637,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b7e0f07-8db9-4473-b3d2-c245c19d655b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1596300e66cf2ee0f15d5a362238ef4e99cec8e83ea81be4670347659bbd88e2,PodSandboxId:1a380d04710836380fbd07e38a88bd6c32797798fac60cedb945001fcef619bd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17271360
14417430026,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qlqss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 365f0414-b74d-42a8-be37-b0c8e03291ac,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdf912809c47a4ae8dddad14a4f537540e25c97121c16fa62418fc1290a8b94b,PodSandboxId:72ade1a0510455fbb68e236046efac5db7e130775d8731e968c6403583d8f266,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727136014134599532,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qzklc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19af917f-9661-4577-92ed-8fc44b573c64,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b61587cd3ccea52e3762f607ce17d21719c646d22ac10052629a209fe6ddbf3c,PodSandboxId:f6a8ccad216f1ff4f82acffd07977d426ef7ac36b9dad5f0989e477a11e66cf9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727136010027927828,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f69ffc952d0f295da88120340eae744e,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5459f3bc533d3928a2614c8aadaac043e36370d032cae8eb72d11664bbb74f2,PodSandboxId:40d143641822b8cfe35213ab0da141ef26cf5d327320371cdaf07dee367e1c67,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727136003255288728,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b2368dd5a295d09af7714df1de67bdb,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a42356ed739fd4c4bc65cb2d15edfb13fc395f88d73e9c25e9c7f9799ae6b974,PodSandboxId:c7d97a67f80f61d1406488dc953f78d225b73ace23d35142119dcf053114c4ee,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727136003229309223,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5ad980392063f2813e3d6dca84f9d9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af224d12661c48914733f3b63e29d17a8fceaf62af777bb609750fb7c8dbd9fd,PodSandboxId:7328f59cdb9935ae3cc6db004e93f8c91143470c0fbb7d2f75380c3331d66ec6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727136003245707453,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sche
duler-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31ee5d018ce106dd8052557f9d140def,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c911375acec93e238f1022936d6afb98f697168fca75291f15649e13def2288,PodSandboxId:7cdc58cf999c2a31d524cddeb690c57a3ba05b2201b109b586df23e0662a6c48,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727136003136808561,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-959539,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8862e93f261d2a6529b347e7a1404705,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7b2126f1-9f27-4c71-89f3-2a016f3bbd12 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:06:45 ha-959539 crio[665]: time="2024-09-24 00:06:45.165746204Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7b6313ad-9b55-4115-984c-04fc492321b8 name=/runtime.v1.RuntimeService/Version
	Sep 24 00:06:45 ha-959539 crio[665]: time="2024-09-24 00:06:45.165836813Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7b6313ad-9b55-4115-984c-04fc492321b8 name=/runtime.v1.RuntimeService/Version
	Sep 24 00:06:45 ha-959539 crio[665]: time="2024-09-24 00:06:45.167155204Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6f1ac145-c864-4ffe-abef-6f77e4144544 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:06:45 ha-959539 crio[665]: time="2024-09-24 00:06:45.167715383Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136405167687848,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6f1ac145-c864-4ffe-abef-6f77e4144544 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:06:45 ha-959539 crio[665]: time="2024-09-24 00:06:45.168428764Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b6c63e1c-b421-4ba5-ad8c-ffd5a561a1c1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:06:45 ha-959539 crio[665]: time="2024-09-24 00:06:45.168521607Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b6c63e1c-b421-4ba5-ad8c-ffd5a561a1c1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:06:45 ha-959539 crio[665]: time="2024-09-24 00:06:45.168795621Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ae8646f943f6d158d9cb6123ee395d7f02fe8f4194ea968bf904f9d60ac4c8d1,PodSandboxId:4b5dbf2a2189385e09c02ad65761e1007bbf4b930164894bc8f1b76217964067,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727136176666029632,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7q7xr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ee31bb5-0c3d-4d9e-9c9e-32d3411bc68a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05d43a4d133008f80d44c870878a45ceeb7e0e1adc3b08d1d47ce8e2edb36137,PodSandboxId:a91a16106518aeb7290ee145c6ebba24fbaf0ab1b928eb6005c2982202d15f69,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727136026589850568,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nkbzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79bbcdf6-3ae9-4c2f-9d73-a990a069864f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7a1a19a83d492cf0fda7b3bbf01fb7a12a73aac4c564ad4c2805be1c00817f0,PodSandboxId:1a4ee0160fc1d9dd6258f8fde766345d31e45e3e0d6790d4d9d5bd708cbcb206,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727136026542529982,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ss8lg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
37bd392b-d364-4a64-8fa0-852bb245aedc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eb114bb7775dcb227b0e90d5b566479bcd948dc40610c14af59f316412ffabf,PodSandboxId:2ffb51384d9a50b5162ea3a6190770d5887aab9dcc4b470a8939a98ed67ffa04,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1727136026450686637,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b7e0f07-8db9-4473-b3d2-c245c19d655b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1596300e66cf2ee0f15d5a362238ef4e99cec8e83ea81be4670347659bbd88e2,PodSandboxId:1a380d04710836380fbd07e38a88bd6c32797798fac60cedb945001fcef619bd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17271360
14417430026,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qlqss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 365f0414-b74d-42a8-be37-b0c8e03291ac,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdf912809c47a4ae8dddad14a4f537540e25c97121c16fa62418fc1290a8b94b,PodSandboxId:72ade1a0510455fbb68e236046efac5db7e130775d8731e968c6403583d8f266,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727136014134599532,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qzklc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19af917f-9661-4577-92ed-8fc44b573c64,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b61587cd3ccea52e3762f607ce17d21719c646d22ac10052629a209fe6ddbf3c,PodSandboxId:f6a8ccad216f1ff4f82acffd07977d426ef7ac36b9dad5f0989e477a11e66cf9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727136010027927828,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f69ffc952d0f295da88120340eae744e,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5459f3bc533d3928a2614c8aadaac043e36370d032cae8eb72d11664bbb74f2,PodSandboxId:40d143641822b8cfe35213ab0da141ef26cf5d327320371cdaf07dee367e1c67,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727136003255288728,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b2368dd5a295d09af7714df1de67bdb,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a42356ed739fd4c4bc65cb2d15edfb13fc395f88d73e9c25e9c7f9799ae6b974,PodSandboxId:c7d97a67f80f61d1406488dc953f78d225b73ace23d35142119dcf053114c4ee,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727136003229309223,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5ad980392063f2813e3d6dca84f9d9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af224d12661c48914733f3b63e29d17a8fceaf62af777bb609750fb7c8dbd9fd,PodSandboxId:7328f59cdb9935ae3cc6db004e93f8c91143470c0fbb7d2f75380c3331d66ec6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727136003245707453,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sche
duler-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31ee5d018ce106dd8052557f9d140def,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c911375acec93e238f1022936d6afb98f697168fca75291f15649e13def2288,PodSandboxId:7cdc58cf999c2a31d524cddeb690c57a3ba05b2201b109b586df23e0662a6c48,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727136003136808561,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-959539,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8862e93f261d2a6529b347e7a1404705,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b6c63e1c-b421-4ba5-ad8c-ffd5a561a1c1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:06:45 ha-959539 crio[665]: time="2024-09-24 00:06:45.206138482Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1c3952e1-8365-4d6c-b640-2428e2b4be56 name=/runtime.v1.RuntimeService/Version
	Sep 24 00:06:45 ha-959539 crio[665]: time="2024-09-24 00:06:45.206253100Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1c3952e1-8365-4d6c-b640-2428e2b4be56 name=/runtime.v1.RuntimeService/Version
	Sep 24 00:06:45 ha-959539 crio[665]: time="2024-09-24 00:06:45.210091060Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1c4933ed-eb06-4ecd-9cfd-0ac47aa61196 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:06:45 ha-959539 crio[665]: time="2024-09-24 00:06:45.210646653Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136405210622641,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1c4933ed-eb06-4ecd-9cfd-0ac47aa61196 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:06:45 ha-959539 crio[665]: time="2024-09-24 00:06:45.211251945Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=09904b05-b804-478e-8162-d29de6cbf26d name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:06:45 ha-959539 crio[665]: time="2024-09-24 00:06:45.211464831Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=09904b05-b804-478e-8162-d29de6cbf26d name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:06:45 ha-959539 crio[665]: time="2024-09-24 00:06:45.211940360Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ae8646f943f6d158d9cb6123ee395d7f02fe8f4194ea968bf904f9d60ac4c8d1,PodSandboxId:4b5dbf2a2189385e09c02ad65761e1007bbf4b930164894bc8f1b76217964067,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727136176666029632,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7q7xr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ee31bb5-0c3d-4d9e-9c9e-32d3411bc68a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05d43a4d133008f80d44c870878a45ceeb7e0e1adc3b08d1d47ce8e2edb36137,PodSandboxId:a91a16106518aeb7290ee145c6ebba24fbaf0ab1b928eb6005c2982202d15f69,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727136026589850568,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nkbzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79bbcdf6-3ae9-4c2f-9d73-a990a069864f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7a1a19a83d492cf0fda7b3bbf01fb7a12a73aac4c564ad4c2805be1c00817f0,PodSandboxId:1a4ee0160fc1d9dd6258f8fde766345d31e45e3e0d6790d4d9d5bd708cbcb206,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727136026542529982,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ss8lg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
37bd392b-d364-4a64-8fa0-852bb245aedc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eb114bb7775dcb227b0e90d5b566479bcd948dc40610c14af59f316412ffabf,PodSandboxId:2ffb51384d9a50b5162ea3a6190770d5887aab9dcc4b470a8939a98ed67ffa04,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1727136026450686637,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b7e0f07-8db9-4473-b3d2-c245c19d655b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1596300e66cf2ee0f15d5a362238ef4e99cec8e83ea81be4670347659bbd88e2,PodSandboxId:1a380d04710836380fbd07e38a88bd6c32797798fac60cedb945001fcef619bd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17271360
14417430026,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qlqss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 365f0414-b74d-42a8-be37-b0c8e03291ac,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdf912809c47a4ae8dddad14a4f537540e25c97121c16fa62418fc1290a8b94b,PodSandboxId:72ade1a0510455fbb68e236046efac5db7e130775d8731e968c6403583d8f266,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727136014134599532,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qzklc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19af917f-9661-4577-92ed-8fc44b573c64,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b61587cd3ccea52e3762f607ce17d21719c646d22ac10052629a209fe6ddbf3c,PodSandboxId:f6a8ccad216f1ff4f82acffd07977d426ef7ac36b9dad5f0989e477a11e66cf9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727136010027927828,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f69ffc952d0f295da88120340eae744e,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5459f3bc533d3928a2614c8aadaac043e36370d032cae8eb72d11664bbb74f2,PodSandboxId:40d143641822b8cfe35213ab0da141ef26cf5d327320371cdaf07dee367e1c67,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727136003255288728,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b2368dd5a295d09af7714df1de67bdb,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a42356ed739fd4c4bc65cb2d15edfb13fc395f88d73e9c25e9c7f9799ae6b974,PodSandboxId:c7d97a67f80f61d1406488dc953f78d225b73ace23d35142119dcf053114c4ee,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727136003229309223,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5ad980392063f2813e3d6dca84f9d9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af224d12661c48914733f3b63e29d17a8fceaf62af777bb609750fb7c8dbd9fd,PodSandboxId:7328f59cdb9935ae3cc6db004e93f8c91143470c0fbb7d2f75380c3331d66ec6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727136003245707453,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sche
duler-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31ee5d018ce106dd8052557f9d140def,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c911375acec93e238f1022936d6afb98f697168fca75291f15649e13def2288,PodSandboxId:7cdc58cf999c2a31d524cddeb690c57a3ba05b2201b109b586df23e0662a6c48,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727136003136808561,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-959539,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8862e93f261d2a6529b347e7a1404705,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=09904b05-b804-478e-8162-d29de6cbf26d name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:06:45 ha-959539 crio[665]: time="2024-09-24 00:06:45.251399021Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=64d7a430-d6ac-457f-b9bb-22aaf1801c68 name=/runtime.v1.RuntimeService/Version
	Sep 24 00:06:45 ha-959539 crio[665]: time="2024-09-24 00:06:45.251487704Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=64d7a430-d6ac-457f-b9bb-22aaf1801c68 name=/runtime.v1.RuntimeService/Version
	Sep 24 00:06:45 ha-959539 crio[665]: time="2024-09-24 00:06:45.252415334Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dfdfd1ca-2245-44eb-95b6-21e32dadb9bf name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:06:45 ha-959539 crio[665]: time="2024-09-24 00:06:45.252828647Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136405252806205,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dfdfd1ca-2245-44eb-95b6-21e32dadb9bf name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:06:45 ha-959539 crio[665]: time="2024-09-24 00:06:45.253417132Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b7b3c549-1076-4316-805d-73cea8b6a38e name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:06:45 ha-959539 crio[665]: time="2024-09-24 00:06:45.253491136Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b7b3c549-1076-4316-805d-73cea8b6a38e name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:06:45 ha-959539 crio[665]: time="2024-09-24 00:06:45.253727930Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ae8646f943f6d158d9cb6123ee395d7f02fe8f4194ea968bf904f9d60ac4c8d1,PodSandboxId:4b5dbf2a2189385e09c02ad65761e1007bbf4b930164894bc8f1b76217964067,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727136176666029632,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7q7xr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ee31bb5-0c3d-4d9e-9c9e-32d3411bc68a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05d43a4d133008f80d44c870878a45ceeb7e0e1adc3b08d1d47ce8e2edb36137,PodSandboxId:a91a16106518aeb7290ee145c6ebba24fbaf0ab1b928eb6005c2982202d15f69,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727136026589850568,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nkbzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79bbcdf6-3ae9-4c2f-9d73-a990a069864f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7a1a19a83d492cf0fda7b3bbf01fb7a12a73aac4c564ad4c2805be1c00817f0,PodSandboxId:1a4ee0160fc1d9dd6258f8fde766345d31e45e3e0d6790d4d9d5bd708cbcb206,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727136026542529982,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ss8lg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
37bd392b-d364-4a64-8fa0-852bb245aedc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eb114bb7775dcb227b0e90d5b566479bcd948dc40610c14af59f316412ffabf,PodSandboxId:2ffb51384d9a50b5162ea3a6190770d5887aab9dcc4b470a8939a98ed67ffa04,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1727136026450686637,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b7e0f07-8db9-4473-b3d2-c245c19d655b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1596300e66cf2ee0f15d5a362238ef4e99cec8e83ea81be4670347659bbd88e2,PodSandboxId:1a380d04710836380fbd07e38a88bd6c32797798fac60cedb945001fcef619bd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17271360
14417430026,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qlqss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 365f0414-b74d-42a8-be37-b0c8e03291ac,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdf912809c47a4ae8dddad14a4f537540e25c97121c16fa62418fc1290a8b94b,PodSandboxId:72ade1a0510455fbb68e236046efac5db7e130775d8731e968c6403583d8f266,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727136014134599532,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qzklc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19af917f-9661-4577-92ed-8fc44b573c64,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b61587cd3ccea52e3762f607ce17d21719c646d22ac10052629a209fe6ddbf3c,PodSandboxId:f6a8ccad216f1ff4f82acffd07977d426ef7ac36b9dad5f0989e477a11e66cf9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727136010027927828,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f69ffc952d0f295da88120340eae744e,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5459f3bc533d3928a2614c8aadaac043e36370d032cae8eb72d11664bbb74f2,PodSandboxId:40d143641822b8cfe35213ab0da141ef26cf5d327320371cdaf07dee367e1c67,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727136003255288728,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b2368dd5a295d09af7714df1de67bdb,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a42356ed739fd4c4bc65cb2d15edfb13fc395f88d73e9c25e9c7f9799ae6b974,PodSandboxId:c7d97a67f80f61d1406488dc953f78d225b73ace23d35142119dcf053114c4ee,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727136003229309223,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5ad980392063f2813e3d6dca84f9d9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af224d12661c48914733f3b63e29d17a8fceaf62af777bb609750fb7c8dbd9fd,PodSandboxId:7328f59cdb9935ae3cc6db004e93f8c91143470c0fbb7d2f75380c3331d66ec6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727136003245707453,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sche
duler-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31ee5d018ce106dd8052557f9d140def,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c911375acec93e238f1022936d6afb98f697168fca75291f15649e13def2288,PodSandboxId:7cdc58cf999c2a31d524cddeb690c57a3ba05b2201b109b586df23e0662a6c48,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727136003136808561,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-959539,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8862e93f261d2a6529b347e7a1404705,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b7b3c549-1076-4316-805d-73cea8b6a38e name=/runtime.v1.RuntimeService/ListContainers
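The CRI-O entries above are the runtime's debug records of CRI gRPC calls (/runtime.v1.RuntimeService/Version, /runtime.v1.ImageService/ImageFsInfo, /runtime.v1.RuntimeService/ListContainers). Below is a minimal sketch of issuing the same RPCs directly, assuming the standard k8s.io/cri-api v1 bindings and the unix:///var/run/crio/crio.sock path shown in the node's cri-socket annotation; this is an illustration only, not part of the test suite.

// cri_probe.go - hedged sketch of the Version/ImageFsInfo/ListContainers RPCs
// recorded in the CRI-O debug log above. Socket path and API bindings are
// assumptions; illustration only.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Dial the CRI-O socket (path assumed from the kubeadm cri-socket annotation).
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial crio: %v", err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	img := runtimeapi.NewImageServiceClient(conn)

	// /runtime.v1.RuntimeService/Version
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatalf("Version: %v", err)
	}
	fmt.Printf("%s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

	// /runtime.v1.ImageService/ImageFsInfo
	fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
	if err != nil {
		log.Fatalf("ImageFsInfo: %v", err)
	}
	for _, f := range fs.ImageFilesystems {
		fmt.Printf("image fs %s used=%d bytes\n", f.FsId.Mountpoint, f.UsedBytes.Value)
	}

	// /runtime.v1.RuntimeService/ListContainers with an empty filter, i.e.
	// "No filters were applied, returning full container list" in the log above.
	lst, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{Filter: &runtimeapi.ContainerFilter{}})
	if err != nil {
		log.Fatalf("ListContainers: %v", err)
	}
	fmt.Printf("%d containers\n", len(lst.Containers))
}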
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ae8646f943f6d       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   4b5dbf2a21893       busybox-7dff88458-7q7xr
	05d43a4d13300       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   a91a16106518a       coredns-7c65d6cfc9-nkbzw
	e7a1a19a83d49       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   1a4ee0160fc1d       coredns-7c65d6cfc9-ss8lg
	2eb114bb7775d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   2ffb51384d9a5       storage-provisioner
	1596300e66cf2       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   1a380d0471083       kindnet-qlqss
	cdf912809c47a       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   72ade1a051045       kube-proxy-qzklc
	b61587cd3ccea       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   f6a8ccad216f1       kube-vip-ha-959539
	d5459f3bc533d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   40d143641822b       etcd-ha-959539
	af224d12661c4       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   7328f59cdb993       kube-scheduler-ha-959539
	a42356ed739fd       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   c7d97a67f80f6       kube-controller-manager-ha-959539
	8c911375acec9       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   7cdc58cf999c2       kube-apiserver-ha-959539
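The "container status" table above is a condensed view of the same ListContainers data: truncated container ID, image, state, attempt, and the pod taken from the io.kubernetes.* labels visible in the raw response. A small hedged helper, reusing the runtimeapi client from the sketch above, that prints rows in roughly that shape (the 13-character ID truncation and column widths are assumptions, not the report generator's exact format):

// printStatusTable renders a ListContainers response roughly like the table above.
// Reuses the runtimeapi client from the previous sketch; illustration only.
func printStatusTable(ctx context.Context, rt runtimeapi.RuntimeServiceClient) error {
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		return err
	}
	for _, c := range resp.Containers {
		fmt.Printf("%-15s %-25s %-18s %-7d %s\n",
			c.Id[:13],               // truncated container ID (assumed width)
			c.Metadata.Name,         // e.g. kube-apiserver
			c.State.String(),        // e.g. CONTAINER_RUNNING
			c.Metadata.Attempt,      // restart attempt
			c.Labels["io.kubernetes.pod.name"])
	}
	return nil
}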
	
	
	==> coredns [05d43a4d133008f80d44c870878a45ceeb7e0e1adc3b08d1d47ce8e2edb36137] <==
	[INFO] 10.244.0.4:50134 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.005141674s
	[INFO] 10.244.1.2:43867 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000223991s
	[INFO] 10.244.1.2:35996 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000101615s
	[INFO] 10.244.2.2:54425 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000224645s
	[INFO] 10.244.2.2:58169 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.00170508s
	[INFO] 10.244.0.4:55776 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107033s
	[INFO] 10.244.0.4:58501 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.017716872s
	[INFO] 10.244.0.4:37973 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0002021s
	[INFO] 10.244.0.4:43904 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000156858s
	[INFO] 10.244.0.4:48352 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000163626s
	[INFO] 10.244.1.2:52896 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132298s
	[INFO] 10.244.1.2:45449 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000227639s
	[INFO] 10.244.1.2:47616 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00017286s
	[INFO] 10.244.1.2:33521 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000108761s
	[INFO] 10.244.1.2:43587 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012987s
	[INFO] 10.244.2.2:52394 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001362s
	[INFO] 10.244.2.2:43819 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000119859s
	[INFO] 10.244.2.2:35291 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000097457s
	[INFO] 10.244.2.2:56966 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000168721s
	[INFO] 10.244.0.4:52779 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102739s
	[INFO] 10.244.2.2:59382 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000262295s
	[INFO] 10.244.2.2:44447 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000133384s
	[INFO] 10.244.2.2:52951 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000170462s
	[INFO] 10.244.2.2:46956 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000215226s
	[INFO] 10.244.2.2:53703 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000108727s
	
	
	==> coredns [e7a1a19a83d492cf0fda7b3bbf01fb7a12a73aac4c564ad4c2805be1c00817f0] <==
	[INFO] 10.244.1.2:36104 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002245521s
	[INFO] 10.244.1.2:41962 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001624615s
	[INFO] 10.244.1.2:36352 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000142132s
	[INFO] 10.244.2.2:54238 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001909893s
	[INFO] 10.244.2.2:38238 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000165226s
	[INFO] 10.244.2.2:40250 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00173003s
	[INFO] 10.244.2.2:53405 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000126728s
	[INFO] 10.244.0.4:46344 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000157852s
	[INFO] 10.244.0.4:57359 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000065958s
	[INFO] 10.244.0.4:43743 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000119977s
	[INFO] 10.244.1.2:32867 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000192169s
	[INFO] 10.244.1.2:43403 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000167697s
	[INFO] 10.244.1.2:57243 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000095722s
	[INFO] 10.244.1.2:48326 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000119715s
	[INFO] 10.244.2.2:49664 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000122596s
	[INFO] 10.244.2.2:40943 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000106169s
	[INFO] 10.244.0.4:36066 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121758s
	[INFO] 10.244.0.4:51023 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000156225s
	[INFO] 10.244.0.4:56715 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000125631s
	[INFO] 10.244.0.4:47944 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000103261s
	[INFO] 10.244.1.2:49407 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000148466s
	[INFO] 10.244.1.2:54979 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000116145s
	[INFO] 10.244.1.2:47442 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000097064s
	[INFO] 10.244.1.2:38143 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000188037s
	[INFO] 10.244.2.2:40107 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000086602s
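The CoreDNS lines above record lookups from pod IPs (10.244.x.x) for names such as kubernetes.default.svc.cluster.local and host.minikube.internal, answered by the cluster DNS service (10.96.0.10, inferred from the reversed PTR name 10.0.96.10.in-addr.arpa). A minimal sketch of the kind of query that produces such a line, to be run from inside the cluster (for example from the busybox pod); the DNS service IP is an assumption about this cluster's service CIDR.

// dnscheck.go - hedged sketch of a lookup against the cluster DNS that would
// show up in the CoreDNS log above. The 10.96.0.10 address is inferred from
// the PTR queries and is an assumption; illustration only.
package main

import (
	"context"
	"fmt"
	"log"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			// Send the query straight to the cluster DNS service.
			return d.DialContext(ctx, "udp", "10.96.0.10:53")
		},
	}
	addrs, err := r.LookupHost(context.Background(), "kubernetes.default.svc.cluster.local")
	if err != nil {
		log.Fatalf("lookup failed: %v", err)
	}
	fmt.Println(addrs) // typically the kubernetes service ClusterIP, e.g. 10.96.0.1
}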
	
	
	==> describe nodes <==
	Name:               ha-959539
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-959539
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c
	                    minikube.k8s.io/name=ha-959539
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_24T00_00_13_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 00:00:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-959539
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 00:06:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 00:03:16 +0000   Tue, 24 Sep 2024 00:00:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 00:03:16 +0000   Tue, 24 Sep 2024 00:00:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 00:03:16 +0000   Tue, 24 Sep 2024 00:00:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 00:03:16 +0000   Tue, 24 Sep 2024 00:00:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.231
	  Hostname:    ha-959539
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0a4b9ce5eed94a13bdbc682549e1dd1e
	  System UUID:                0a4b9ce5-eed9-4a13-bdbc-682549e1dd1e
	  Boot ID:                    679e0a2b-8772-4f6d-9e47-ba8190727387
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7q7xr              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 coredns-7c65d6cfc9-nkbzw             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m32s
	  kube-system                 coredns-7c65d6cfc9-ss8lg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m32s
	  kube-system                 etcd-ha-959539                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m33s
	  kube-system                 kindnet-qlqss                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m32s
	  kube-system                 kube-apiserver-ha-959539             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m33s
	  kube-system                 kube-controller-manager-ha-959539    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 kube-proxy-qzklc                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m32s
	  kube-system                 kube-scheduler-ha-959539             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 kube-vip-ha-959539                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m30s  kube-proxy       
	  Normal  RegisteredNode           6m33s  node-controller  Node ha-959539 event: Registered Node ha-959539 in Controller
	  Normal  Starting                 6m33s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m33s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m33s  kubelet          Node ha-959539 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m33s  kubelet          Node ha-959539 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m33s  kubelet          Node ha-959539 status is now: NodeHasSufficientPID
	  Normal  NodeReady                6m20s  kubelet          Node ha-959539 status is now: NodeReady
	  Normal  RegisteredNode           5m32s  node-controller  Node ha-959539 event: Registered Node ha-959539 in Controller
	  Normal  RegisteredNode           4m12s  node-controller  Node ha-959539 event: Registered Node ha-959539 in Controller
	
	
	Name:               ha-959539-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-959539-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c
	                    minikube.k8s.io/name=ha-959539
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_24T00_01_07_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 00:01:05 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-959539-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 00:04:07 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 24 Sep 2024 00:03:07 +0000   Tue, 24 Sep 2024 00:04:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 24 Sep 2024 00:03:07 +0000   Tue, 24 Sep 2024 00:04:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 24 Sep 2024 00:03:07 +0000   Tue, 24 Sep 2024 00:04:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 24 Sep 2024 00:03:07 +0000   Tue, 24 Sep 2024 00:04:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.71
	  Hostname:    ha-959539-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0f78cfc70aad42d195f1884fe3a82e21
	  System UUID:                0f78cfc7-0aad-42d1-95f1-884fe3a82e21
	  Boot ID:                    247da00b-9587-4de7-aa45-9671f65dd14e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-m5qhr                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 etcd-ha-959539-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m40s
	  kube-system                 kindnet-cbrj7                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m40s
	  kube-system                 kube-apiserver-ha-959539-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m40s
	  kube-system                 kube-controller-manager-ha-959539-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m40s
	  kube-system                 kube-proxy-2hlqx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m40s
	  kube-system                 kube-scheduler-ha-959539-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m40s
	  kube-system                 kube-vip-ha-959539-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m36s                  kube-proxy       
	  Normal  Starting                 5m41s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m40s (x8 over 5m41s)  kubelet          Node ha-959539-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m40s (x8 over 5m41s)  kubelet          Node ha-959539-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m40s (x7 over 5m41s)  kubelet          Node ha-959539-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m38s                  node-controller  Node ha-959539-m02 event: Registered Node ha-959539-m02 in Controller
	  Normal  RegisteredNode           5m32s                  node-controller  Node ha-959539-m02 event: Registered Node ha-959539-m02 in Controller
	  Normal  RegisteredNode           4m12s                  node-controller  Node ha-959539-m02 event: Registered Node ha-959539-m02 in Controller
	  Normal  NodeNotReady             117s                   node-controller  Node ha-959539-m02 status is now: NodeNotReady
	
	
	Name:               ha-959539-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-959539-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c
	                    minikube.k8s.io/name=ha-959539
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_24T00_02_28_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 00:02:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-959539-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 00:06:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 00:03:26 +0000   Tue, 24 Sep 2024 00:02:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 00:03:26 +0000   Tue, 24 Sep 2024 00:02:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 00:03:26 +0000   Tue, 24 Sep 2024 00:02:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 00:03:26 +0000   Tue, 24 Sep 2024 00:02:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.244
	  Hostname:    ha-959539-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7e393f2c1cce4055aaf3b67371deff0b
	  System UUID:                7e393f2c-1cce-4055-aaf3-b67371deff0b
	  Boot ID:                    d3fa2681-c8c7-4049-92ed-f71eeaa56616
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-w9v6l                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 etcd-ha-959539-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m18s
	  kube-system                 kindnet-g4nkw                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m20s
	  kube-system                 kube-apiserver-ha-959539-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-controller-manager-ha-959539-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-b82ch                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-scheduler-ha-959539-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-vip-ha-959539-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m16s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m20s (x8 over 4m20s)  kubelet          Node ha-959539-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m20s (x8 over 4m20s)  kubelet          Node ha-959539-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m20s (x7 over 4m20s)  kubelet          Node ha-959539-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m18s                  node-controller  Node ha-959539-m03 event: Registered Node ha-959539-m03 in Controller
	  Normal  RegisteredNode           4m17s                  node-controller  Node ha-959539-m03 event: Registered Node ha-959539-m03 in Controller
	  Normal  RegisteredNode           4m12s                  node-controller  Node ha-959539-m03 event: Registered Node ha-959539-m03 in Controller
	
	
	Name:               ha-959539-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-959539-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c
	                    minikube.k8s.io/name=ha-959539
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_24T00_03_32_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 00:03:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-959539-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 00:06:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 00:04:02 +0000   Tue, 24 Sep 2024 00:03:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 00:04:02 +0000   Tue, 24 Sep 2024 00:03:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 00:04:02 +0000   Tue, 24 Sep 2024 00:03:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 00:04:02 +0000   Tue, 24 Sep 2024 00:03:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.183
	  Hostname:    ha-959539-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 55d6e549bf6d4455bd4db681e2cc17b8
	  System UUID:                55d6e549-bf6d-4455-bd4d-b681e2cc17b8
	  Boot ID:                    0f7b628e-f628-48c1-aab1-6401b3cfb87c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-54xw8       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m14s
	  kube-system                 kube-proxy-8h8qr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m8s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  3m14s (x2 over 3m14s)  kubelet          Node ha-959539-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m14s (x2 over 3m14s)  kubelet          Node ha-959539-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m14s (x2 over 3m14s)  kubelet          Node ha-959539-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m12s                  node-controller  Node ha-959539-m04 event: Registered Node ha-959539-m04 in Controller
	  Normal  RegisteredNode           3m12s                  node-controller  Node ha-959539-m04 event: Registered Node ha-959539-m04 in Controller
	  Normal  RegisteredNode           3m12s                  node-controller  Node ha-959539-m04 event: Registered Node ha-959539-m04 in Controller
	  Normal  NodeReady                2m53s                  kubelet          Node ha-959539-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep23 23:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051430] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037836] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.729802] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.844348] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.545165] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.336873] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.055717] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062835] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.175047] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.141488] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.281309] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +3.886660] systemd-fstab-generator[750]: Ignoring "noauto" option for root device
	[Sep24 00:00] systemd-fstab-generator[888]: Ignoring "noauto" option for root device
	[  +0.061155] kauditd_printk_skb: 158 callbacks suppressed
	[  +8.064379] kauditd_printk_skb: 74 callbacks suppressed
	[  +2.136832] systemd-fstab-generator[1303]: Ignoring "noauto" option for root device
	[  +2.892614] kauditd_printk_skb: 43 callbacks suppressed
	[ +11.264409] kauditd_printk_skb: 15 callbacks suppressed
	[Sep24 00:01] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [d5459f3bc533d3928a2614c8aadaac043e36370d032cae8eb72d11664bbb74f2] <==
	{"level":"warn","ts":"2024-09-24T00:06:45.513928Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:45.521243Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:45.524933Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:45.532920Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:45.538776Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:45.546882Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:45.548089Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:45.551823Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:45.555527Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:45.562833Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:45.568984Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:45.575431Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:45.579768Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:45.583053Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:45.590094Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:45.595222Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:45.601100Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:45.604485Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:45.607553Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:45.611276Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:45.617238Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:45.622678Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:45.646509Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6a82bbfd8eee2a80","from":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-24T00:06:45.669959Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"ef1fdfe9aeaf9502","rtt":"8.649629ms","error":"dial tcp 192.168.39.71:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-09-24T00:06:45.670111Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"ef1fdfe9aeaf9502","rtt":"745.941µs","error":"dial tcp 192.168.39.71:2380: connect: no route to host"}
	
	
	==> kernel <==
	 00:06:45 up 7 min,  0 users,  load average: 0.46, 0.25, 0.11
	Linux ha-959539 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [1596300e66cf2ee0f15d5a362238ef4e99cec8e83ea81be4670347659bbd88e2] <==
	I0924 00:06:15.413714       1 main.go:322] Node ha-959539-m04 has CIDR [10.244.3.0/24] 
	I0924 00:06:25.420493       1 main.go:295] Handling node with IPs: map[192.168.39.231:{}]
	I0924 00:06:25.420595       1 main.go:299] handling current node
	I0924 00:06:25.420622       1 main.go:295] Handling node with IPs: map[192.168.39.71:{}]
	I0924 00:06:25.420640       1 main.go:322] Node ha-959539-m02 has CIDR [10.244.1.0/24] 
	I0924 00:06:25.420821       1 main.go:295] Handling node with IPs: map[192.168.39.244:{}]
	I0924 00:06:25.420897       1 main.go:322] Node ha-959539-m03 has CIDR [10.244.2.0/24] 
	I0924 00:06:25.420983       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0924 00:06:25.421005       1 main.go:322] Node ha-959539-m04 has CIDR [10.244.3.0/24] 
	I0924 00:06:35.421247       1 main.go:295] Handling node with IPs: map[192.168.39.231:{}]
	I0924 00:06:35.421291       1 main.go:299] handling current node
	I0924 00:06:35.421322       1 main.go:295] Handling node with IPs: map[192.168.39.71:{}]
	I0924 00:06:35.421373       1 main.go:322] Node ha-959539-m02 has CIDR [10.244.1.0/24] 
	I0924 00:06:35.421530       1 main.go:295] Handling node with IPs: map[192.168.39.244:{}]
	I0924 00:06:35.421553       1 main.go:322] Node ha-959539-m03 has CIDR [10.244.2.0/24] 
	I0924 00:06:35.421602       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0924 00:06:35.421608       1 main.go:322] Node ha-959539-m04 has CIDR [10.244.3.0/24] 
	I0924 00:06:45.413287       1 main.go:295] Handling node with IPs: map[192.168.39.71:{}]
	I0924 00:06:45.413317       1 main.go:322] Node ha-959539-m02 has CIDR [10.244.1.0/24] 
	I0924 00:06:45.413474       1 main.go:295] Handling node with IPs: map[192.168.39.244:{}]
	I0924 00:06:45.413493       1 main.go:322] Node ha-959539-m03 has CIDR [10.244.2.0/24] 
	I0924 00:06:45.413557       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0924 00:06:45.413576       1 main.go:322] Node ha-959539-m04 has CIDR [10.244.3.0/24] 
	I0924 00:06:45.413617       1 main.go:295] Handling node with IPs: map[192.168.39.231:{}]
	I0924 00:06:45.413624       1 main.go:299] handling current node
	
	
	==> kube-apiserver [8c911375acec93e238f1022936d6afb98f697168fca75291f15649e13def2288] <==
	I0924 00:00:07.916652       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0924 00:00:12.613775       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0924 00:00:12.673306       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0924 00:00:12.714278       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0924 00:00:13.518109       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0924 00:00:13.589977       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0924 00:02:25.922866       1 wrap.go:53] "Timeout or abort while handling" logger="UnhandledError" method="POST" URI="/api/v1/namespaces/kube-system/events" auditID="9c890d06-5a2f-40bc-b52e-84153e1ff033"
	E0924 00:02:25.923053       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="6.218µs" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E0924 00:02:25.923547       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 800.044µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0924 00:02:57.928651       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42468: use of closed network connection
	E0924 00:02:58.108585       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42478: use of closed network connection
	E0924 00:02:58.286933       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42500: use of closed network connection
	E0924 00:02:58.488672       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42526: use of closed network connection
	E0924 00:02:58.667114       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42542: use of closed network connection
	E0924 00:02:58.850942       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42560: use of closed network connection
	E0924 00:02:59.040828       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42576: use of closed network connection
	E0924 00:02:59.220980       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42590: use of closed network connection
	E0924 00:02:59.394600       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42608: use of closed network connection
	E0924 00:02:59.676143       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42636: use of closed network connection
	E0924 00:02:59.860764       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42646: use of closed network connection
	E0924 00:03:00.047956       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42676: use of closed network connection
	E0924 00:03:00.214607       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42700: use of closed network connection
	E0924 00:03:00.390729       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42708: use of closed network connection
	E0924 00:03:00.581800       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42734: use of closed network connection
	W0924 00:04:17.715664       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.231 192.168.39.244]
	
	
	==> kube-controller-manager [a42356ed739fd4c4bc65cb2d15edfb13fc395f88d73e9c25e9c7f9799ae6b974] <==
	I0924 00:03:31.919493       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-959539-m04" podCIDRs=["10.244.3.0/24"]
	I0924 00:03:31.919545       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	I0924 00:03:31.919581       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	I0924 00:03:31.939956       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	I0924 00:03:32.140223       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	I0924 00:03:32.547615       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	I0924 00:03:33.004678       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-959539-m04"
	I0924 00:03:33.023454       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	I0924 00:03:33.163542       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	I0924 00:03:33.196770       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	I0924 00:03:33.276017       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	I0924 00:03:33.293134       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	I0924 00:03:42.271059       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	I0924 00:03:52.595797       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-959539-m04"
	I0924 00:03:52.595900       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	I0924 00:03:52.614607       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	I0924 00:03:53.023412       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	I0924 00:04:02.710901       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	I0924 00:04:48.048138       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-959539-m04"
	I0924 00:04:48.048400       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m02"
	I0924 00:04:48.078576       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m02"
	I0924 00:04:48.166696       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="57.971716ms"
	I0924 00:04:48.166889       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="102.521µs"
	I0924 00:04:48.406838       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m02"
	I0924 00:04:53.246642       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m02"
	
	
	==> kube-proxy [cdf912809c47a4ae8dddad14a4f537540e25c97121c16fa62418fc1290a8b94b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0924 00:00:14.873543       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0924 00:00:14.915849       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.231"]
	E0924 00:00:14.916021       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0924 00:00:14.966031       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0924 00:00:14.966075       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0924 00:00:14.966099       1 server_linux.go:169] "Using iptables Proxier"
	I0924 00:00:14.979823       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0924 00:00:14.980813       1 server.go:483] "Version info" version="v1.31.1"
	I0924 00:00:14.980842       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 00:00:14.989078       1 config.go:199] "Starting service config controller"
	I0924 00:00:14.990228       1 config.go:105] "Starting endpoint slice config controller"
	I0924 00:00:14.990251       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0924 00:00:14.993409       1 config.go:328] "Starting node config controller"
	I0924 00:00:14.993460       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0924 00:00:14.993657       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0924 00:00:15.090975       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0924 00:00:15.094378       1 shared_informer.go:320] Caches are synced for node config
	I0924 00:00:15.094379       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [af224d12661c48914733f3b63e29d17a8fceaf62af777bb609750fb7c8dbd9fd] <==
	E0924 00:00:07.294311       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 00:00:07.525201       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0924 00:00:07.525260       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0924 00:00:10.263814       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0924 00:02:25.214912       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-g4nkw\": pod kindnet-g4nkw is already assigned to node \"ha-959539-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-g4nkw" node="ha-959539-m03"
	E0924 00:02:25.215083       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-g4nkw\": pod kindnet-g4nkw is already assigned to node \"ha-959539-m03\"" pod="kube-system/kindnet-g4nkw"
	E0924 00:02:25.219021       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-b82ch\": pod kube-proxy-b82ch is already assigned to node \"ha-959539-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-b82ch" node="ha-959539-m03"
	E0924 00:02:25.222512       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 5bf376fc-8dbe-4817-874c-506f5dc4d2e7(kube-system/kube-proxy-b82ch) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-b82ch"
	E0924 00:02:25.222635       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-b82ch\": pod kube-proxy-b82ch is already assigned to node \"ha-959539-m03\"" pod="kube-system/kube-proxy-b82ch"
	I0924 00:02:25.222722       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-b82ch" node="ha-959539-m03"
	E0924 00:02:26.361885       1 schedule_one.go:953] "Scheduler cache AssumePod failed" err="pod 32f2f545-b1a1-4f2b-8ee7-7fdb6409bc5f(kube-system/kindnet-g4nkw) is in the cache, so can't be assumed" pod="kube-system/kindnet-g4nkw"
	E0924 00:02:26.362043       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="pod 32f2f545-b1a1-4f2b-8ee7-7fdb6409bc5f(kube-system/kindnet-g4nkw) is in the cache, so can't be assumed" pod="kube-system/kindnet-g4nkw"
	I0924 00:02:26.362147       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-g4nkw" node="ha-959539-m03"
	E0924 00:02:52.586244       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-m5qhr\": pod busybox-7dff88458-m5qhr is already assigned to node \"ha-959539-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-m5qhr" node="ha-959539-m02"
	E0924 00:02:52.586487       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-m5qhr\": pod busybox-7dff88458-m5qhr is already assigned to node \"ha-959539-m02\"" pod="default/busybox-7dff88458-m5qhr"
	E0924 00:02:52.609367       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-7q7xr\": pod busybox-7dff88458-7q7xr is already assigned to node \"ha-959539\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-7q7xr" node="ha-959539"
	E0924 00:02:52.609752       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 7ee31bb5-0c3d-4d9e-9c9e-32d3411bc68a(default/busybox-7dff88458-7q7xr) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-7q7xr"
	E0924 00:02:52.609813       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-7q7xr\": pod busybox-7dff88458-7q7xr is already assigned to node \"ha-959539\"" pod="default/busybox-7dff88458-7q7xr"
	I0924 00:02:52.609856       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-7q7xr" node="ha-959539"
	E0924 00:03:31.974702       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-h87p2\": pod kube-proxy-h87p2 is already assigned to node \"ha-959539-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-h87p2" node="ha-959539-m04"
	E0924 00:03:31.975081       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 9594238c-336e-479f-8424-bf5663475f7d(kube-system/kube-proxy-h87p2) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-h87p2"
	E0924 00:03:31.975198       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-h87p2\": pod kube-proxy-h87p2 is already assigned to node \"ha-959539-m04\"" pod="kube-system/kube-proxy-h87p2"
	I0924 00:03:31.975297       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-h87p2" node="ha-959539-m04"
	E0924 00:03:32.025106       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-zfglg\": pod kindnet-zfglg is already assigned to node \"ha-959539-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-zfglg" node="ha-959539-m04"
	E0924 00:03:32.025246       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-zfglg\": pod kindnet-zfglg is already assigned to node \"ha-959539-m04\"" pod="kube-system/kindnet-zfglg"
	
	
	==> kubelet <==
	Sep 24 00:05:12 ha-959539 kubelet[1310]: E0924 00:05:12.631688    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136312631299697,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:05:12 ha-959539 kubelet[1310]: E0924 00:05:12.631721    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136312631299697,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:05:22 ha-959539 kubelet[1310]: E0924 00:05:22.633953    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136322633526599,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:05:22 ha-959539 kubelet[1310]: E0924 00:05:22.634395    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136322633526599,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:05:32 ha-959539 kubelet[1310]: E0924 00:05:32.636027    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136332635686531,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:05:32 ha-959539 kubelet[1310]: E0924 00:05:32.636067    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136332635686531,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:05:42 ha-959539 kubelet[1310]: E0924 00:05:42.638244    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136342637928063,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:05:42 ha-959539 kubelet[1310]: E0924 00:05:42.638707    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136342637928063,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:05:52 ha-959539 kubelet[1310]: E0924 00:05:52.640591    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136352640129305,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:05:52 ha-959539 kubelet[1310]: E0924 00:05:52.640630    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136352640129305,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:06:02 ha-959539 kubelet[1310]: E0924 00:06:02.642027    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136362641594633,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:06:02 ha-959539 kubelet[1310]: E0924 00:06:02.642364    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136362641594633,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:06:12 ha-959539 kubelet[1310]: E0924 00:06:12.540506    1310 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 24 00:06:12 ha-959539 kubelet[1310]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 24 00:06:12 ha-959539 kubelet[1310]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 24 00:06:12 ha-959539 kubelet[1310]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 24 00:06:12 ha-959539 kubelet[1310]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 24 00:06:12 ha-959539 kubelet[1310]: E0924 00:06:12.644146    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136372643846607,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:06:12 ha-959539 kubelet[1310]: E0924 00:06:12.644181    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136372643846607,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:06:22 ha-959539 kubelet[1310]: E0924 00:06:22.646770    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136382645975347,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:06:22 ha-959539 kubelet[1310]: E0924 00:06:22.647251    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136382645975347,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:06:32 ha-959539 kubelet[1310]: E0924 00:06:32.649495    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136392649118233,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:06:32 ha-959539 kubelet[1310]: E0924 00:06:32.649564    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136392649118233,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:06:42 ha-959539 kubelet[1310]: E0924 00:06:42.653002    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136402652156854,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:06:42 ha-959539 kubelet[1310]: E0924 00:06:42.653423    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136402652156854,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
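The recurring kubelet errors in the log above fall into two groups: the eviction manager cannot obtain image-filesystem stats from CRI-O ("missing image stats", repeated every ten seconds), and the iptables canary cannot create its chain because the ip6tables `nat' table is unavailable in the guest kernel. A minimal way to look at both from inside the node is sketched below; it assumes SSH access via the kvm2 profile used in this run and that the guest provides crictl and modprobe (assumptions about the test VM, not facts taken from this log).

	# open a shell on the primary control-plane node of this profile
	out/minikube-linux-amd64 ssh -p ha-959539

	# inside the guest: ask CRI-O directly for the image-filesystem stats the
	# kubelet eviction manager reports as missing
	sudo crictl imagefsinfo

	# inside the guest: check whether the ip6tables nat table can be used; loading
	# ip6table_nat and listing the table should either clear or reproduce the canary error
	sudo modprobe ip6table_nat
	sudo ip6tables -t nat -L -n | head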
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-959539 -n ha-959539
helpers_test.go:261: (dbg) Run:  kubectl --context ha-959539 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (6.39s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (368.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-959539 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-959539 -v=7 --alsologtostderr
E0924 00:08:38.361801   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/functional-666615/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-959539 -v=7 --alsologtostderr: exit status 82 (2m1.823737109s)

                                                
                                                
-- stdout --
	* Stopping node "ha-959539-m04"  ...
	* Stopping node "ha-959539-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0924 00:06:50.858988   31402 out.go:345] Setting OutFile to fd 1 ...
	I0924 00:06:50.859221   31402 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:06:50.859229   31402 out.go:358] Setting ErrFile to fd 2...
	I0924 00:06:50.859234   31402 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:06:50.859410   31402 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7623/.minikube/bin
	I0924 00:06:50.859644   31402 out.go:352] Setting JSON to false
	I0924 00:06:50.859730   31402 mustload.go:65] Loading cluster: ha-959539
	I0924 00:06:50.860116   31402 config.go:182] Loaded profile config "ha-959539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 00:06:50.860199   31402 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/config.json ...
	I0924 00:06:50.860421   31402 mustload.go:65] Loading cluster: ha-959539
	I0924 00:06:50.860566   31402 config.go:182] Loaded profile config "ha-959539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 00:06:50.860605   31402 stop.go:39] StopHost: ha-959539-m04
	I0924 00:06:50.860978   31402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:06:50.861014   31402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:06:50.876447   31402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46343
	I0924 00:06:50.876995   31402 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:06:50.877596   31402 main.go:141] libmachine: Using API Version  1
	I0924 00:06:50.877627   31402 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:06:50.877949   31402 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:06:50.880659   31402 out.go:177] * Stopping node "ha-959539-m04"  ...
	I0924 00:06:50.882616   31402 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0924 00:06:50.882662   31402 main.go:141] libmachine: (ha-959539-m04) Calling .DriverName
	I0924 00:06:50.882984   31402 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0924 00:06:50.883009   31402 main.go:141] libmachine: (ha-959539-m04) Calling .GetSSHHostname
	I0924 00:06:50.886324   31402 main.go:141] libmachine: (ha-959539-m04) DBG | domain ha-959539-m04 has defined MAC address 52:54:00:e9:1e:08 in network mk-ha-959539
	I0924 00:06:50.886816   31402 main.go:141] libmachine: (ha-959539-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:1e:08", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:03:15 +0000 UTC Type:0 Mac:52:54:00:e9:1e:08 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-959539-m04 Clientid:01:52:54:00:e9:1e:08}
	I0924 00:06:50.886845   31402 main.go:141] libmachine: (ha-959539-m04) DBG | domain ha-959539-m04 has defined IP address 192.168.39.183 and MAC address 52:54:00:e9:1e:08 in network mk-ha-959539
	I0924 00:06:50.886991   31402 main.go:141] libmachine: (ha-959539-m04) Calling .GetSSHPort
	I0924 00:06:50.887174   31402 main.go:141] libmachine: (ha-959539-m04) Calling .GetSSHKeyPath
	I0924 00:06:50.887359   31402 main.go:141] libmachine: (ha-959539-m04) Calling .GetSSHUsername
	I0924 00:06:50.887491   31402 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m04/id_rsa Username:docker}
	I0924 00:06:50.977298   31402 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0924 00:06:51.031234   31402 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0924 00:06:51.085289   31402 main.go:141] libmachine: Stopping "ha-959539-m04"...
	I0924 00:06:51.085329   31402 main.go:141] libmachine: (ha-959539-m04) Calling .GetState
	I0924 00:06:51.086907   31402 main.go:141] libmachine: (ha-959539-m04) Calling .Stop
	I0924 00:06:51.090740   31402 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 0/120
	I0924 00:06:52.206492   31402 main.go:141] libmachine: (ha-959539-m04) Calling .GetState
	I0924 00:06:52.208041   31402 main.go:141] libmachine: Machine "ha-959539-m04" was stopped.
	I0924 00:06:52.208069   31402 stop.go:75] duration metric: took 1.325451295s to stop
	I0924 00:06:52.208091   31402 stop.go:39] StopHost: ha-959539-m03
	I0924 00:06:52.208463   31402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:06:52.208508   31402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:06:52.223301   31402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41623
	I0924 00:06:52.223749   31402 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:06:52.224206   31402 main.go:141] libmachine: Using API Version  1
	I0924 00:06:52.224223   31402 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:06:52.224586   31402 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:06:52.226964   31402 out.go:177] * Stopping node "ha-959539-m03"  ...
	I0924 00:06:52.228427   31402 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0924 00:06:52.228456   31402 main.go:141] libmachine: (ha-959539-m03) Calling .DriverName
	I0924 00:06:52.228690   31402 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0924 00:06:52.228711   31402 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHHostname
	I0924 00:06:52.231653   31402 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:06:52.231992   31402 main.go:141] libmachine: (ha-959539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:b3:10", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:01:45 +0000 UTC Type:0 Mac:52:54:00:b3:b3:10 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-959539-m03 Clientid:01:52:54:00:b3:b3:10}
	I0924 00:06:52.232019   31402 main.go:141] libmachine: (ha-959539-m03) DBG | domain ha-959539-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:b3:b3:10 in network mk-ha-959539
	I0924 00:06:52.232181   31402 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHPort
	I0924 00:06:52.232322   31402 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHKeyPath
	I0924 00:06:52.232462   31402 main.go:141] libmachine: (ha-959539-m03) Calling .GetSSHUsername
	I0924 00:06:52.232580   31402 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m03/id_rsa Username:docker}
	I0924 00:06:52.316621   31402 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0924 00:06:52.369237   31402 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0924 00:06:52.424228   31402 main.go:141] libmachine: Stopping "ha-959539-m03"...
	I0924 00:06:52.424251   31402 main.go:141] libmachine: (ha-959539-m03) Calling .GetState
	I0924 00:06:52.425937   31402 main.go:141] libmachine: (ha-959539-m03) Calling .Stop
	I0924 00:06:52.430076   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 0/120
	I0924 00:06:53.431530   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 1/120
	I0924 00:06:54.432927   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 2/120
	I0924 00:06:55.434400   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 3/120
	I0924 00:06:56.435937   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 4/120
	I0924 00:06:57.437902   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 5/120
	I0924 00:06:58.439845   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 6/120
	I0924 00:06:59.441217   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 7/120
	I0924 00:07:00.442965   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 8/120
	I0924 00:07:01.444628   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 9/120
	I0924 00:07:02.447133   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 10/120
	I0924 00:07:03.448641   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 11/120
	I0924 00:07:04.450336   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 12/120
	I0924 00:07:05.451720   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 13/120
	I0924 00:07:06.453216   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 14/120
	I0924 00:07:07.455213   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 15/120
	I0924 00:07:08.456867   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 16/120
	I0924 00:07:09.458463   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 17/120
	I0924 00:07:10.460100   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 18/120
	I0924 00:07:11.461938   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 19/120
	I0924 00:07:12.464212   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 20/120
	I0924 00:07:13.465707   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 21/120
	I0924 00:07:14.468004   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 22/120
	I0924 00:07:15.470148   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 23/120
	I0924 00:07:16.472434   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 24/120
	I0924 00:07:17.474212   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 25/120
	I0924 00:07:18.475735   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 26/120
	I0924 00:07:19.477174   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 27/120
	I0924 00:07:20.478999   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 28/120
	I0924 00:07:21.480366   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 29/120
	I0924 00:07:22.482480   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 30/120
	I0924 00:07:23.483924   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 31/120
	I0924 00:07:24.485586   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 32/120
	I0924 00:07:25.487031   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 33/120
	I0924 00:07:26.488567   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 34/120
	I0924 00:07:27.490227   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 35/120
	I0924 00:07:28.491529   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 36/120
	I0924 00:07:29.493103   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 37/120
	I0924 00:07:30.494549   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 38/120
	I0924 00:07:31.495719   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 39/120
	I0924 00:07:32.497685   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 40/120
	I0924 00:07:33.500594   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 41/120
	I0924 00:07:34.501968   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 42/120
	I0924 00:07:35.503282   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 43/120
	I0924 00:07:36.504372   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 44/120
	I0924 00:07:37.506206   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 45/120
	I0924 00:07:38.507550   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 46/120
	I0924 00:07:39.509000   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 47/120
	I0924 00:07:40.510672   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 48/120
	I0924 00:07:41.511894   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 49/120
	I0924 00:07:42.513653   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 50/120
	I0924 00:07:43.515170   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 51/120
	I0924 00:07:44.516566   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 52/120
	I0924 00:07:45.518815   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 53/120
	I0924 00:07:46.520305   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 54/120
	I0924 00:07:47.522687   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 55/120
	I0924 00:07:48.524396   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 56/120
	I0924 00:07:49.525756   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 57/120
	I0924 00:07:50.527159   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 58/120
	I0924 00:07:51.528386   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 59/120
	I0924 00:07:52.530286   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 60/120
	I0924 00:07:53.531928   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 61/120
	I0924 00:07:54.533366   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 62/120
	I0924 00:07:55.534915   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 63/120
	I0924 00:07:56.536526   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 64/120
	I0924 00:07:57.538934   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 65/120
	I0924 00:07:58.540608   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 66/120
	I0924 00:07:59.542367   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 67/120
	I0924 00:08:00.543885   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 68/120
	I0924 00:08:01.545489   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 69/120
	I0924 00:08:02.547612   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 70/120
	I0924 00:08:03.549111   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 71/120
	I0924 00:08:04.551570   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 72/120
	I0924 00:08:05.553169   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 73/120
	I0924 00:08:06.555187   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 74/120
	I0924 00:08:07.556917   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 75/120
	I0924 00:08:08.559185   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 76/120
	I0924 00:08:09.560862   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 77/120
	I0924 00:08:10.562888   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 78/120
	I0924 00:08:11.564162   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 79/120
	I0924 00:08:12.565811   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 80/120
	I0924 00:08:13.567486   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 81/120
	I0924 00:08:14.569183   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 82/120
	I0924 00:08:15.570957   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 83/120
	I0924 00:08:16.572550   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 84/120
	I0924 00:08:17.574704   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 85/120
	I0924 00:08:18.575958   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 86/120
	I0924 00:08:19.577620   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 87/120
	I0924 00:08:20.578971   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 88/120
	I0924 00:08:21.580279   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 89/120
	I0924 00:08:22.582297   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 90/120
	I0924 00:08:23.583680   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 91/120
	I0924 00:08:24.585334   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 92/120
	I0924 00:08:25.587104   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 93/120
	I0924 00:08:26.588457   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 94/120
	I0924 00:08:27.590391   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 95/120
	I0924 00:08:28.592017   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 96/120
	I0924 00:08:29.593543   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 97/120
	I0924 00:08:30.594986   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 98/120
	I0924 00:08:31.596169   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 99/120
	I0924 00:08:32.598050   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 100/120
	I0924 00:08:33.599618   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 101/120
	I0924 00:08:34.601262   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 102/120
	I0924 00:08:35.602977   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 103/120
	I0924 00:08:36.604259   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 104/120
	I0924 00:08:37.606696   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 105/120
	I0924 00:08:38.608127   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 106/120
	I0924 00:08:39.609861   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 107/120
	I0924 00:08:40.611465   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 108/120
	I0924 00:08:41.613626   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 109/120
	I0924 00:08:42.615755   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 110/120
	I0924 00:08:43.617230   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 111/120
	I0924 00:08:44.619084   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 112/120
	I0924 00:08:45.620617   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 113/120
	I0924 00:08:46.622713   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 114/120
	I0924 00:08:47.624206   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 115/120
	I0924 00:08:48.625638   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 116/120
	I0924 00:08:49.627063   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 117/120
	I0924 00:08:50.628674   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 118/120
	I0924 00:08:51.630728   31402 main.go:141] libmachine: (ha-959539-m03) Waiting for machine to stop 119/120
	I0924 00:08:52.632205   31402 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0924 00:08:52.632259   31402 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0924 00:08:52.634069   31402 out.go:201] 
	W0924 00:08:52.635420   31402 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0924 00:08:52.635441   31402 out.go:270] * 
	* 
	W0924 00:08:52.637908   31402 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 00:08:52.639362   31402 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-959539 -v=7 --alsologtostderr" : exit status 82
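The stop failed because the kvm2 driver polled ha-959539-m03 for all 120 one-second attempts shown above without the domain ever leaving the "Running" state, which minikube surfaces as GUEST_STOP_TIMEOUT (exit status 82). A rough recovery sketch when reproducing this locally is shown below; the qemu:///system URI comes from the profile config earlier in this log, and `virsh destroy` is a hard power-off, so this only unblocks the run rather than explaining why the guest never shut down.

	# list the libvirt domains backing this profile (URI from KVMQemuURI in the config above)
	virsh -c qemu:///system list --all

	# if ha-959539-m03 is still reported as running after the timeout, force it off ...
	virsh -c qemu:///system destroy ha-959539-m03

	# ... and retry the graceful stop of the remaining nodes
	out/minikube-linux-amd64 stop -p ha-959539 -v=7 --alsologtostderr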
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-959539 --wait=true -v=7 --alsologtostderr
E0924 00:09:06.068103   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/functional-666615/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:10:43.332907   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:12:06.406040   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-959539 --wait=true -v=7 --alsologtostderr: (4m4.38283869s)
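The cert_rotation.go warnings interleaved with this step appear to come from client-go certificate-rotation watchers inside the long-lived test process: they still track client certificates of profiles deleted earlier in the run (functional-666615, addons-823099) and log an error each time the now-missing files are re-read, so they do not originate from the ha-959539 cluster under test. A quick check, using the paths printed in the warnings themselves:

	# confirm the referenced client certs were removed along with their profiles
	ls -l /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/functional-666615/client.crt \
	      /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.crt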
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-959539
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-959539 -n ha-959539
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-959539 logs -n 25: (1.77564312s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-959539 cp ha-959539-m03:/home/docker/cp-test.txt                              | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m02:/home/docker/cp-test_ha-959539-m03_ha-959539-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n                                                                 | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n ha-959539-m02 sudo cat                                          | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | /home/docker/cp-test_ha-959539-m03_ha-959539-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-959539 cp ha-959539-m03:/home/docker/cp-test.txt                              | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m04:/home/docker/cp-test_ha-959539-m03_ha-959539-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n                                                                 | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n ha-959539-m04 sudo cat                                          | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | /home/docker/cp-test_ha-959539-m03_ha-959539-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-959539 cp testdata/cp-test.txt                                                | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n                                                                 | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-959539 cp ha-959539-m04:/home/docker/cp-test.txt                              | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4152452105/001/cp-test_ha-959539-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n                                                                 | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-959539 cp ha-959539-m04:/home/docker/cp-test.txt                              | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539:/home/docker/cp-test_ha-959539-m04_ha-959539.txt                       |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n                                                                 | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n ha-959539 sudo cat                                              | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | /home/docker/cp-test_ha-959539-m04_ha-959539.txt                                 |           |         |         |                     |                     |
	| cp      | ha-959539 cp ha-959539-m04:/home/docker/cp-test.txt                              | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m02:/home/docker/cp-test_ha-959539-m04_ha-959539-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n                                                                 | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n ha-959539-m02 sudo cat                                          | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | /home/docker/cp-test_ha-959539-m04_ha-959539-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-959539 cp ha-959539-m04:/home/docker/cp-test.txt                              | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m03:/home/docker/cp-test_ha-959539-m04_ha-959539-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n                                                                 | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n ha-959539-m03 sudo cat                                          | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | /home/docker/cp-test_ha-959539-m04_ha-959539-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-959539 node stop m02 -v=7                                                     | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-959539 node start m02 -v=7                                                    | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:06 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-959539 -v=7                                                           | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:06 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-959539 -v=7                                                                | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:06 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-959539 --wait=true -v=7                                                    | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:08 UTC | 24 Sep 24 00:12 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-959539                                                                | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:12 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 00:08:52
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 00:08:52.685349   31919 out.go:345] Setting OutFile to fd 1 ...
	I0924 00:08:52.685608   31919 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:08:52.685617   31919 out.go:358] Setting ErrFile to fd 2...
	I0924 00:08:52.685621   31919 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:08:52.685791   31919 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7623/.minikube/bin
	I0924 00:08:52.686326   31919 out.go:352] Setting JSON to false
	I0924 00:08:52.687161   31919 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3077,"bootTime":1727133456,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0924 00:08:52.687248   31919 start.go:139] virtualization: kvm guest
	I0924 00:08:52.689759   31919 out.go:177] * [ha-959539] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0924 00:08:52.691290   31919 notify.go:220] Checking for updates...
	I0924 00:08:52.691333   31919 out.go:177]   - MINIKUBE_LOCATION=19696
	I0924 00:08:52.692824   31919 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 00:08:52.694166   31919 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 00:08:52.695691   31919 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7623/.minikube
	I0924 00:08:52.697039   31919 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0924 00:08:52.698385   31919 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 00:08:52.700566   31919 config.go:182] Loaded profile config "ha-959539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 00:08:52.700716   31919 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 00:08:52.701382   31919 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:08:52.701441   31919 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:08:52.716686   31919 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43661
	I0924 00:08:52.717166   31919 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:08:52.717761   31919 main.go:141] libmachine: Using API Version  1
	I0924 00:08:52.717797   31919 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:08:52.718181   31919 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:08:52.718378   31919 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0924 00:08:52.754491   31919 out.go:177] * Using the kvm2 driver based on existing profile
	I0924 00:08:52.755854   31919 start.go:297] selected driver: kvm2
	I0924 00:08:52.755871   31919 start.go:901] validating driver "kvm2" against &{Name:ha-959539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-959539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.71 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.244 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.183 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 00:08:52.756034   31919 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 00:08:52.756466   31919 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 00:08:52.756559   31919 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19696-7623/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0924 00:08:52.772555   31919 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0924 00:08:52.773297   31919 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 00:08:52.773335   31919 cni.go:84] Creating CNI manager for ""
	I0924 00:08:52.773386   31919 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0924 00:08:52.773435   31919 start.go:340] cluster config:
	{Name:ha-959539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-959539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.71 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.244 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.183 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 00:08:52.773572   31919 iso.go:125] acquiring lock: {Name:mk8a983e49e920eac32e0f79c34143c3d478115e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 00:08:52.775415   31919 out.go:177] * Starting "ha-959539" primary control-plane node in "ha-959539" cluster
	I0924 00:08:52.776455   31919 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 00:08:52.776518   31919 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0924 00:08:52.776541   31919 cache.go:56] Caching tarball of preloaded images
	I0924 00:08:52.776625   31919 preload.go:172] Found /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0924 00:08:52.776636   31919 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0924 00:08:52.776742   31919 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/config.json ...
	I0924 00:08:52.776950   31919 start.go:360] acquireMachinesLock for ha-959539: {Name:mkdd0eb053efc5a47fd01c8410d7f603dccb8d0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 00:08:52.776994   31919 start.go:364] duration metric: took 25.171µs to acquireMachinesLock for "ha-959539"
	I0924 00:08:52.777011   31919 start.go:96] Skipping create...Using existing machine configuration
	I0924 00:08:52.777018   31919 fix.go:54] fixHost starting: 
	I0924 00:08:52.777251   31919 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:08:52.777281   31919 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:08:52.792654   31919 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46071
	I0924 00:08:52.793082   31919 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:08:52.793531   31919 main.go:141] libmachine: Using API Version  1
	I0924 00:08:52.793552   31919 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:08:52.793910   31919 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:08:52.794080   31919 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0924 00:08:52.794214   31919 main.go:141] libmachine: (ha-959539) Calling .GetState
	I0924 00:08:52.796029   31919 fix.go:112] recreateIfNeeded on ha-959539: state=Running err=<nil>
	W0924 00:08:52.796065   31919 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 00:08:52.798146   31919 out.go:177] * Updating the running kvm2 "ha-959539" VM ...
	I0924 00:08:52.799424   31919 machine.go:93] provisionDockerMachine start ...
	I0924 00:08:52.799448   31919 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0924 00:08:52.799664   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0924 00:08:52.802404   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:08:52.802823   31919 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0924 00:08:52.802851   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:08:52.803000   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0924 00:08:52.803174   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0924 00:08:52.803339   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0924 00:08:52.803461   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0924 00:08:52.803607   31919 main.go:141] libmachine: Using SSH client type: native
	I0924 00:08:52.803871   31919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0924 00:08:52.803886   31919 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 00:08:52.921507   31919 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-959539
	
	I0924 00:08:52.921630   31919 main.go:141] libmachine: (ha-959539) Calling .GetMachineName
	I0924 00:08:52.921863   31919 buildroot.go:166] provisioning hostname "ha-959539"
	I0924 00:08:52.921886   31919 main.go:141] libmachine: (ha-959539) Calling .GetMachineName
	I0924 00:08:52.922111   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0924 00:08:52.925216   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:08:52.925636   31919 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0924 00:08:52.925662   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:08:52.925840   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0924 00:08:52.926027   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0924 00:08:52.926243   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0924 00:08:52.926375   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0924 00:08:52.926518   31919 main.go:141] libmachine: Using SSH client type: native
	I0924 00:08:52.926733   31919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0924 00:08:52.926751   31919 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-959539 && echo "ha-959539" | sudo tee /etc/hostname
	I0924 00:08:53.060269   31919 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-959539
	
	I0924 00:08:53.060296   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0924 00:08:53.063210   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:08:53.063659   31919 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0924 00:08:53.063682   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:08:53.063976   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0924 00:08:53.064201   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0924 00:08:53.064366   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0924 00:08:53.064561   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0924 00:08:53.064739   31919 main.go:141] libmachine: Using SSH client type: native
	I0924 00:08:53.064935   31919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0924 00:08:53.064957   31919 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-959539' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-959539/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-959539' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 00:08:53.181231   31919 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 00:08:53.181262   31919 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19696-7623/.minikube CaCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19696-7623/.minikube}
	I0924 00:08:53.181293   31919 buildroot.go:174] setting up certificates
	I0924 00:08:53.181307   31919 provision.go:84] configureAuth start
	I0924 00:08:53.181317   31919 main.go:141] libmachine: (ha-959539) Calling .GetMachineName
	I0924 00:08:53.181591   31919 main.go:141] libmachine: (ha-959539) Calling .GetIP
	I0924 00:08:53.184528   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:08:53.185054   31919 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0924 00:08:53.185086   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:08:53.185221   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0924 00:08:53.187479   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:08:53.187811   31919 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0924 00:08:53.187833   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:08:53.187961   31919 provision.go:143] copyHostCerts
	I0924 00:08:53.187986   31919 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0924 00:08:53.188017   31919 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem, removing ...
	I0924 00:08:53.188033   31919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0924 00:08:53.188104   31919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem (1082 bytes)
	I0924 00:08:53.188191   31919 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0924 00:08:53.188208   31919 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem, removing ...
	I0924 00:08:53.188215   31919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0924 00:08:53.188238   31919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem (1123 bytes)
	I0924 00:08:53.188290   31919 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0924 00:08:53.188306   31919 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem, removing ...
	I0924 00:08:53.188316   31919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0924 00:08:53.188365   31919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem (1679 bytes)
	I0924 00:08:53.188424   31919 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem org=jenkins.ha-959539 san=[127.0.0.1 192.168.39.231 ha-959539 localhost minikube]
	I0924 00:08:53.384663   31919 provision.go:177] copyRemoteCerts
	I0924 00:08:53.384727   31919 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 00:08:53.384751   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0924 00:08:53.388484   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:08:53.388870   31919 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0924 00:08:53.388890   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:08:53.389109   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0924 00:08:53.389298   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0924 00:08:53.389463   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0924 00:08:53.389575   31919 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/id_rsa Username:docker}
	I0924 00:08:53.480442   31919 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0924 00:08:53.480525   31919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0924 00:08:53.508343   31919 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0924 00:08:53.508422   31919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0924 00:08:53.533680   31919 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0924 00:08:53.533752   31919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 00:08:53.558917   31919 provision.go:87] duration metric: took 377.595737ms to configureAuth
	I0924 00:08:53.558958   31919 buildroot.go:189] setting minikube options for container-runtime
	I0924 00:08:53.559186   31919 config.go:182] Loaded profile config "ha-959539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 00:08:53.559276   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0924 00:08:53.562111   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:08:53.562598   31919 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0924 00:08:53.562629   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:08:53.562817   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0924 00:08:53.563003   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0924 00:08:53.563271   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0924 00:08:53.563453   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0924 00:08:53.563687   31919 main.go:141] libmachine: Using SSH client type: native
	I0924 00:08:53.563902   31919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0924 00:08:53.563923   31919 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 00:10:24.471182   31919 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 00:10:24.471215   31919 machine.go:96] duration metric: took 1m31.671776831s to provisionDockerMachine
	I0924 00:10:24.471229   31919 start.go:293] postStartSetup for "ha-959539" (driver="kvm2")
	I0924 00:10:24.471243   31919 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 00:10:24.471265   31919 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0924 00:10:24.471671   31919 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 00:10:24.471710   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0924 00:10:24.475344   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:10:24.475888   31919 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0924 00:10:24.475910   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:10:24.476123   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0924 00:10:24.476340   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0924 00:10:24.476551   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0924 00:10:24.476676   31919 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/id_rsa Username:docker}
	I0924 00:10:24.564808   31919 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 00:10:24.569482   31919 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 00:10:24.569516   31919 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/addons for local assets ...
	I0924 00:10:24.569585   31919 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/files for local assets ...
	I0924 00:10:24.569708   31919 filesync.go:149] local asset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> 147932.pem in /etc/ssl/certs
	I0924 00:10:24.569724   31919 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> /etc/ssl/certs/147932.pem
	I0924 00:10:24.569840   31919 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 00:10:24.580003   31919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /etc/ssl/certs/147932.pem (1708 bytes)
	I0924 00:10:24.603835   31919 start.go:296] duration metric: took 132.592845ms for postStartSetup
	I0924 00:10:24.603881   31919 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0924 00:10:24.604229   31919 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0924 00:10:24.604266   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0924 00:10:24.607159   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:10:24.607533   31919 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0924 00:10:24.607561   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:10:24.607737   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0924 00:10:24.607926   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0924 00:10:24.608048   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0924 00:10:24.608158   31919 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/id_rsa Username:docker}
	W0924 00:10:24.695818   31919 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0924 00:10:24.695842   31919 fix.go:56] duration metric: took 1m31.918823819s for fixHost
	I0924 00:10:24.695868   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0924 00:10:24.698746   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:10:24.699102   31919 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0924 00:10:24.699132   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:10:24.699378   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0924 00:10:24.699601   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0924 00:10:24.699772   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0924 00:10:24.699888   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0924 00:10:24.700043   31919 main.go:141] libmachine: Using SSH client type: native
	I0924 00:10:24.700206   31919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0924 00:10:24.700217   31919 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 00:10:24.812998   31919 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727136624.781175531
	
	I0924 00:10:24.813023   31919 fix.go:216] guest clock: 1727136624.781175531
	I0924 00:10:24.813030   31919 fix.go:229] Guest: 2024-09-24 00:10:24.781175531 +0000 UTC Remote: 2024-09-24 00:10:24.69584949 +0000 UTC m=+92.046503324 (delta=85.326041ms)
	I0924 00:10:24.813048   31919 fix.go:200] guest clock delta is within tolerance: 85.326041ms
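
	The guest-clock check above reduces to comparing the two timestamps against a fixed tolerance. A minimal Go sketch of that idea (illustrative only, not minikube's actual fix.go; the 2-second tolerance and the helper name withinTolerance are assumptions):

	package main

	import (
		"fmt"
		"time"
	)

	// withinTolerance reports whether the guest clock is close enough to the host
	// clock; the tolerance value here is assumed for illustration.
	func withinTolerance(guest, host time.Time, tolerance time.Duration) bool {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta <= tolerance
	}

	func main() {
		guest := time.Unix(1727136624, 781175531) // parsed from "date +%s.%N" on the VM
		host := time.Now()
		fmt.Println("delta within tolerance:", withinTolerance(guest, host, 2*time.Second))
	}
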
	I0924 00:10:24.813052   31919 start.go:83] releasing machines lock for "ha-959539", held for 1m32.036051957s
	I0924 00:10:24.813069   31919 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0924 00:10:24.813329   31919 main.go:141] libmachine: (ha-959539) Calling .GetIP
	I0924 00:10:24.816033   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:10:24.816431   31919 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0924 00:10:24.816460   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:10:24.816643   31919 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0924 00:10:24.817127   31919 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0924 00:10:24.817304   31919 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0924 00:10:24.817424   31919 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 00:10:24.817462   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0924 00:10:24.817485   31919 ssh_runner.go:195] Run: cat /version.json
	I0924 00:10:24.817506   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0924 00:10:24.820119   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:10:24.820492   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:10:24.820594   31919 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0924 00:10:24.820618   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:10:24.820889   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0924 00:10:24.821017   31919 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0924 00:10:24.821040   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0924 00:10:24.821041   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:10:24.821151   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0924 00:10:24.821205   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0924 00:10:24.821285   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0924 00:10:24.821287   31919 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/id_rsa Username:docker}
	I0924 00:10:24.821411   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0924 00:10:24.821538   31919 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/id_rsa Username:docker}
	I0924 00:10:24.902013   31919 ssh_runner.go:195] Run: systemctl --version
	I0924 00:10:24.943589   31919 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 00:10:25.105237   31919 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 00:10:25.112718   31919 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 00:10:25.112793   31919 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 00:10:25.122522   31919 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0924 00:10:25.122553   31919 start.go:495] detecting cgroup driver to use...
	I0924 00:10:25.122617   31919 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 00:10:25.139929   31919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 00:10:25.154794   31919 docker.go:217] disabling cri-docker service (if available) ...
	I0924 00:10:25.154865   31919 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 00:10:25.169153   31919 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 00:10:25.183526   31919 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 00:10:25.334458   31919 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 00:10:25.481882   31919 docker.go:233] disabling docker service ...
	I0924 00:10:25.481951   31919 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 00:10:25.498553   31919 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 00:10:25.513036   31919 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 00:10:25.661545   31919 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 00:10:25.811160   31919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 00:10:25.825234   31919 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 00:10:25.844750   31919 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 00:10:25.844812   31919 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:10:25.855450   31919 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 00:10:25.855507   31919 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:10:25.866282   31919 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:10:25.877559   31919 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:10:25.888508   31919 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 00:10:25.899815   31919 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:10:25.910115   31919 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:10:25.921194   31919 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:10:25.931764   31919 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 00:10:25.941307   31919 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 00:10:25.951115   31919 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 00:10:26.095684   31919 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 00:10:32.953417   31919 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.8576933s)
	I0924 00:10:32.953451   31919 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 00:10:32.953499   31919 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 00:10:32.958303   31919 start.go:563] Will wait 60s for crictl version
	I0924 00:10:32.958372   31919 ssh_runner.go:195] Run: which crictl
	I0924 00:10:32.962490   31919 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 00:10:33.001527   31919 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
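
	The "Will wait 60s for socket path /var/run/crio/crio.sock" step above is a poll-until-present loop with a deadline. A self-contained Go sketch of the same pattern (the helper name waitForSocket and the 500ms poll interval are assumptions, not minikube's implementation):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until the given path exists or the timeout elapses.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("crio socket is ready")
	}
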
	I0924 00:10:33.001626   31919 ssh_runner.go:195] Run: crio --version
	I0924 00:10:33.033714   31919 ssh_runner.go:195] Run: crio --version
	I0924 00:10:33.064319   31919 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 00:10:33.065748   31919 main.go:141] libmachine: (ha-959539) Calling .GetIP
	I0924 00:10:33.068552   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:10:33.069091   31919 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0924 00:10:33.069151   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:10:33.069468   31919 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0924 00:10:33.074592   31919 kubeadm.go:883] updating cluster {Name:ha-959539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-959539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.71 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.244 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.183 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 00:10:33.074730   31919 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 00:10:33.074768   31919 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 00:10:33.117653   31919 crio.go:514] all images are preloaded for cri-o runtime.
	I0924 00:10:33.117685   31919 crio.go:433] Images already preloaded, skipping extraction
	I0924 00:10:33.117750   31919 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 00:10:33.155847   31919 crio.go:514] all images are preloaded for cri-o runtime.
	I0924 00:10:33.155870   31919 cache_images.go:84] Images are preloaded, skipping loading
	I0924 00:10:33.155878   31919 kubeadm.go:934] updating node { 192.168.39.231 8443 v1.31.1 crio true true} ...
	I0924 00:10:33.155961   31919 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-959539 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.231
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-959539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 00:10:33.156018   31919 ssh_runner.go:195] Run: crio config
	I0924 00:10:33.200603   31919 cni.go:84] Creating CNI manager for ""
	I0924 00:10:33.200629   31919 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0924 00:10:33.200640   31919 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 00:10:33.200661   31919 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.231 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-959539 NodeName:ha-959539 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.231"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.231 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 00:10:33.200793   31919 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.231
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-959539"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.231
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.231"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
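
	Before a rendered kubeadm config like the one above is shipped to the node (it is written to /var/tmp/minikube/kubeadm.yaml.new later in this log), one way to sanity-check it is to assert that the values that matter survived templating. A hedged Go sketch, not part of minikube; the snippet list is taken from the config printed above:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// requiredSnippets are values we expect the rendered kubeadm config to carry
	// through templating (taken from the log above; adjust for your cluster).
	var requiredSnippets = []string{
		"advertiseAddress: 192.168.39.231",
		"controlPlaneEndpoint: control-plane.minikube.internal:8443",
		`podSubnet: "10.244.0.0/16"`,
		"kubernetesVersion: v1.31.1",
	}

	func main() {
		data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new") // path seen later in the log
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		for _, want := range requiredSnippets {
			if !strings.Contains(string(data), want) {
				fmt.Printf("missing expected snippet: %s\n", want)
			}
		}
	}
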
	
	I0924 00:10:33.200812   31919 kube-vip.go:115] generating kube-vip config ...
	I0924 00:10:33.200851   31919 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0924 00:10:33.211732   31919 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0924 00:10:33.211837   31919 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
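
	The kube-vip static pod above varies per cluster mainly in the VIP address, port, and interface. A trimmed-down Go text/template sketch that renders the same kind of manifest (illustrative only; this is not minikube's kube-vip.go template, and only a few fields are kept):

	package main

	import (
		"os"
		"text/template"
	)

	// A cut-down stand-in for a kube-vip static-pod template; only the fields
	// that vary per cluster are templated here.
	const kubeVipTmpl = `apiVersion: v1
	kind: Pod
	metadata:
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - name: kube-vip
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    args: ["manager"]
	    env:
	    - name: vip_interface
	      value: {{ .Interface }}
	    - name: address
	      value: "{{ .VIP }}"
	    - name: port
	      value: "{{ .Port }}"
	  hostNetwork: true
	`

	func main() {
		t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
		// Values taken from the log above; adjust for your cluster.
		_ = t.Execute(os.Stdout, struct {
			Interface, VIP string
			Port           int
		}{Interface: "eth0", VIP: "192.168.39.254", Port: 8443})
	}
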
	I0924 00:10:33.211888   31919 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 00:10:33.220808   31919 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 00:10:33.220868   31919 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0924 00:10:33.229881   31919 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0924 00:10:33.246234   31919 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 00:10:33.262833   31919 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0924 00:10:33.278877   31919 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0924 00:10:33.294861   31919 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0924 00:10:33.299412   31919 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 00:10:33.442681   31919 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 00:10:33.457412   31919 certs.go:68] Setting up /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539 for IP: 192.168.39.231
	I0924 00:10:33.457439   31919 certs.go:194] generating shared ca certs ...
	I0924 00:10:33.457458   31919 certs.go:226] acquiring lock for ca certs: {Name:mk91de42c259e96c17f08dd58c70c00821bd0735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:10:33.457624   31919 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key
	I0924 00:10:33.457672   31919 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key
	I0924 00:10:33.457681   31919 certs.go:256] generating profile certs ...
	I0924 00:10:33.457751   31919 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/client.key
	I0924 00:10:33.457776   31919 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key.9480d6e7
	I0924 00:10:33.457788   31919 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt.9480d6e7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.231 192.168.39.71 192.168.39.244 192.168.39.254]
	I0924 00:10:33.593605   31919 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt.9480d6e7 ...
	I0924 00:10:33.593633   31919 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt.9480d6e7: {Name:mk28c0d2f20c537ae5dfc7e2724bfca944ff3319 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:10:33.593793   31919 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key.9480d6e7 ...
	I0924 00:10:33.593803   31919 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key.9480d6e7: {Name:mk496e8508849330969fb494a01931fa5b69e592 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:10:33.593870   31919 certs.go:381] copying /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt.9480d6e7 -> /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt
	I0924 00:10:33.594029   31919 certs.go:385] copying /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key.9480d6e7 -> /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key
	I0924 00:10:33.594150   31919 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.key
	I0924 00:10:33.594165   31919 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0924 00:10:33.594181   31919 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0924 00:10:33.594191   31919 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0924 00:10:33.594202   31919 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0924 00:10:33.594211   31919 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0924 00:10:33.594220   31919 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0924 00:10:33.594231   31919 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0924 00:10:33.594240   31919 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0924 00:10:33.594290   31919 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem (1338 bytes)
	W0924 00:10:33.594316   31919 certs.go:480] ignoring /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793_empty.pem, impossibly tiny 0 bytes
	I0924 00:10:33.594325   31919 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem (1675 bytes)
	I0924 00:10:33.594346   31919 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem (1082 bytes)
	I0924 00:10:33.594367   31919 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem (1123 bytes)
	I0924 00:10:33.594396   31919 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem (1679 bytes)
	I0924 00:10:33.594434   31919 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem (1708 bytes)
	I0924 00:10:33.594469   31919 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0924 00:10:33.594487   31919 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem -> /usr/share/ca-certificates/14793.pem
	I0924 00:10:33.594499   31919 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> /usr/share/ca-certificates/147932.pem
	I0924 00:10:33.595063   31919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 00:10:33.620767   31919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 00:10:33.644715   31919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 00:10:33.668402   31919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0924 00:10:33.693706   31919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0924 00:10:33.717918   31919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 00:10:33.741152   31919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 00:10:33.764140   31919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0924 00:10:33.787838   31919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 00:10:33.810766   31919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem --> /usr/share/ca-certificates/14793.pem (1338 bytes)
	I0924 00:10:33.834716   31919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /usr/share/ca-certificates/147932.pem (1708 bytes)
	I0924 00:10:33.858277   31919 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 00:10:33.874289   31919 ssh_runner.go:195] Run: openssl version
	I0924 00:10:33.880431   31919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147932.pem && ln -fs /usr/share/ca-certificates/147932.pem /etc/ssl/certs/147932.pem"
	I0924 00:10:33.891818   31919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147932.pem
	I0924 00:10:33.896371   31919 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 23:55 /usr/share/ca-certificates/147932.pem
	I0924 00:10:33.896430   31919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147932.pem
	I0924 00:10:33.902136   31919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147932.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 00:10:33.911125   31919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 00:10:33.921870   31919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 00:10:33.926543   31919 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0924 00:10:33.926595   31919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 00:10:33.932079   31919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 00:10:33.942135   31919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14793.pem && ln -fs /usr/share/ca-certificates/14793.pem /etc/ssl/certs/14793.pem"
	I0924 00:10:33.953072   31919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14793.pem
	I0924 00:10:33.957843   31919 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 23:55 /usr/share/ca-certificates/14793.pem
	I0924 00:10:33.957904   31919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14793.pem
	I0924 00:10:33.964367   31919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14793.pem /etc/ssl/certs/51391683.0"
	I0924 00:10:33.974503   31919 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 00:10:33.979487   31919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 00:10:33.985486   31919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 00:10:33.991104   31919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 00:10:33.997231   31919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 00:10:34.003123   31919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 00:10:34.009050   31919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
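
	Each of the openssl x509 -checkend 86400 invocations above asks whether the certificate will still be valid 24 hours from now. The equivalent check written in Go (a sketch; the path below is one of the certs from the log):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM-encoded certificate at path expires
	// within d, mirroring what `openssl x509 -checkend` computes.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		expiring, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", expiring)
	}
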
	I0924 00:10:34.014749   31919 kubeadm.go:392] StartCluster: {Name:ha-959539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-959539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.71 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.244 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.183 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
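	The StartCluster dump above is the full cluster configuration for the HA cluster ha-959539: four KVM nodes running Kubernetes v1.31.1 on CRI-O, three of them control-plane (192.168.39.231, 192.168.39.71, 192.168.39.244), one worker (m04 at 192.168.39.183), fronted by the apiserver VIP 192.168.39.254. As an aid to reading the flattened dump, here is an illustrative Go snippet that holds only those fields; the struct and type names are mine and do not correspond to minikube's config package.

	// Illustrative only: a trimmed-down view of the fields that matter when
	// reading the StartCluster dump above. Values come from the log output;
	// the types are not minikube's actual config structs.
	package main

	import "fmt"

	type Node struct {
		Name         string
		IP           string
		Port         int
		ControlPlane bool
	}

	type ClusterConfig struct {
		Name           string
		Driver         string
		APIServerHAVIP string
		Nodes          []Node
	}

	func main() {
		cfg := ClusterConfig{
			Name:           "ha-959539",
			Driver:         "kvm2",
			APIServerHAVIP: "192.168.39.254",
			Nodes: []Node{
				{Name: "", IP: "192.168.39.231", Port: 8443, ControlPlane: true},
				{Name: "m02", IP: "192.168.39.71", Port: 8443, ControlPlane: true},
				{Name: "m03", IP: "192.168.39.244", Port: 8443, ControlPlane: true},
				{Name: "m04", IP: "192.168.39.183", Port: 0, ControlPlane: false},
			},
		}
		for _, n := range cfg.Nodes {
			role := "worker"
			if n.ControlPlane {
				role = "control-plane"
			}
			fmt.Printf("%-4s %-15s %s\n", n.Name, n.IP, role)
		}
	}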
	I0924 00:10:34.014875   31919 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 00:10:34.014919   31919 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 00:10:34.050735   31919 cri.go:89] found id: "b4946d654493630ff5fc1b26e79d378819aee7d8cc2c2b71c41e181e2c332b25"
	I0924 00:10:34.050755   31919 cri.go:89] found id: "086fc2e6e3fc0463dc06bea338d3ed77a46bbad21f29e0aea689de61a44231da"
	I0924 00:10:34.050759   31919 cri.go:89] found id: "a556fce95711333452f2b7846b2dd73b91597f96f301f1d6c58eea0c2726a46d"
	I0924 00:10:34.050762   31919 cri.go:89] found id: "1d2e00cf042e4362bbcfb0003da9c8309672413f33d340c23d7b1e058c24daaf"
	I0924 00:10:34.050764   31919 cri.go:89] found id: "05d43a4d133008f80d44c870878a45ceeb7e0e1adc3b08d1d47ce8e2edb36137"
	I0924 00:10:34.050767   31919 cri.go:89] found id: "e7a1a19a83d492cf0fda7b3bbf01fb7a12a73aac4c564ad4c2805be1c00817f0"
	I0924 00:10:34.050770   31919 cri.go:89] found id: "1596300e66cf2ee0f15d5a362238ef4e99cec8e83ea81be4670347659bbd88e2"
	I0924 00:10:34.050773   31919 cri.go:89] found id: "cdf912809c47a4ae8dddad14a4f537540e25c97121c16fa62418fc1290a8b94b"
	I0924 00:10:34.050775   31919 cri.go:89] found id: "b61587cd3ccea52e3762f607ce17d21719c646d22ac10052629a209fe6ddbf3c"
	I0924 00:10:34.050779   31919 cri.go:89] found id: "d5459f3bc533d3928a2614c8aadaac043e36370d032cae8eb72d11664bbb74f2"
	I0924 00:10:34.050782   31919 cri.go:89] found id: "af224d12661c48914733f3b63e29d17a8fceaf62af777bb609750fb7c8dbd9fd"
	I0924 00:10:34.050784   31919 cri.go:89] found id: "a42356ed739fd4c4bc65cb2d15edfb13fc395f88d73e9c25e9c7f9799ae6b974"
	I0924 00:10:34.050787   31919 cri.go:89] found id: "8c911375acec93e238f1022936d6afb98f697168fca75291f15649e13def2288"
	I0924 00:10:34.050789   31919 cri.go:89] found id: ""
	I0924 00:10:34.050829   31919 ssh_runner.go:195] Run: sudo runc list -f json
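	The `found id:` entries above are the container IDs returned by `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` on the node, which are then cross-checked against `sudo runc list -f json`. Below is a minimal local sketch of that listing step, assuming crictl is on PATH; in the report the command actually runs over SSH via ssh_runner, and this is not minikube's cri package.

	// Minimal sketch, not minikube's implementation: list kube-system container
	// IDs with crictl the way the `found id:` lines above were produced.
	// Assumes crictl is available locally and sudo is non-interactive.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func listKubeSystemContainers() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line = strings.TrimSpace(line); line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := listKubeSystemContainers()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		for _, id := range ids {
			fmt.Println("found id:", id)
		}
	}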
	
	
	==> CRI-O <==
	Sep 24 00:12:57 ha-959539 crio[3592]: time="2024-09-24 00:12:57.721969172Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e9323cbf-6040-44d8-bc14-7184050cdb02 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:12:57 ha-959539 crio[3592]: time="2024-09-24 00:12:57.722399036Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6cc25f35ee1437daa875e1eee6b0bbe29eb9283f364454c77cdc95b603e0da70,PodSandboxId:088721be9ee42fa2a8167e644eb8620809d1363baceb8a78baced6e11009a7a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727136706535621117,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b7e0f07-8db9-4473-b3d2-c245c19d655b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7027a9fd4e6666930acaf7fbd168d3d7385b4b482312d2f87034bc24870a0357,PodSandboxId:4617c864ab9fe7164ee242155206c043c12dceb9e54b49e33f272ca7bfe824e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727136685540687170,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5ad980392063f2813e3d6dca84f9d9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d758bd78eee85d39c9fb81ffa0f75b1186b776425228dd8e7762ea1c90fa9048,PodSandboxId:295fd3d0be23e739bb1383b702ffabbc8fed80d71c3cdb86237e5f2093570f85,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727136678531512871,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8862e93f261d2a6529b347e7a1404705,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f437caaea72c74e14426a3a5d2913e4c3f69650bf53bb46e41be66115f1f88a,PodSandboxId:3da9d510a2e3f65c60079eacff566cc95017e408c5c5b68936f2e102ca6c7558,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727136673877516694,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7q7xr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ee31bb5-0c3d-4d9e-9c9e-32d3411bc68a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18f201ca9f86c3ee1ef172d0a96a5ca5f4056e61e94cbc5bf7dea44a37d228f9,PodSandboxId:3163f953f6982c63ca5ab90da8f9371af8261c3f4ca376574722bf3e0706135d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727136653670028384,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea107434f2f1e621a9033fe6f5f95874,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e619702803fa3c9e3a701e11565c5924685a4d9e4fda0f81632dd0d16c99888,PodSandboxId:088721be9ee42fa2a8167e644eb8620809d1363baceb8a78baced6e11009a7a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727136640741544088,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b7e0f07-8db9-4473-b3d2-c245c19d655b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ee5e707e748130caca35ecd0ba876f633981823f6b1eafb8ba389d88783817c,PodSandboxId:cda71a093abf7d2216d63ad14b287a019fbabce932e68f02c474181d2a2ed584,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727136640675709785,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ss8lg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37bd392b-d364-4a64-8fa0-852bb245aedc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24b63f0ac92bd94c3f90ea5bc761bc7b4d3724f6ddbea71f1ad09960ca17e379,PodSandboxId:7cd33d17d04b5fc9ab59007b4dd6459ff72e219c75f9906c91e71223d96b1795,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727136640589435675,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31ee5d018ce106dd8052557f9d140def,},Annotations:map[string]string{io.kubernetes.container.hash: 12f
aacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2ffd4f65dffe539e8bba5e57b28008ef75bbfa15d4c1e995ffc6b99603efe60,PodSandboxId:bcabc506d1d8f331f686da0a835c1c5e6f1dbfcb2368d41484e1f63a47044f74,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727136640368124454,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qzklc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19af917f-9661-4577-92ed-8fc44b573c64,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:521d145c726f059e9aaa8a8f52709d240ffeeff570c12816ccd0f9fca9fac337,PodSandboxId:4617c864ab9fe7164ee242155206c043c12dceb9e54b49e33f272ca7bfe824e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727136640471080966,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5ad980392063f2813e3d6dca84f9d9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.containe
r.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a379b9ee451f6b18d5d78841a34fd308c5d6afe202d8dbc7c5e229edb0dd692a,PodSandboxId:295fd3d0be23e739bb1383b702ffabbc8fed80d71c3cdb86237e5f2093570f85,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727136640490037983,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8862e93f261d2a6529b347e7a1404705,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:851bbb971d887973d5c8ec979d0dcd4d045dc540c80f4a14667f464726050b0e,PodSandboxId:fd7203009ec583739ec7b788cc0f9f8b444326fcbe5b568d40753eaaabc37d4e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727136640339454119,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qlqss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 365f0414-b74d-42a8-be37-b0c8e03291ac,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f62ad588991622e61011c3eb3160fd878786530c3bfe7b3b5ef9ed37255c376,PodSandboxId:8b7cd131c6f6e07093475020fddc9d65d34e92d529d1f89abf0021ffd29ae883,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727136640359208308,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b2368dd5a295d09af7714df1de67bdb,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59085f22fd0a54fea73c27cb9b9b7199313dabb8a9dbdbbe69a7810536b7ffaf,PodSandboxId:7f9f3a7de5177a2db72c3e19777a0711cf5f9ac36b424081aae8abdeedf10d9b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727136635861760606,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nkbzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79bbcdf6-3ae9-4c2f-9d73-a990a069864f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae8646f943f6d158d9cb6123ee395d7f02fe8f4194ea968bf904f9d60ac4c8d1,PodSandboxId:4b5dbf2a2189385e09c02ad65761e1007bbf4b930164894bc8f1b76217964067,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727136176666162462,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7q7xr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ee31bb5-0c3d-4d9e-9c9e-32d3411bc68a,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05d43a4d133008f80d44c870878a45ceeb7e0e1adc3b08d1d47ce8e2edb36137,PodSandboxId:a91a16106518aeb7290ee145c6ebba24fbaf0ab1b928eb6005c2982202d15f69,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727136026589968172,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nkbzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79bbcdf6-3ae9-4c2f-9d73-a990a069864f,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7a1a19a83d492cf0fda7b3bbf01fb7a12a73aac4c564ad4c2805be1c00817f0,PodSandboxId:1a4ee0160fc1d9dd6258f8fde766345d31e45e3e0d6790d4d9d5bd708cbcb206,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727136026542639422,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ss8lg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37bd392b-d364-4a64-8fa0-852bb245aedc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1596300e66cf2ee0f15d5a362238ef4e99cec8e83ea81be4670347659bbd88e2,PodSandboxId:1a380d04710836380fbd07e38a88bd6c32797798fac60cedb945001fcef619bd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727136014418475831,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qlqss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 365f0414-b74d-42a8-be37-b0c8e03291ac,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdf912809c47a4ae8dddad14a4f537540e25c97121c16fa62418fc1290a8b94b,PodSandboxId:72ade1a0510455fbb68e236046efac5db7e130775d8731e968c6403583d8f266,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727136014134621208,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qzklc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19af917f-9661-4577-92ed-8fc44b573c64,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5459f3bc533d3928a2614c8aadaac043e36370d032cae8eb72d11664bbb74f2,PodSandboxId:40d143641822b8cfe35213ab0da141ef26cf5d327320371cdaf07dee367e1c67,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727136003255471651,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b2368dd5a295d09af7714df1de67bdb,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af224d12661c48914733f3b63e29d17a8fceaf62af777bb609750fb7c8dbd9fd,PodSandboxId:7328f59cdb9935ae3cc6db004e93f8c91143470c0fbb7d2f75380c3331d66ec6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1727136003245833606,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31ee5d018ce106dd8052557f9d140def,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e9323cbf-6040-44d8-bc14-7184050cdb02 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:12:57 ha-959539 crio[3592]: time="2024-09-24 00:12:57.765165356Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eef6290b-439e-4267-b59a-51e0bbf068e3 name=/runtime.v1.RuntimeService/Version
	Sep 24 00:12:57 ha-959539 crio[3592]: time="2024-09-24 00:12:57.765241886Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eef6290b-439e-4267-b59a-51e0bbf068e3 name=/runtime.v1.RuntimeService/Version
	Sep 24 00:12:57 ha-959539 crio[3592]: time="2024-09-24 00:12:57.766571664Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f6c08e70-3b73-4f57-86bf-12c75eeb3462 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:12:57 ha-959539 crio[3592]: time="2024-09-24 00:12:57.767056430Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136777767025154,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f6c08e70-3b73-4f57-86bf-12c75eeb3462 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:12:57 ha-959539 crio[3592]: time="2024-09-24 00:12:57.767475465Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4e8984cd-5444-4d2b-bed7-b5964161ca31 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:12:57 ha-959539 crio[3592]: time="2024-09-24 00:12:57.767555465Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4e8984cd-5444-4d2b-bed7-b5964161ca31 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:12:57 ha-959539 crio[3592]: time="2024-09-24 00:12:57.768031644Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6cc25f35ee1437daa875e1eee6b0bbe29eb9283f364454c77cdc95b603e0da70,PodSandboxId:088721be9ee42fa2a8167e644eb8620809d1363baceb8a78baced6e11009a7a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727136706535621117,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b7e0f07-8db9-4473-b3d2-c245c19d655b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7027a9fd4e6666930acaf7fbd168d3d7385b4b482312d2f87034bc24870a0357,PodSandboxId:4617c864ab9fe7164ee242155206c043c12dceb9e54b49e33f272ca7bfe824e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727136685540687170,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5ad980392063f2813e3d6dca84f9d9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d758bd78eee85d39c9fb81ffa0f75b1186b776425228dd8e7762ea1c90fa9048,PodSandboxId:295fd3d0be23e739bb1383b702ffabbc8fed80d71c3cdb86237e5f2093570f85,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727136678531512871,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8862e93f261d2a6529b347e7a1404705,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f437caaea72c74e14426a3a5d2913e4c3f69650bf53bb46e41be66115f1f88a,PodSandboxId:3da9d510a2e3f65c60079eacff566cc95017e408c5c5b68936f2e102ca6c7558,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727136673877516694,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7q7xr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ee31bb5-0c3d-4d9e-9c9e-32d3411bc68a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18f201ca9f86c3ee1ef172d0a96a5ca5f4056e61e94cbc5bf7dea44a37d228f9,PodSandboxId:3163f953f6982c63ca5ab90da8f9371af8261c3f4ca376574722bf3e0706135d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727136653670028384,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea107434f2f1e621a9033fe6f5f95874,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e619702803fa3c9e3a701e11565c5924685a4d9e4fda0f81632dd0d16c99888,PodSandboxId:088721be9ee42fa2a8167e644eb8620809d1363baceb8a78baced6e11009a7a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727136640741544088,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b7e0f07-8db9-4473-b3d2-c245c19d655b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ee5e707e748130caca35ecd0ba876f633981823f6b1eafb8ba389d88783817c,PodSandboxId:cda71a093abf7d2216d63ad14b287a019fbabce932e68f02c474181d2a2ed584,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727136640675709785,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ss8lg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37bd392b-d364-4a64-8fa0-852bb245aedc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24b63f0ac92bd94c3f90ea5bc761bc7b4d3724f6ddbea71f1ad09960ca17e379,PodSandboxId:7cd33d17d04b5fc9ab59007b4dd6459ff72e219c75f9906c91e71223d96b1795,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727136640589435675,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31ee5d018ce106dd8052557f9d140def,},Annotations:map[string]string{io.kubernetes.container.hash: 12f
aacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2ffd4f65dffe539e8bba5e57b28008ef75bbfa15d4c1e995ffc6b99603efe60,PodSandboxId:bcabc506d1d8f331f686da0a835c1c5e6f1dbfcb2368d41484e1f63a47044f74,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727136640368124454,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qzklc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19af917f-9661-4577-92ed-8fc44b573c64,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:521d145c726f059e9aaa8a8f52709d240ffeeff570c12816ccd0f9fca9fac337,PodSandboxId:4617c864ab9fe7164ee242155206c043c12dceb9e54b49e33f272ca7bfe824e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727136640471080966,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5ad980392063f2813e3d6dca84f9d9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.containe
r.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a379b9ee451f6b18d5d78841a34fd308c5d6afe202d8dbc7c5e229edb0dd692a,PodSandboxId:295fd3d0be23e739bb1383b702ffabbc8fed80d71c3cdb86237e5f2093570f85,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727136640490037983,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8862e93f261d2a6529b347e7a1404705,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:851bbb971d887973d5c8ec979d0dcd4d045dc540c80f4a14667f464726050b0e,PodSandboxId:fd7203009ec583739ec7b788cc0f9f8b444326fcbe5b568d40753eaaabc37d4e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727136640339454119,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qlqss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 365f0414-b74d-42a8-be37-b0c8e03291ac,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f62ad588991622e61011c3eb3160fd878786530c3bfe7b3b5ef9ed37255c376,PodSandboxId:8b7cd131c6f6e07093475020fddc9d65d34e92d529d1f89abf0021ffd29ae883,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727136640359208308,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b2368dd5a295d09af7714df1de67bdb,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59085f22fd0a54fea73c27cb9b9b7199313dabb8a9dbdbbe69a7810536b7ffaf,PodSandboxId:7f9f3a7de5177a2db72c3e19777a0711cf5f9ac36b424081aae8abdeedf10d9b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727136635861760606,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nkbzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79bbcdf6-3ae9-4c2f-9d73-a990a069864f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae8646f943f6d158d9cb6123ee395d7f02fe8f4194ea968bf904f9d60ac4c8d1,PodSandboxId:4b5dbf2a2189385e09c02ad65761e1007bbf4b930164894bc8f1b76217964067,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727136176666162462,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7q7xr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ee31bb5-0c3d-4d9e-9c9e-32d3411bc68a,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05d43a4d133008f80d44c870878a45ceeb7e0e1adc3b08d1d47ce8e2edb36137,PodSandboxId:a91a16106518aeb7290ee145c6ebba24fbaf0ab1b928eb6005c2982202d15f69,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727136026589968172,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nkbzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79bbcdf6-3ae9-4c2f-9d73-a990a069864f,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7a1a19a83d492cf0fda7b3bbf01fb7a12a73aac4c564ad4c2805be1c00817f0,PodSandboxId:1a4ee0160fc1d9dd6258f8fde766345d31e45e3e0d6790d4d9d5bd708cbcb206,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727136026542639422,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ss8lg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37bd392b-d364-4a64-8fa0-852bb245aedc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1596300e66cf2ee0f15d5a362238ef4e99cec8e83ea81be4670347659bbd88e2,PodSandboxId:1a380d04710836380fbd07e38a88bd6c32797798fac60cedb945001fcef619bd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727136014418475831,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qlqss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 365f0414-b74d-42a8-be37-b0c8e03291ac,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdf912809c47a4ae8dddad14a4f537540e25c97121c16fa62418fc1290a8b94b,PodSandboxId:72ade1a0510455fbb68e236046efac5db7e130775d8731e968c6403583d8f266,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727136014134621208,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qzklc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19af917f-9661-4577-92ed-8fc44b573c64,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5459f3bc533d3928a2614c8aadaac043e36370d032cae8eb72d11664bbb74f2,PodSandboxId:40d143641822b8cfe35213ab0da141ef26cf5d327320371cdaf07dee367e1c67,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727136003255471651,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b2368dd5a295d09af7714df1de67bdb,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af224d12661c48914733f3b63e29d17a8fceaf62af777bb609750fb7c8dbd9fd,PodSandboxId:7328f59cdb9935ae3cc6db004e93f8c91143470c0fbb7d2f75380c3331d66ec6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1727136003245833606,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31ee5d018ce106dd8052557f9d140def,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4e8984cd-5444-4d2b-bed7-b5964161ca31 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:12:57 ha-959539 crio[3592]: time="2024-09-24 00:12:57.785746138Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=748dcd35-51b4-4a3c-be91-96a3ee525323 name=/runtime.v1.RuntimeService/Status
	Sep 24 00:12:57 ha-959539 crio[3592]: time="2024-09-24 00:12:57.785830892Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=748dcd35-51b4-4a3c-be91-96a3ee525323 name=/runtime.v1.RuntimeService/Status
	Sep 24 00:12:57 ha-959539 crio[3592]: time="2024-09-24 00:12:57.810603678Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=81cadc46-5230-43fb-a4b4-24d667819e0c name=/runtime.v1.RuntimeService/Version
	Sep 24 00:12:57 ha-959539 crio[3592]: time="2024-09-24 00:12:57.810724755Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=81cadc46-5230-43fb-a4b4-24d667819e0c name=/runtime.v1.RuntimeService/Version
	Sep 24 00:12:57 ha-959539 crio[3592]: time="2024-09-24 00:12:57.811985369Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=98b636a0-ab0e-4aa7-8bb2-33c2662930b0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:12:57 ha-959539 crio[3592]: time="2024-09-24 00:12:57.812647813Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136777812625304,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=98b636a0-ab0e-4aa7-8bb2-33c2662930b0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:12:57 ha-959539 crio[3592]: time="2024-09-24 00:12:57.813163047Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8fdbd752-28f8-45cd-9f5c-d33a723ac313 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:12:57 ha-959539 crio[3592]: time="2024-09-24 00:12:57.813217941Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8fdbd752-28f8-45cd-9f5c-d33a723ac313 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:12:57 ha-959539 crio[3592]: time="2024-09-24 00:12:57.814294621Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6cc25f35ee1437daa875e1eee6b0bbe29eb9283f364454c77cdc95b603e0da70,PodSandboxId:088721be9ee42fa2a8167e644eb8620809d1363baceb8a78baced6e11009a7a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727136706535621117,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b7e0f07-8db9-4473-b3d2-c245c19d655b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7027a9fd4e6666930acaf7fbd168d3d7385b4b482312d2f87034bc24870a0357,PodSandboxId:4617c864ab9fe7164ee242155206c043c12dceb9e54b49e33f272ca7bfe824e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727136685540687170,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5ad980392063f2813e3d6dca84f9d9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d758bd78eee85d39c9fb81ffa0f75b1186b776425228dd8e7762ea1c90fa9048,PodSandboxId:295fd3d0be23e739bb1383b702ffabbc8fed80d71c3cdb86237e5f2093570f85,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727136678531512871,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8862e93f261d2a6529b347e7a1404705,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f437caaea72c74e14426a3a5d2913e4c3f69650bf53bb46e41be66115f1f88a,PodSandboxId:3da9d510a2e3f65c60079eacff566cc95017e408c5c5b68936f2e102ca6c7558,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727136673877516694,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7q7xr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ee31bb5-0c3d-4d9e-9c9e-32d3411bc68a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18f201ca9f86c3ee1ef172d0a96a5ca5f4056e61e94cbc5bf7dea44a37d228f9,PodSandboxId:3163f953f6982c63ca5ab90da8f9371af8261c3f4ca376574722bf3e0706135d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727136653670028384,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea107434f2f1e621a9033fe6f5f95874,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e619702803fa3c9e3a701e11565c5924685a4d9e4fda0f81632dd0d16c99888,PodSandboxId:088721be9ee42fa2a8167e644eb8620809d1363baceb8a78baced6e11009a7a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727136640741544088,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b7e0f07-8db9-4473-b3d2-c245c19d655b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ee5e707e748130caca35ecd0ba876f633981823f6b1eafb8ba389d88783817c,PodSandboxId:cda71a093abf7d2216d63ad14b287a019fbabce932e68f02c474181d2a2ed584,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727136640675709785,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ss8lg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37bd392b-d364-4a64-8fa0-852bb245aedc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24b63f0ac92bd94c3f90ea5bc761bc7b4d3724f6ddbea71f1ad09960ca17e379,PodSandboxId:7cd33d17d04b5fc9ab59007b4dd6459ff72e219c75f9906c91e71223d96b1795,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727136640589435675,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31ee5d018ce106dd8052557f9d140def,},Annotations:map[string]string{io.kubernetes.container.hash: 12f
aacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2ffd4f65dffe539e8bba5e57b28008ef75bbfa15d4c1e995ffc6b99603efe60,PodSandboxId:bcabc506d1d8f331f686da0a835c1c5e6f1dbfcb2368d41484e1f63a47044f74,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727136640368124454,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qzklc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19af917f-9661-4577-92ed-8fc44b573c64,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:521d145c726f059e9aaa8a8f52709d240ffeeff570c12816ccd0f9fca9fac337,PodSandboxId:4617c864ab9fe7164ee242155206c043c12dceb9e54b49e33f272ca7bfe824e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727136640471080966,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5ad980392063f2813e3d6dca84f9d9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.containe
r.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a379b9ee451f6b18d5d78841a34fd308c5d6afe202d8dbc7c5e229edb0dd692a,PodSandboxId:295fd3d0be23e739bb1383b702ffabbc8fed80d71c3cdb86237e5f2093570f85,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727136640490037983,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8862e93f261d2a6529b347e7a1404705,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:851bbb971d887973d5c8ec979d0dcd4d045dc540c80f4a14667f464726050b0e,PodSandboxId:fd7203009ec583739ec7b788cc0f9f8b444326fcbe5b568d40753eaaabc37d4e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727136640339454119,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qlqss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 365f0414-b74d-42a8-be37-b0c8e03291ac,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f62ad588991622e61011c3eb3160fd878786530c3bfe7b3b5ef9ed37255c376,PodSandboxId:8b7cd131c6f6e07093475020fddc9d65d34e92d529d1f89abf0021ffd29ae883,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727136640359208308,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b2368dd5a295d09af7714df1de67bdb,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59085f22fd0a54fea73c27cb9b9b7199313dabb8a9dbdbbe69a7810536b7ffaf,PodSandboxId:7f9f3a7de5177a2db72c3e19777a0711cf5f9ac36b424081aae8abdeedf10d9b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727136635861760606,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nkbzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79bbcdf6-3ae9-4c2f-9d73-a990a069864f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae8646f943f6d158d9cb6123ee395d7f02fe8f4194ea968bf904f9d60ac4c8d1,PodSandboxId:4b5dbf2a2189385e09c02ad65761e1007bbf4b930164894bc8f1b76217964067,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727136176666162462,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7q7xr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ee31bb5-0c3d-4d9e-9c9e-32d3411bc68a,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05d43a4d133008f80d44c870878a45ceeb7e0e1adc3b08d1d47ce8e2edb36137,PodSandboxId:a91a16106518aeb7290ee145c6ebba24fbaf0ab1b928eb6005c2982202d15f69,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727136026589968172,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nkbzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79bbcdf6-3ae9-4c2f-9d73-a990a069864f,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7a1a19a83d492cf0fda7b3bbf01fb7a12a73aac4c564ad4c2805be1c00817f0,PodSandboxId:1a4ee0160fc1d9dd6258f8fde766345d31e45e3e0d6790d4d9d5bd708cbcb206,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727136026542639422,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ss8lg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37bd392b-d364-4a64-8fa0-852bb245aedc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1596300e66cf2ee0f15d5a362238ef4e99cec8e83ea81be4670347659bbd88e2,PodSandboxId:1a380d04710836380fbd07e38a88bd6c32797798fac60cedb945001fcef619bd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727136014418475831,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qlqss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 365f0414-b74d-42a8-be37-b0c8e03291ac,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdf912809c47a4ae8dddad14a4f537540e25c97121c16fa62418fc1290a8b94b,PodSandboxId:72ade1a0510455fbb68e236046efac5db7e130775d8731e968c6403583d8f266,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727136014134621208,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qzklc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19af917f-9661-4577-92ed-8fc44b573c64,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5459f3bc533d3928a2614c8aadaac043e36370d032cae8eb72d11664bbb74f2,PodSandboxId:40d143641822b8cfe35213ab0da141ef26cf5d327320371cdaf07dee367e1c67,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727136003255471651,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b2368dd5a295d09af7714df1de67bdb,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af224d12661c48914733f3b63e29d17a8fceaf62af777bb609750fb7c8dbd9fd,PodSandboxId:7328f59cdb9935ae3cc6db004e93f8c91143470c0fbb7d2f75380c3331d66ec6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1727136003245833606,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31ee5d018ce106dd8052557f9d140def,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8fdbd752-28f8-45cd-9f5c-d33a723ac313 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:12:57 ha-959539 crio[3592]: time="2024-09-24 00:12:57.862240813Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5d38f80f-beff-48e8-a148-9c783673b3cf name=/runtime.v1.RuntimeService/Version
	Sep 24 00:12:57 ha-959539 crio[3592]: time="2024-09-24 00:12:57.862376778Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5d38f80f-beff-48e8-a148-9c783673b3cf name=/runtime.v1.RuntimeService/Version
	Sep 24 00:12:57 ha-959539 crio[3592]: time="2024-09-24 00:12:57.863543232Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=43779a81-3a83-4961-92a2-df5d27d7f20a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:12:57 ha-959539 crio[3592]: time="2024-09-24 00:12:57.864030125Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136777864004658,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=43779a81-3a83-4961-92a2-df5d27d7f20a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:12:57 ha-959539 crio[3592]: time="2024-09-24 00:12:57.864626808Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=feab7f85-268b-492e-840c-a8e1cac8fe3d name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:12:57 ha-959539 crio[3592]: time="2024-09-24 00:12:57.864701581Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=feab7f85-268b-492e-840c-a8e1cac8fe3d name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:12:57 ha-959539 crio[3592]: time="2024-09-24 00:12:57.865135907Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6cc25f35ee1437daa875e1eee6b0bbe29eb9283f364454c77cdc95b603e0da70,PodSandboxId:088721be9ee42fa2a8167e644eb8620809d1363baceb8a78baced6e11009a7a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727136706535621117,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b7e0f07-8db9-4473-b3d2-c245c19d655b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7027a9fd4e6666930acaf7fbd168d3d7385b4b482312d2f87034bc24870a0357,PodSandboxId:4617c864ab9fe7164ee242155206c043c12dceb9e54b49e33f272ca7bfe824e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727136685540687170,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5ad980392063f2813e3d6dca84f9d9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d758bd78eee85d39c9fb81ffa0f75b1186b776425228dd8e7762ea1c90fa9048,PodSandboxId:295fd3d0be23e739bb1383b702ffabbc8fed80d71c3cdb86237e5f2093570f85,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727136678531512871,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8862e93f261d2a6529b347e7a1404705,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f437caaea72c74e14426a3a5d2913e4c3f69650bf53bb46e41be66115f1f88a,PodSandboxId:3da9d510a2e3f65c60079eacff566cc95017e408c5c5b68936f2e102ca6c7558,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727136673877516694,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7q7xr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ee31bb5-0c3d-4d9e-9c9e-32d3411bc68a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18f201ca9f86c3ee1ef172d0a96a5ca5f4056e61e94cbc5bf7dea44a37d228f9,PodSandboxId:3163f953f6982c63ca5ab90da8f9371af8261c3f4ca376574722bf3e0706135d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727136653670028384,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea107434f2f1e621a9033fe6f5f95874,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e619702803fa3c9e3a701e11565c5924685a4d9e4fda0f81632dd0d16c99888,PodSandboxId:088721be9ee42fa2a8167e644eb8620809d1363baceb8a78baced6e11009a7a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727136640741544088,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b7e0f07-8db9-4473-b3d2-c245c19d655b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ee5e707e748130caca35ecd0ba876f633981823f6b1eafb8ba389d88783817c,PodSandboxId:cda71a093abf7d2216d63ad14b287a019fbabce932e68f02c474181d2a2ed584,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727136640675709785,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ss8lg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37bd392b-d364-4a64-8fa0-852bb245aedc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24b63f0ac92bd94c3f90ea5bc761bc7b4d3724f6ddbea71f1ad09960ca17e379,PodSandboxId:7cd33d17d04b5fc9ab59007b4dd6459ff72e219c75f9906c91e71223d96b1795,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727136640589435675,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31ee5d018ce106dd8052557f9d140def,},Annotations:map[string]string{io.kubernetes.container.hash: 12f
aacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2ffd4f65dffe539e8bba5e57b28008ef75bbfa15d4c1e995ffc6b99603efe60,PodSandboxId:bcabc506d1d8f331f686da0a835c1c5e6f1dbfcb2368d41484e1f63a47044f74,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727136640368124454,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qzklc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19af917f-9661-4577-92ed-8fc44b573c64,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:521d145c726f059e9aaa8a8f52709d240ffeeff570c12816ccd0f9fca9fac337,PodSandboxId:4617c864ab9fe7164ee242155206c043c12dceb9e54b49e33f272ca7bfe824e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727136640471080966,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5ad980392063f2813e3d6dca84f9d9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.containe
r.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a379b9ee451f6b18d5d78841a34fd308c5d6afe202d8dbc7c5e229edb0dd692a,PodSandboxId:295fd3d0be23e739bb1383b702ffabbc8fed80d71c3cdb86237e5f2093570f85,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727136640490037983,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8862e93f261d2a6529b347e7a1404705,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:851bbb971d887973d5c8ec979d0dcd4d045dc540c80f4a14667f464726050b0e,PodSandboxId:fd7203009ec583739ec7b788cc0f9f8b444326fcbe5b568d40753eaaabc37d4e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727136640339454119,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qlqss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 365f0414-b74d-42a8-be37-b0c8e03291ac,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f62ad588991622e61011c3eb3160fd878786530c3bfe7b3b5ef9ed37255c376,PodSandboxId:8b7cd131c6f6e07093475020fddc9d65d34e92d529d1f89abf0021ffd29ae883,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727136640359208308,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b2368dd5a295d09af7714df1de67bdb,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59085f22fd0a54fea73c27cb9b9b7199313dabb8a9dbdbbe69a7810536b7ffaf,PodSandboxId:7f9f3a7de5177a2db72c3e19777a0711cf5f9ac36b424081aae8abdeedf10d9b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727136635861760606,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nkbzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79bbcdf6-3ae9-4c2f-9d73-a990a069864f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae8646f943f6d158d9cb6123ee395d7f02fe8f4194ea968bf904f9d60ac4c8d1,PodSandboxId:4b5dbf2a2189385e09c02ad65761e1007bbf4b930164894bc8f1b76217964067,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727136176666162462,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7q7xr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ee31bb5-0c3d-4d9e-9c9e-32d3411bc68a,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05d43a4d133008f80d44c870878a45ceeb7e0e1adc3b08d1d47ce8e2edb36137,PodSandboxId:a91a16106518aeb7290ee145c6ebba24fbaf0ab1b928eb6005c2982202d15f69,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727136026589968172,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nkbzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79bbcdf6-3ae9-4c2f-9d73-a990a069864f,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7a1a19a83d492cf0fda7b3bbf01fb7a12a73aac4c564ad4c2805be1c00817f0,PodSandboxId:1a4ee0160fc1d9dd6258f8fde766345d31e45e3e0d6790d4d9d5bd708cbcb206,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727136026542639422,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ss8lg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37bd392b-d364-4a64-8fa0-852bb245aedc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1596300e66cf2ee0f15d5a362238ef4e99cec8e83ea81be4670347659bbd88e2,PodSandboxId:1a380d04710836380fbd07e38a88bd6c32797798fac60cedb945001fcef619bd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727136014418475831,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qlqss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 365f0414-b74d-42a8-be37-b0c8e03291ac,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdf912809c47a4ae8dddad14a4f537540e25c97121c16fa62418fc1290a8b94b,PodSandboxId:72ade1a0510455fbb68e236046efac5db7e130775d8731e968c6403583d8f266,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727136014134621208,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qzklc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19af917f-9661-4577-92ed-8fc44b573c64,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5459f3bc533d3928a2614c8aadaac043e36370d032cae8eb72d11664bbb74f2,PodSandboxId:40d143641822b8cfe35213ab0da141ef26cf5d327320371cdaf07dee367e1c67,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727136003255471651,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b2368dd5a295d09af7714df1de67bdb,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af224d12661c48914733f3b63e29d17a8fceaf62af777bb609750fb7c8dbd9fd,PodSandboxId:7328f59cdb9935ae3cc6db004e93f8c91143470c0fbb7d2f75380c3331d66ec6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1727136003245833606,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31ee5d018ce106dd8052557f9d140def,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=feab7f85-268b-492e-840c-a8e1cac8fe3d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	6cc25f35ee143       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   088721be9ee42       storage-provisioner
	7027a9fd4e666       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      About a minute ago   Running             kube-controller-manager   2                   4617c864ab9fe       kube-controller-manager-ha-959539
	d758bd78eee85       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      About a minute ago   Running             kube-apiserver            3                   295fd3d0be23e       kube-apiserver-ha-959539
	7f437caaea72c       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   3da9d510a2e3f       busybox-7dff88458-7q7xr
	18f201ca9f86c       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   3163f953f6982       kube-vip-ha-959539
	9e619702803fa       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       3                   088721be9ee42       storage-provisioner
	2ee5e707e7481       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      2 minutes ago        Running             coredns                   1                   cda71a093abf7       coredns-7c65d6cfc9-ss8lg
	24b63f0ac92bd       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      2 minutes ago        Running             kube-scheduler            1                   7cd33d17d04b5       kube-scheduler-ha-959539
	a379b9ee451f6       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      2 minutes ago        Exited              kube-apiserver            2                   295fd3d0be23e       kube-apiserver-ha-959539
	521d145c726f0       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      2 minutes ago        Exited              kube-controller-manager   1                   4617c864ab9fe       kube-controller-manager-ha-959539
	e2ffd4f65dffe       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      2 minutes ago        Running             kube-proxy                1                   bcabc506d1d8f       kube-proxy-qzklc
	4f62ad5889916       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      2 minutes ago        Running             etcd                      1                   8b7cd131c6f6e       etcd-ha-959539
	851bbb971d887       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      2 minutes ago        Running             kindnet-cni               1                   fd7203009ec58       kindnet-qlqss
	59085f22fd0a5       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      2 minutes ago        Running             coredns                   1                   7f9f3a7de5177       coredns-7c65d6cfc9-nkbzw
	ae8646f943f6d       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   4b5dbf2a21893       busybox-7dff88458-7q7xr
	05d43a4d13300       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      12 minutes ago       Exited              coredns                   0                   a91a16106518a       coredns-7c65d6cfc9-nkbzw
	e7a1a19a83d49       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      12 minutes ago       Exited              coredns                   0                   1a4ee0160fc1d       coredns-7c65d6cfc9-ss8lg
	1596300e66cf2       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      12 minutes ago       Exited              kindnet-cni               0                   1a380d0471083       kindnet-qlqss
	cdf912809c47a       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      12 minutes ago       Exited              kube-proxy                0                   72ade1a051045       kube-proxy-qzklc
	d5459f3bc533d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      12 minutes ago       Exited              etcd                      0                   40d143641822b       etcd-ha-959539
	af224d12661c4       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      12 minutes ago       Exited              kube-scheduler            0                   7328f59cdb993       kube-scheduler-ha-959539
	
	
	==> coredns [05d43a4d133008f80d44c870878a45ceeb7e0e1adc3b08d1d47ce8e2edb36137] <==
	[INFO] 10.244.0.4:58501 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.017716872s
	[INFO] 10.244.0.4:37973 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0002021s
	[INFO] 10.244.0.4:43904 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000156858s
	[INFO] 10.244.0.4:48352 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000163626s
	[INFO] 10.244.1.2:52896 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132298s
	[INFO] 10.244.1.2:45449 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000227639s
	[INFO] 10.244.1.2:47616 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00017286s
	[INFO] 10.244.1.2:33521 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000108761s
	[INFO] 10.244.1.2:43587 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012987s
	[INFO] 10.244.2.2:52394 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001362s
	[INFO] 10.244.2.2:43819 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000119859s
	[INFO] 10.244.2.2:35291 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000097457s
	[INFO] 10.244.2.2:56966 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000168721s
	[INFO] 10.244.0.4:52779 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102739s
	[INFO] 10.244.2.2:59382 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000262295s
	[INFO] 10.244.2.2:44447 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000133384s
	[INFO] 10.244.2.2:52951 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000170462s
	[INFO] 10.244.2.2:46956 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000215226s
	[INFO] 10.244.2.2:53703 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000108727s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1784&timeout=8m45s&timeoutSeconds=525&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1778&timeout=8m31s&timeoutSeconds=511&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=1784": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=1784": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [2ee5e707e748130caca35ecd0ba876f633981823f6b1eafb8ba389d88783817c] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:33856->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1327334944]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (24-Sep-2024 00:10:52.323) (total time: 11300ms):
	Trace[1327334944]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:33856->10.96.0.1:443: read: connection reset by peer 11299ms (00:11:03.623)
	Trace[1327334944]: [11.300024519s] [11.300024519s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:33856->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [59085f22fd0a54fea73c27cb9b9b7199313dabb8a9dbdbbe69a7810536b7ffaf] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[100585239]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (24-Sep-2024 00:10:45.160) (total time: 10001ms):
	Trace[100585239]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:10:55.161)
	Trace[100585239]: [10.001253763s] [10.001253763s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:58276->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:58276->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:58290->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:58290->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [e7a1a19a83d492cf0fda7b3bbf01fb7a12a73aac4c564ad4c2805be1c00817f0] <==
	[INFO] 10.244.0.4:43743 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000119977s
	[INFO] 10.244.1.2:32867 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000192169s
	[INFO] 10.244.1.2:43403 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000167697s
	[INFO] 10.244.1.2:57243 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000095722s
	[INFO] 10.244.1.2:48326 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000119715s
	[INFO] 10.244.2.2:49664 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000122596s
	[INFO] 10.244.2.2:40943 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000106169s
	[INFO] 10.244.0.4:36066 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121758s
	[INFO] 10.244.0.4:51023 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000156225s
	[INFO] 10.244.0.4:56715 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000125631s
	[INFO] 10.244.0.4:47944 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000103261s
	[INFO] 10.244.1.2:49407 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000148466s
	[INFO] 10.244.1.2:54979 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000116145s
	[INFO] 10.244.1.2:47442 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000097064s
	[INFO] 10.244.1.2:38143 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000188037s
	[INFO] 10.244.2.2:40107 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000086602s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1784&timeout=6m9s&timeoutSeconds=369&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1727&timeout=6m18s&timeoutSeconds=378&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1784&timeout=9m52s&timeoutSeconds=592&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-959539
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-959539
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c
	                    minikube.k8s.io/name=ha-959539
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_24T00_00_13_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 00:00:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-959539
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 00:12:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 00:11:24 +0000   Tue, 24 Sep 2024 00:00:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 00:11:24 +0000   Tue, 24 Sep 2024 00:00:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 00:11:24 +0000   Tue, 24 Sep 2024 00:00:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 00:11:24 +0000   Tue, 24 Sep 2024 00:00:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.231
	  Hostname:    ha-959539
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0a4b9ce5eed94a13bdbc682549e1dd1e
	  System UUID:                0a4b9ce5-eed9-4a13-bdbc-682549e1dd1e
	  Boot ID:                    679e0a2b-8772-4f6d-9e47-ba8190727387
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7q7xr              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-nkbzw             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 coredns-7c65d6cfc9-ss8lg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 etcd-ha-959539                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-qlqss                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-959539             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-959539    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-qzklc                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-959539             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-959539                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 93s                    kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  12m                    kubelet          Node ha-959539 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           12m                    node-controller  Node ha-959539 event: Registered Node ha-959539 in Controller
	  Normal   NodeHasSufficientPID     12m                    kubelet          Node ha-959539 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    12m                    kubelet          Node ha-959539 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 12m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeReady                12m                    kubelet          Node ha-959539 status is now: NodeReady
	  Normal   RegisteredNode           11m                    node-controller  Node ha-959539 event: Registered Node ha-959539 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-959539 event: Registered Node ha-959539 in Controller
	  Warning  ContainerGCFailed        2m46s (x2 over 3m46s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             2m35s (x3 over 3m24s)  kubelet          Node ha-959539 status is now: NodeNotReady
	  Normal   RegisteredNode           104s                   node-controller  Node ha-959539 event: Registered Node ha-959539 in Controller
	  Normal   RegisteredNode           91s                    node-controller  Node ha-959539 event: Registered Node ha-959539 in Controller
	  Normal   RegisteredNode           39s                    node-controller  Node ha-959539 event: Registered Node ha-959539 in Controller
	
	
	Name:               ha-959539-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-959539-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c
	                    minikube.k8s.io/name=ha-959539
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_24T00_01_07_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 00:01:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-959539-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 00:12:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 00:12:04 +0000   Tue, 24 Sep 2024 00:11:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 00:12:04 +0000   Tue, 24 Sep 2024 00:11:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 00:12:04 +0000   Tue, 24 Sep 2024 00:11:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 00:12:04 +0000   Tue, 24 Sep 2024 00:11:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.71
	  Hostname:    ha-959539-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0f78cfc70aad42d195f1884fe3a82e21
	  System UUID:                0f78cfc7-0aad-42d1-95f1-884fe3a82e21
	  Boot ID:                    516209c9-4720-45b9-91d2-754ed4405940
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-m5qhr                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-959539-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-cbrj7                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-959539-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-959539-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-2hlqx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-959539-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-959539-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 92s                  kube-proxy       
	  Normal  Starting                 11m                  kube-proxy       
	  Normal  Starting                 11m                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  11m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)    kubelet          Node ha-959539-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)    kubelet          Node ha-959539-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)    kubelet          Node ha-959539-m02 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           11m                  node-controller  Node ha-959539-m02 event: Registered Node ha-959539-m02 in Controller
	  Normal  RegisteredNode           11m                  node-controller  Node ha-959539-m02 event: Registered Node ha-959539-m02 in Controller
	  Normal  RegisteredNode           10m                  node-controller  Node ha-959539-m02 event: Registered Node ha-959539-m02 in Controller
	  Normal  NodeNotReady             8m10s                node-controller  Node ha-959539-m02 status is now: NodeNotReady
	  Normal  Starting                 2m3s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m2s (x8 over 2m2s)  kubelet          Node ha-959539-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m2s (x8 over 2m2s)  kubelet          Node ha-959539-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m2s (x7 over 2m2s)  kubelet          Node ha-959539-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           104s                 node-controller  Node ha-959539-m02 event: Registered Node ha-959539-m02 in Controller
	  Normal  RegisteredNode           91s                  node-controller  Node ha-959539-m02 event: Registered Node ha-959539-m02 in Controller
	  Normal  RegisteredNode           39s                  node-controller  Node ha-959539-m02 event: Registered Node ha-959539-m02 in Controller
	
	
	Name:               ha-959539-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-959539-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c
	                    minikube.k8s.io/name=ha-959539
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_24T00_02_28_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 00:02:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-959539-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 00:12:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 00:12:31 +0000   Tue, 24 Sep 2024 00:12:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 00:12:31 +0000   Tue, 24 Sep 2024 00:12:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 00:12:31 +0000   Tue, 24 Sep 2024 00:12:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 00:12:31 +0000   Tue, 24 Sep 2024 00:12:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.244
	  Hostname:    ha-959539-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7e393f2c1cce4055aaf3b67371deff0b
	  System UUID:                7e393f2c-1cce-4055-aaf3-b67371deff0b
	  Boot ID:                    34b6438d-8671-430d-bc67-e9b8bca779e2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-w9v6l                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-959539-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-g4nkw                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-ha-959539-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-959539-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-b82ch                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-959539-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-959539-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 41s                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node ha-959539-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node ha-959539-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node ha-959539-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-959539-m03 event: Registered Node ha-959539-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-959539-m03 event: Registered Node ha-959539-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-959539-m03 event: Registered Node ha-959539-m03 in Controller
	  Normal   RegisteredNode           104s               node-controller  Node ha-959539-m03 event: Registered Node ha-959539-m03 in Controller
	  Normal   RegisteredNode           91s                node-controller  Node ha-959539-m03 event: Registered Node ha-959539-m03 in Controller
	  Normal   NodeNotReady             64s                node-controller  Node ha-959539-m03 status is now: NodeNotReady
	  Normal   Starting                 58s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  58s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  57s (x3 over 58s)  kubelet          Node ha-959539-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    57s (x3 over 58s)  kubelet          Node ha-959539-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     57s (x3 over 58s)  kubelet          Node ha-959539-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 57s (x2 over 57s)  kubelet          Node ha-959539-m03 has been rebooted, boot id: 34b6438d-8671-430d-bc67-e9b8bca779e2
	  Normal   NodeReady                57s (x2 over 57s)  kubelet          Node ha-959539-m03 status is now: NodeReady
	  Normal   RegisteredNode           39s                node-controller  Node ha-959539-m03 event: Registered Node ha-959539-m03 in Controller
	
	
	Name:               ha-959539-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-959539-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c
	                    minikube.k8s.io/name=ha-959539
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_24T00_03_32_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 00:03:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-959539-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 00:12:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 00:12:49 +0000   Tue, 24 Sep 2024 00:12:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 00:12:49 +0000   Tue, 24 Sep 2024 00:12:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 00:12:49 +0000   Tue, 24 Sep 2024 00:12:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 00:12:49 +0000   Tue, 24 Sep 2024 00:12:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.183
	  Hostname:    ha-959539-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 55d6e549bf6d4455bd4db681e2cc17b8
	  System UUID:                55d6e549-bf6d-4455-bd4d-b681e2cc17b8
	  Boot ID:                    767ecddb-37eb-4cca-8b96-d9c64515391e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-54xw8       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m27s
	  kube-system                 kube-proxy-8h8qr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4s                     kube-proxy       
	  Normal   Starting                 9m21s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  9m27s (x2 over 9m27s)  kubelet          Node ha-959539-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m27s (x2 over 9m27s)  kubelet          Node ha-959539-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m27s (x2 over 9m27s)  kubelet          Node ha-959539-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  9m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           9m25s                  node-controller  Node ha-959539-m04 event: Registered Node ha-959539-m04 in Controller
	  Normal   RegisteredNode           9m25s                  node-controller  Node ha-959539-m04 event: Registered Node ha-959539-m04 in Controller
	  Normal   RegisteredNode           9m25s                  node-controller  Node ha-959539-m04 event: Registered Node ha-959539-m04 in Controller
	  Normal   NodeReady                9m6s                   kubelet          Node ha-959539-m04 status is now: NodeReady
	  Normal   RegisteredNode           104s                   node-controller  Node ha-959539-m04 event: Registered Node ha-959539-m04 in Controller
	  Normal   RegisteredNode           91s                    node-controller  Node ha-959539-m04 event: Registered Node ha-959539-m04 in Controller
	  Normal   NodeNotReady             64s                    node-controller  Node ha-959539-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           39s                    node-controller  Node ha-959539-m04 event: Registered Node ha-959539-m04 in Controller
	  Normal   Starting                 9s                     kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9s                     kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 9s                     kubelet          Node ha-959539-m04 has been rebooted, boot id: 767ecddb-37eb-4cca-8b96-d9c64515391e
	  Normal   NodeHasSufficientMemory  9s (x2 over 9s)        kubelet          Node ha-959539-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s (x2 over 9s)        kubelet          Node ha-959539-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s (x2 over 9s)        kubelet          Node ha-959539-m04 status is now: NodeHasSufficientPID
	  Normal   NodeReady                9s                     kubelet          Node ha-959539-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.055717] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062835] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.175047] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.141488] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.281309] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +3.886660] systemd-fstab-generator[750]: Ignoring "noauto" option for root device
	[Sep24 00:00] systemd-fstab-generator[888]: Ignoring "noauto" option for root device
	[  +0.061155] kauditd_printk_skb: 158 callbacks suppressed
	[  +8.064379] kauditd_printk_skb: 74 callbacks suppressed
	[  +2.136832] systemd-fstab-generator[1303]: Ignoring "noauto" option for root device
	[  +2.892614] kauditd_printk_skb: 43 callbacks suppressed
	[ +11.264409] kauditd_printk_skb: 15 callbacks suppressed
	[Sep24 00:01] kauditd_printk_skb: 26 callbacks suppressed
	[Sep24 00:07] kauditd_printk_skb: 1 callbacks suppressed
	[Sep24 00:10] systemd-fstab-generator[3515]: Ignoring "noauto" option for root device
	[  +0.152411] systemd-fstab-generator[3527]: Ignoring "noauto" option for root device
	[  +0.181272] systemd-fstab-generator[3541]: Ignoring "noauto" option for root device
	[  +0.145012] systemd-fstab-generator[3553]: Ignoring "noauto" option for root device
	[  +0.286016] systemd-fstab-generator[3581]: Ignoring "noauto" option for root device
	[  +7.343411] systemd-fstab-generator[3679]: Ignoring "noauto" option for root device
	[  +0.088960] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.481341] kauditd_printk_skb: 22 callbacks suppressed
	[ +13.213742] kauditd_printk_skb: 87 callbacks suppressed
	[Sep24 00:11] kauditd_printk_skb: 1 callbacks suppressed
	[ +16.424429] kauditd_printk_skb: 3 callbacks suppressed
	
	
	==> etcd [4f62ad588991622e61011c3eb3160fd878786530c3bfe7b3b5ef9ed37255c376] <==
	{"level":"warn","ts":"2024-09-24T00:11:55.847292Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"a30d65a0357cca60","error":"Get \"https://192.168.39.244:2380/version\": dial tcp 192.168.39.244:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-09-24T00:11:56.354801Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a30d65a0357cca60","rtt":"0s","error":"dial tcp 192.168.39.244:2380: i/o timeout"}
	{"level":"warn","ts":"2024-09-24T00:11:56.354929Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a30d65a0357cca60","rtt":"0s","error":"dial tcp 192.168.39.244:2380: i/o timeout"}
	{"level":"warn","ts":"2024-09-24T00:11:59.849122Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.244:2380/version","remote-member-id":"a30d65a0357cca60","error":"Get \"https://192.168.39.244:2380/version\": dial tcp 192.168.39.244:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-24T00:11:59.849230Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"a30d65a0357cca60","error":"Get \"https://192.168.39.244:2380/version\": dial tcp 192.168.39.244:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-24T00:12:01.355450Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a30d65a0357cca60","rtt":"0s","error":"dial tcp 192.168.39.244:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-24T00:12:01.355531Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a30d65a0357cca60","rtt":"0s","error":"dial tcp 192.168.39.244:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-24T00:12:03.851395Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.244:2380/version","remote-member-id":"a30d65a0357cca60","error":"Get \"https://192.168.39.244:2380/version\": dial tcp 192.168.39.244:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-24T00:12:03.851460Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"a30d65a0357cca60","error":"Get \"https://192.168.39.244:2380/version\": dial tcp 192.168.39.244:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-24T00:12:06.355827Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a30d65a0357cca60","rtt":"0s","error":"dial tcp 192.168.39.244:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-24T00:12:06.355904Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a30d65a0357cca60","rtt":"0s","error":"dial tcp 192.168.39.244:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-24T00:12:07.853828Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.244:2380/version","remote-member-id":"a30d65a0357cca60","error":"Get \"https://192.168.39.244:2380/version\": dial tcp 192.168.39.244:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-24T00:12:07.853894Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"a30d65a0357cca60","error":"Get \"https://192.168.39.244:2380/version\": dial tcp 192.168.39.244:2380: connect: connection refused"}
	{"level":"info","ts":"2024-09-24T00:12:10.567289Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"a30d65a0357cca60"}
	{"level":"info","ts":"2024-09-24T00:12:10.567396Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"6a82bbfd8eee2a80","remote-peer-id":"a30d65a0357cca60"}
	{"level":"info","ts":"2024-09-24T00:12:10.584398Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6a82bbfd8eee2a80","remote-peer-id":"a30d65a0357cca60"}
	{"level":"info","ts":"2024-09-24T00:12:10.586724Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"6a82bbfd8eee2a80","to":"a30d65a0357cca60","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-24T00:12:10.586781Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"6a82bbfd8eee2a80","remote-peer-id":"a30d65a0357cca60"}
	{"level":"info","ts":"2024-09-24T00:12:10.598532Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"6a82bbfd8eee2a80","to":"a30d65a0357cca60","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-24T00:12:10.598577Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"6a82bbfd8eee2a80","remote-peer-id":"a30d65a0357cca60"}
	{"level":"warn","ts":"2024-09-24T00:12:11.356872Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a30d65a0357cca60","rtt":"0s","error":"dial tcp 192.168.39.244:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-24T00:12:11.356967Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a30d65a0357cca60","rtt":"0s","error":"dial tcp 192.168.39.244:2380: connect: connection refused"}
	{"level":"info","ts":"2024-09-24T00:12:11.956294Z","caller":"traceutil/trace.go:171","msg":"trace[2032135025] transaction","detail":"{read_only:false; response_revision:2273; number_of_response:1; }","duration":"150.867941ms","start":"2024-09-24T00:12:11.805392Z","end":"2024-09-24T00:12:11.956260Z","steps":["trace[2032135025] 'process raft request'  (duration: 82.825099ms)","trace[2032135025] 'compare'  (duration: 67.801127ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-24T00:12:14.357294Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"152.191491ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/plndr-cp-lock\" ","response":"range_response_count:1 size:434"}
	{"level":"info","ts":"2024-09-24T00:12:14.357415Z","caller":"traceutil/trace.go:171","msg":"trace[521575336] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:2283; }","duration":"152.421988ms","start":"2024-09-24T00:12:14.204980Z","end":"2024-09-24T00:12:14.357402Z","steps":["trace[521575336] 'range keys from in-memory index tree'  (duration: 151.166203ms)"],"step_count":1}
	
	
	==> etcd [d5459f3bc533d3928a2614c8aadaac043e36370d032cae8eb72d11664bbb74f2] <==
	{"level":"warn","ts":"2024-09-24T00:08:53.704667Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-24T00:08:52.929656Z","time spent":"775.007777ms","remote":"127.0.0.1:42100","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":0,"response size":0,"request content":"key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" limit:10000 "}
	2024/09/24 00:08:53 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-24T00:08:53.692283Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-24T00:08:52.940632Z","time spent":"751.645446ms","remote":"127.0.0.1:42340","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":0,"response size":0,"request content":"key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" limit:10000 "}
	2024/09/24 00:08:53 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-09-24T00:08:53.771502Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"6a82bbfd8eee2a80","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-24T00:08:53.771782Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"ef1fdfe9aeaf9502"}
	{"level":"info","ts":"2024-09-24T00:08:53.771842Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"ef1fdfe9aeaf9502"}
	{"level":"info","ts":"2024-09-24T00:08:53.771922Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"ef1fdfe9aeaf9502"}
	{"level":"info","ts":"2024-09-24T00:08:53.772050Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502"}
	{"level":"info","ts":"2024-09-24T00:08:53.772157Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502"}
	{"level":"info","ts":"2024-09-24T00:08:53.772261Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502"}
	{"level":"info","ts":"2024-09-24T00:08:53.772366Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"ef1fdfe9aeaf9502"}
	{"level":"info","ts":"2024-09-24T00:08:53.772431Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"a30d65a0357cca60"}
	{"level":"info","ts":"2024-09-24T00:08:53.772474Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"a30d65a0357cca60"}
	{"level":"info","ts":"2024-09-24T00:08:53.772561Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"a30d65a0357cca60"}
	{"level":"info","ts":"2024-09-24T00:08:53.772689Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6a82bbfd8eee2a80","remote-peer-id":"a30d65a0357cca60"}
	{"level":"info","ts":"2024-09-24T00:08:53.772761Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6a82bbfd8eee2a80","remote-peer-id":"a30d65a0357cca60"}
	{"level":"info","ts":"2024-09-24T00:08:53.772825Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6a82bbfd8eee2a80","remote-peer-id":"a30d65a0357cca60"}
	{"level":"info","ts":"2024-09-24T00:08:53.772838Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"a30d65a0357cca60"}
	{"level":"info","ts":"2024-09-24T00:08:53.775688Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.231:2380"}
	{"level":"warn","ts":"2024-09-24T00:08:53.775781Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.851504369s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-09-24T00:08:53.775851Z","caller":"traceutil/trace.go:171","msg":"trace[1792382580] range","detail":"{range_begin:; range_end:; }","duration":"1.851644339s","start":"2024-09-24T00:08:51.924198Z","end":"2024-09-24T00:08:53.775842Z","steps":["trace[1792382580] 'agreement among raft nodes before linearized reading'  (duration: 1.851500901s)"],"step_count":1}
	{"level":"error","ts":"2024-09-24T00:08:53.775902Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: server stopped\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-09-24T00:08:53.775795Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.231:2380"}
	{"level":"info","ts":"2024-09-24T00:08:53.775964Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-959539","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.231:2380"],"advertise-client-urls":["https://192.168.39.231:2379"]}
	
	
	==> kernel <==
	 00:12:58 up 13 min,  0 users,  load average: 0.50, 0.54, 0.30
	Linux ha-959539 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [1596300e66cf2ee0f15d5a362238ef4e99cec8e83ea81be4670347659bbd88e2] <==
	I0924 00:08:25.413712       1 main.go:295] Handling node with IPs: map[192.168.39.231:{}]
	I0924 00:08:25.413817       1 main.go:299] handling current node
	I0924 00:08:25.413844       1 main.go:295] Handling node with IPs: map[192.168.39.71:{}]
	I0924 00:08:25.413862       1 main.go:322] Node ha-959539-m02 has CIDR [10.244.1.0/24] 
	I0924 00:08:25.414009       1 main.go:295] Handling node with IPs: map[192.168.39.244:{}]
	I0924 00:08:25.414031       1 main.go:322] Node ha-959539-m03 has CIDR [10.244.2.0/24] 
	I0924 00:08:25.414096       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0924 00:08:25.414114       1 main.go:322] Node ha-959539-m04 has CIDR [10.244.3.0/24] 
	I0924 00:08:35.413240       1 main.go:295] Handling node with IPs: map[192.168.39.231:{}]
	I0924 00:08:35.413283       1 main.go:299] handling current node
	I0924 00:08:35.413314       1 main.go:295] Handling node with IPs: map[192.168.39.71:{}]
	I0924 00:08:35.413319       1 main.go:322] Node ha-959539-m02 has CIDR [10.244.1.0/24] 
	I0924 00:08:35.413494       1 main.go:295] Handling node with IPs: map[192.168.39.244:{}]
	I0924 00:08:35.413511       1 main.go:322] Node ha-959539-m03 has CIDR [10.244.2.0/24] 
	I0924 00:08:35.413566       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0924 00:08:35.413582       1 main.go:322] Node ha-959539-m04 has CIDR [10.244.3.0/24] 
	I0924 00:08:45.417572       1 main.go:295] Handling node with IPs: map[192.168.39.231:{}]
	I0924 00:08:45.417635       1 main.go:299] handling current node
	I0924 00:08:45.417654       1 main.go:295] Handling node with IPs: map[192.168.39.71:{}]
	I0924 00:08:45.417663       1 main.go:322] Node ha-959539-m02 has CIDR [10.244.1.0/24] 
	I0924 00:08:45.417827       1 main.go:295] Handling node with IPs: map[192.168.39.244:{}]
	I0924 00:08:45.417854       1 main.go:322] Node ha-959539-m03 has CIDR [10.244.2.0/24] 
	I0924 00:08:45.417942       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0924 00:08:45.417967       1 main.go:322] Node ha-959539-m04 has CIDR [10.244.3.0/24] 
	E0924 00:08:51.930735       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: the server has asked for the client to provide credentials (get nodes)
	
	
	==> kindnet [851bbb971d887973d5c8ec979d0dcd4d045dc540c80f4a14667f464726050b0e] <==
	I0924 00:12:21.427223       1 main.go:322] Node ha-959539-m03 has CIDR [10.244.2.0/24] 
	I0924 00:12:31.431032       1 main.go:295] Handling node with IPs: map[192.168.39.231:{}]
	I0924 00:12:31.431190       1 main.go:299] handling current node
	I0924 00:12:31.431261       1 main.go:295] Handling node with IPs: map[192.168.39.71:{}]
	I0924 00:12:31.431298       1 main.go:322] Node ha-959539-m02 has CIDR [10.244.1.0/24] 
	I0924 00:12:31.431533       1 main.go:295] Handling node with IPs: map[192.168.39.244:{}]
	I0924 00:12:31.431579       1 main.go:322] Node ha-959539-m03 has CIDR [10.244.2.0/24] 
	I0924 00:12:31.431703       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0924 00:12:31.431737       1 main.go:322] Node ha-959539-m04 has CIDR [10.244.3.0/24] 
	I0924 00:12:41.425410       1 main.go:295] Handling node with IPs: map[192.168.39.231:{}]
	I0924 00:12:41.425508       1 main.go:299] handling current node
	I0924 00:12:41.425535       1 main.go:295] Handling node with IPs: map[192.168.39.71:{}]
	I0924 00:12:41.425553       1 main.go:322] Node ha-959539-m02 has CIDR [10.244.1.0/24] 
	I0924 00:12:41.425678       1 main.go:295] Handling node with IPs: map[192.168.39.244:{}]
	I0924 00:12:41.425742       1 main.go:322] Node ha-959539-m03 has CIDR [10.244.2.0/24] 
	I0924 00:12:41.425865       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0924 00:12:41.425908       1 main.go:322] Node ha-959539-m04 has CIDR [10.244.3.0/24] 
	I0924 00:12:51.425203       1 main.go:295] Handling node with IPs: map[192.168.39.71:{}]
	I0924 00:12:51.425308       1 main.go:322] Node ha-959539-m02 has CIDR [10.244.1.0/24] 
	I0924 00:12:51.425542       1 main.go:295] Handling node with IPs: map[192.168.39.244:{}]
	I0924 00:12:51.425597       1 main.go:322] Node ha-959539-m03 has CIDR [10.244.2.0/24] 
	I0924 00:12:51.425666       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0924 00:12:51.425686       1 main.go:322] Node ha-959539-m04 has CIDR [10.244.3.0/24] 
	I0924 00:12:51.425748       1 main.go:295] Handling node with IPs: map[192.168.39.231:{}]
	I0924 00:12:51.425776       1 main.go:299] handling current node
	
	
	==> kube-apiserver [a379b9ee451f6b18d5d78841a34fd308c5d6afe202d8dbc7c5e229edb0dd692a] <==
	I0924 00:10:41.000194       1 options.go:228] external host was not specified, using 192.168.39.231
	I0924 00:10:41.005156       1 server.go:142] Version: v1.31.1
	I0924 00:10:41.005221       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 00:10:41.365824       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0924 00:10:41.381961       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0924 00:10:41.386994       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0924 00:10:41.387035       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0924 00:10:41.387411       1 instance.go:232] Using reconciler: lease
	W0924 00:11:01.363900       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0924 00:11:01.364721       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0924 00:11:01.388849       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0924 00:11:01.388944       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [d758bd78eee85d39c9fb81ffa0f75b1186b776425228dd8e7762ea1c90fa9048] <==
	I0924 00:11:20.358833       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0924 00:11:20.445025       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0924 00:11:20.453059       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0924 00:11:20.453391       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0924 00:11:20.455074       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0924 00:11:20.455262       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0924 00:11:20.455289       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0924 00:11:20.455429       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0924 00:11:20.455979       1 shared_informer.go:320] Caches are synced for configmaps
	I0924 00:11:20.457868       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0924 00:11:20.457924       1 aggregator.go:171] initial CRD sync complete...
	I0924 00:11:20.457962       1 autoregister_controller.go:144] Starting autoregister controller
	I0924 00:11:20.457985       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0924 00:11:20.458006       1 cache.go:39] Caches are synced for autoregister controller
	W0924 00:11:20.476896       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.244 192.168.39.71]
	I0924 00:11:20.499589       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0924 00:11:20.502957       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0924 00:11:20.503039       1 policy_source.go:224] refreshing policies
	I0924 00:11:20.553593       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0924 00:11:20.578963       1 controller.go:615] quota admission added evaluator for: endpoints
	I0924 00:11:20.588460       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0924 00:11:20.593058       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0924 00:11:21.361020       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0924 00:11:21.808087       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.231 192.168.39.244 192.168.39.71]
	W0924 00:11:31.946753       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.231 192.168.39.71]
	
	
	==> kube-controller-manager [521d145c726f059e9aaa8a8f52709d240ffeeff570c12816ccd0f9fca9fac337] <==
	I0924 00:10:41.944197       1 serving.go:386] Generated self-signed cert in-memory
	I0924 00:10:42.460845       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0924 00:10:42.460941       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 00:10:42.462471       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0924 00:10:42.462619       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0924 00:10:42.462767       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0924 00:10:42.462824       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0924 00:11:02.465465       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.231:8443/healthz\": dial tcp 192.168.39.231:8443: connect: connection refused"
	
	
	==> kube-controller-manager [7027a9fd4e6666930acaf7fbd168d3d7385b4b482312d2f87034bc24870a0357] <==
	I0924 00:11:54.829619       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	I0924 00:11:54.858613       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m03"
	E0924 00:11:54.945919       1 daemon_controller.go:329] "Unhandled Error" err="kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kube-proxy\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"\", UID:\"b9ae9d46-7d48-45dc-86b9-48ca6a873f17\", ResourceVersion:\"1855\", Generation:1, CreationTimestamp:time.Date(2024, time.September, 24, 0, 0, 12, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"k8s-app\":\"kube-proxy\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001b00f20), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\"\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\
", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"k8s-app\":\"kube-proxy\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"kube-proxy\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource
)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc001b21640), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"xtables-lock\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001b1a798), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolum
eSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVo
lumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"lib-modules\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001b1a7b0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtua
lDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kube-proxy\", Image:\"registry.k8s.io/kube-proxy:v1.31.1\", Command:[]string{\"/usr/local/bin/kube-proxy\", \"--config=/var/lib/kube-proxy/config.conf\", \"--hostname-override=$(NODE_NAME)\"}, Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"NODE_NAME\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc001b00f60)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.Res
ourceList(nil), Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"kube-proxy\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/var/lib/kube-proxy\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"xtables-lock\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"lib-modules\", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/lib/modules\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\
"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0xc001af9980), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0xc001c64178), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string{\"kubernetes.io/os\":\"linux\"}, ServiceAccountName:\"kube-proxy\", DeprecatedServiceAccount:\"kube-proxy\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001999d80), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"\", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.Host
Alias(nil), PriorityClassName:\"system-node-critical\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc001af7c30)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001c641d0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:4, NumberMisscheduled:0, DesiredNumberScheduled:4, NumberReady:4, ObservedGeneration:1, UpdatedNumberScheduled:4, NumberAvailable:4, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfille
d on daemonsets.apps \"kube-proxy\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0924 00:11:55.034248       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="47.078131ms"
	I0924 00:11:55.034565       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="111.234µs"
	I0924 00:11:57.954682       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	I0924 00:12:00.046947       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m03"
	I0924 00:12:01.036873       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m03"
	I0924 00:12:01.058919       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m03"
	I0924 00:12:01.832091       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="43.839µs"
	I0924 00:12:02.873547       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m03"
	I0924 00:12:04.761245       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m02"
	I0924 00:12:10.123712       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	I0924 00:12:16.360210       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="55.623132ms"
	I0924 00:12:16.360641       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="141.41µs"
	I0924 00:12:19.183586       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	I0924 00:12:19.290687       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	I0924 00:12:20.193832       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="34.539071ms"
	I0924 00:12:20.194041       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="77.763µs"
	I0924 00:12:31.598646       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m03"
	I0924 00:12:49.770758       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-959539-m04"
	I0924 00:12:49.771169       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	I0924 00:12:49.789142       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	I0924 00:12:50.021564       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	
	
	==> kube-proxy [cdf912809c47a4ae8dddad14a4f537540e25c97121c16fa62418fc1290a8b94b] <==
	E0924 00:07:36.903298       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-959539&resourceVersion=1692\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0924 00:07:39.974714       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1701": dial tcp 192.168.39.254:8443: connect: no route to host
	E0924 00:07:39.974867       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1701\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0924 00:07:39.974821       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1707": dial tcp 192.168.39.254:8443: connect: no route to host
	E0924 00:07:39.974992       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1707\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0924 00:07:43.110833       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-959539&resourceVersion=1692": dial tcp 192.168.39.254:8443: connect: no route to host
	E0924 00:07:43.111072       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-959539&resourceVersion=1692\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0924 00:07:46.182711       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1701": dial tcp 192.168.39.254:8443: connect: no route to host
	E0924 00:07:46.182836       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1701\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0924 00:07:49.256714       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-959539&resourceVersion=1692": dial tcp 192.168.39.254:8443: connect: no route to host
	E0924 00:07:49.256862       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-959539&resourceVersion=1692\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0924 00:07:49.257159       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1707": dial tcp 192.168.39.254:8443: connect: no route to host
	E0924 00:07:49.257257       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1707\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0924 00:07:58.472260       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1701": dial tcp 192.168.39.254:8443: connect: no route to host
	E0924 00:07:58.472477       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1701\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0924 00:07:58.472784       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1707": dial tcp 192.168.39.254:8443: connect: no route to host
	E0924 00:07:58.473286       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1707\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0924 00:08:01.544035       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-959539&resourceVersion=1692": dial tcp 192.168.39.254:8443: connect: no route to host
	E0924 00:08:01.544239       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-959539&resourceVersion=1692\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0924 00:08:16.903787       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1707": dial tcp 192.168.39.254:8443: connect: no route to host
	E0924 00:08:16.903928       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1707\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0924 00:08:19.975971       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1701": dial tcp 192.168.39.254:8443: connect: no route to host
	E0924 00:08:19.976271       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1701\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0924 00:08:26.120189       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-959539&resourceVersion=1692": dial tcp 192.168.39.254:8443: connect: no route to host
	E0924 00:08:26.120318       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-959539&resourceVersion=1692\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-proxy [e2ffd4f65dffe539e8bba5e57b28008ef75bbfa15d4c1e995ffc6b99603efe60] <==
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0924 00:10:44.359946       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-959539\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0924 00:10:47.432092       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-959539\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0924 00:10:50.503399       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-959539\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0924 00:10:56.646844       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-959539\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0924 00:11:05.864838       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-959539\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0924 00:11:24.296845       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-959539\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0924 00:11:24.296990       1 server.go:646] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	E0924 00:11:24.297135       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0924 00:11:24.372435       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0924 00:11:24.372530       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0924 00:11:24.372577       1 server_linux.go:169] "Using iptables Proxier"
	I0924 00:11:24.375072       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0924 00:11:24.375541       1 server.go:483] "Version info" version="v1.31.1"
	I0924 00:11:24.375577       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 00:11:24.377881       1 config.go:199] "Starting service config controller"
	I0924 00:11:24.377946       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0924 00:11:24.378002       1 config.go:105] "Starting endpoint slice config controller"
	I0924 00:11:24.378018       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0924 00:11:24.379035       1 config.go:328] "Starting node config controller"
	I0924 00:11:24.379070       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0924 00:11:24.479068       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0924 00:11:24.479178       1 shared_informer.go:320] Caches are synced for node config
	I0924 00:11:24.479286       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [24b63f0ac92bd94c3f90ea5bc761bc7b4d3724f6ddbea71f1ad09960ca17e379] <==
	W0924 00:11:11.079694       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.231:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.231:8443: connect: connection refused
	E0924 00:11:11.079813       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.231:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.231:8443: connect: connection refused" logger="UnhandledError"
	W0924 00:11:11.245103       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.231:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.231:8443: connect: connection refused
	E0924 00:11:11.245196       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.231:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.231:8443: connect: connection refused" logger="UnhandledError"
	W0924 00:11:11.333216       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.231:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.231:8443: connect: connection refused
	E0924 00:11:11.333405       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.231:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.231:8443: connect: connection refused" logger="UnhandledError"
	W0924 00:11:11.922948       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.231:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.231:8443: connect: connection refused
	E0924 00:11:11.923021       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.231:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.231:8443: connect: connection refused" logger="UnhandledError"
	W0924 00:11:12.440523       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.231:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.231:8443: connect: connection refused
	E0924 00:11:12.440678       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.231:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.231:8443: connect: connection refused" logger="UnhandledError"
	W0924 00:11:12.711835       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.231:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.231:8443: connect: connection refused
	E0924 00:11:12.711883       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.231:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.231:8443: connect: connection refused" logger="UnhandledError"
	W0924 00:11:12.740977       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.231:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.231:8443: connect: connection refused
	E0924 00:11:12.741035       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.231:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.231:8443: connect: connection refused" logger="UnhandledError"
	W0924 00:11:12.750966       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.231:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.231:8443: connect: connection refused
	E0924 00:11:12.751008       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.231:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.231:8443: connect: connection refused" logger="UnhandledError"
	W0924 00:11:12.857706       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.231:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.231:8443: connect: connection refused
	E0924 00:11:12.857772       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.231:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.231:8443: connect: connection refused" logger="UnhandledError"
	W0924 00:11:17.781248       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.231:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.231:8443: connect: connection refused
	E0924 00:11:17.781396       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.231:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.231:8443: connect: connection refused" logger="UnhandledError"
	W0924 00:11:18.190692       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.231:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.231:8443: connect: connection refused
	E0924 00:11:18.190740       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.231:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.231:8443: connect: connection refused" logger="UnhandledError"
	W0924 00:11:20.374239       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0924 00:11:20.374577       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0924 00:11:21.003980       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [af224d12661c48914733f3b63e29d17a8fceaf62af777bb609750fb7c8dbd9fd] <==
	E0924 00:03:31.975081       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 9594238c-336e-479f-8424-bf5663475f7d(kube-system/kube-proxy-h87p2) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-h87p2"
	E0924 00:03:31.975198       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-h87p2\": pod kube-proxy-h87p2 is already assigned to node \"ha-959539-m04\"" pod="kube-system/kube-proxy-h87p2"
	I0924 00:03:31.975297       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-h87p2" node="ha-959539-m04"
	E0924 00:03:32.025106       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-zfglg\": pod kindnet-zfglg is already assigned to node \"ha-959539-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-zfglg" node="ha-959539-m04"
	E0924 00:03:32.025246       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-zfglg\": pod kindnet-zfglg is already assigned to node \"ha-959539-m04\"" pod="kube-system/kindnet-zfglg"
	E0924 00:08:30.075812       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0924 00:08:30.075915       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0924 00:08:43.837663       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0924 00:08:45.451114       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0924 00:08:45.757200       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0924 00:08:45.980962       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0924 00:08:47.210556       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0924 00:08:47.245778       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0924 00:08:47.491539       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0924 00:08:47.711095       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0924 00:08:48.100680       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0924 00:08:48.487712       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0924 00:08:48.702291       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0924 00:08:50.030107       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0924 00:08:51.029004       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	W0924 00:08:51.119547       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0924 00:08:51.119707       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0924 00:08:52.150727       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0924 00:08:52.150831       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	E0924 00:08:53.686941       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 24 00:11:42 ha-959539 kubelet[1310]: E0924 00:11:42.721393    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136702720891627,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:11:42 ha-959539 kubelet[1310]: E0924 00:11:42.721480    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136702720891627,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:11:46 ha-959539 kubelet[1310]: I0924 00:11:46.524860    1310 scope.go:117] "RemoveContainer" containerID="9e619702803fa3c9e3a701e11565c5924685a4d9e4fda0f81632dd0d16c99888"
	Sep 24 00:11:52 ha-959539 kubelet[1310]: E0924 00:11:52.724386    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136712723400665,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:11:52 ha-959539 kubelet[1310]: E0924 00:11:52.724483    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136712723400665,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:12:02 ha-959539 kubelet[1310]: E0924 00:12:02.727193    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136722726455409,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:12:02 ha-959539 kubelet[1310]: E0924 00:12:02.727644    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136722726455409,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:12:12 ha-959539 kubelet[1310]: E0924 00:12:12.545428    1310 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 24 00:12:12 ha-959539 kubelet[1310]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 24 00:12:12 ha-959539 kubelet[1310]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 24 00:12:12 ha-959539 kubelet[1310]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 24 00:12:12 ha-959539 kubelet[1310]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 24 00:12:12 ha-959539 kubelet[1310]: E0924 00:12:12.730733    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136732729958803,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:12:12 ha-959539 kubelet[1310]: E0924 00:12:12.730829    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136732729958803,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:12:16 ha-959539 kubelet[1310]: I0924 00:12:16.139822    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-7dff88458-7q7xr" podStartSLOduration=560.596250141 podStartE2EDuration="9m24.139785999s" podCreationTimestamp="2024-09-24 00:02:52 +0000 UTC" firstStartedPulling="2024-09-24 00:02:53.109837577 +0000 UTC m=+160.760871505" lastFinishedPulling="2024-09-24 00:02:56.653373427 +0000 UTC m=+164.304407363" observedRunningTime="2024-09-24 00:02:57.209987236 +0000 UTC m=+164.861021182" watchObservedRunningTime="2024-09-24 00:12:16.139785999 +0000 UTC m=+723.790819944"
	Sep 24 00:12:22 ha-959539 kubelet[1310]: I0924 00:12:22.523265    1310 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-vip-ha-959539" podUID="f80705df-80fe-48f0-a65c-b4e414523bdf"
	Sep 24 00:12:22 ha-959539 kubelet[1310]: I0924 00:12:22.547452    1310 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-959539"
	Sep 24 00:12:22 ha-959539 kubelet[1310]: E0924 00:12:22.733935    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136742732909360,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:12:22 ha-959539 kubelet[1310]: E0924 00:12:22.734030    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136742732909360,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:12:32 ha-959539 kubelet[1310]: E0924 00:12:32.735870    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136752735584453,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:12:32 ha-959539 kubelet[1310]: E0924 00:12:32.735907    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136752735584453,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:12:42 ha-959539 kubelet[1310]: E0924 00:12:42.737919    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136762737433398,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:12:42 ha-959539 kubelet[1310]: E0924 00:12:42.738008    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136762737433398,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:12:52 ha-959539 kubelet[1310]: E0924 00:12:52.740503    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136772739859447,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:12:52 ha-959539 kubelet[1310]: E0924 00:12:52.740563    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136772739859447,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0924 00:12:57.433445   33216 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19696-7623/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
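For context on the "bufio.Scanner: token too long" message in the stderr block above: Go's bufio.Scanner rejects any single line longer than its default 64 KiB token limit, and log lines like the wrapped DaemonSet dump earlier in this output easily exceed that. The sketch below is a minimal illustration of the standard-library workaround (enlarging the scanner's buffer); it is not minikube's actual logs.go code, and the file path is a placeholder.

	// Minimal sketch, assuming a log file whose individual lines can exceed
	// bufio.Scanner's default 64 KiB limit (the cause of "token too long").
	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		f, err := os.Open("lastStart.txt") // placeholder path for illustration
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()

		scanner := bufio.NewScanner(f)
		// Raise the per-line limit from the default 64 KiB to 1 MiB so very
		// long lines (e.g. dumped Go structs) no longer trip bufio.ErrTooLong.
		scanner.Buffer(make([]byte, 0, 64*1024), 1024*1024)

		for scanner.Scan() {
			fmt.Println(scanner.Text())
		}
		if err := scanner.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}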
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-959539 -n ha-959539
helpers_test.go:261: (dbg) Run:  kubectl --context ha-959539 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (368.76s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (141.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 stop -v=7 --alsologtostderr
E0924 00:13:38.361896   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/functional-666615/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-959539 stop -v=7 --alsologtostderr: exit status 82 (2m0.49595686s)

                                                
                                                
-- stdout --
	* Stopping node "ha-959539-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0924 00:13:16.714444   33643 out.go:345] Setting OutFile to fd 1 ...
	I0924 00:13:16.714544   33643 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:13:16.714553   33643 out.go:358] Setting ErrFile to fd 2...
	I0924 00:13:16.714557   33643 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:13:16.714720   33643 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7623/.minikube/bin
	I0924 00:13:16.714956   33643 out.go:352] Setting JSON to false
	I0924 00:13:16.715032   33643 mustload.go:65] Loading cluster: ha-959539
	I0924 00:13:16.715425   33643 config.go:182] Loaded profile config "ha-959539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 00:13:16.715504   33643 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/config.json ...
	I0924 00:13:16.715682   33643 mustload.go:65] Loading cluster: ha-959539
	I0924 00:13:16.715802   33643 config.go:182] Loaded profile config "ha-959539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 00:13:16.715829   33643 stop.go:39] StopHost: ha-959539-m04
	I0924 00:13:16.716195   33643 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:13:16.716232   33643 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:13:16.731805   33643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45785
	I0924 00:13:16.732427   33643 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:13:16.733106   33643 main.go:141] libmachine: Using API Version  1
	I0924 00:13:16.733133   33643 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:13:16.733501   33643 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:13:16.736089   33643 out.go:177] * Stopping node "ha-959539-m04"  ...
	I0924 00:13:16.737448   33643 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0924 00:13:16.737482   33643 main.go:141] libmachine: (ha-959539-m04) Calling .DriverName
	I0924 00:13:16.737748   33643 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0924 00:13:16.737783   33643 main.go:141] libmachine: (ha-959539-m04) Calling .GetSSHHostname
	I0924 00:13:16.740829   33643 main.go:141] libmachine: (ha-959539-m04) DBG | domain ha-959539-m04 has defined MAC address 52:54:00:e9:1e:08 in network mk-ha-959539
	I0924 00:13:16.741335   33643 main.go:141] libmachine: (ha-959539-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:1e:08", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 01:12:44 +0000 UTC Type:0 Mac:52:54:00:e9:1e:08 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-959539-m04 Clientid:01:52:54:00:e9:1e:08}
	I0924 00:13:16.741369   33643 main.go:141] libmachine: (ha-959539-m04) DBG | domain ha-959539-m04 has defined IP address 192.168.39.183 and MAC address 52:54:00:e9:1e:08 in network mk-ha-959539
	I0924 00:13:16.741465   33643 main.go:141] libmachine: (ha-959539-m04) Calling .GetSSHPort
	I0924 00:13:16.741679   33643 main.go:141] libmachine: (ha-959539-m04) Calling .GetSSHKeyPath
	I0924 00:13:16.741814   33643 main.go:141] libmachine: (ha-959539-m04) Calling .GetSSHUsername
	I0924 00:13:16.741920   33643 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539-m04/id_rsa Username:docker}
	I0924 00:13:16.826725   33643 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0924 00:13:16.879775   33643 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0924 00:13:16.934591   33643 main.go:141] libmachine: Stopping "ha-959539-m04"...
	I0924 00:13:16.934628   33643 main.go:141] libmachine: (ha-959539-m04) Calling .GetState
	I0924 00:13:16.936530   33643 main.go:141] libmachine: (ha-959539-m04) Calling .Stop
	I0924 00:13:16.940324   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 0/120
	I0924 00:13:17.941869   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 1/120
	I0924 00:13:18.944189   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 2/120
	I0924 00:13:19.945599   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 3/120
	I0924 00:13:20.947718   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 4/120
	I0924 00:13:21.949806   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 5/120
	I0924 00:13:22.951764   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 6/120
	I0924 00:13:23.953661   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 7/120
	I0924 00:13:24.955456   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 8/120
	I0924 00:13:25.956900   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 9/120
	I0924 00:13:26.959246   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 10/120
	I0924 00:13:27.960769   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 11/120
	I0924 00:13:28.963058   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 12/120
	I0924 00:13:29.964743   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 13/120
	I0924 00:13:30.967028   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 14/120
	I0924 00:13:31.969112   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 15/120
	I0924 00:13:32.970552   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 16/120
	I0924 00:13:33.972174   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 17/120
	I0924 00:13:34.973700   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 18/120
	I0924 00:13:35.975373   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 19/120
	I0924 00:13:36.977809   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 20/120
	I0924 00:13:37.979154   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 21/120
	I0924 00:13:38.980834   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 22/120
	I0924 00:13:39.982404   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 23/120
	I0924 00:13:40.984011   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 24/120
	I0924 00:13:41.986012   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 25/120
	I0924 00:13:42.987716   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 26/120
	I0924 00:13:43.989269   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 27/120
	I0924 00:13:44.990993   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 28/120
	I0924 00:13:45.992531   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 29/120
	I0924 00:13:46.994440   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 30/120
	I0924 00:13:47.995972   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 31/120
	I0924 00:13:48.997361   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 32/120
	I0924 00:13:49.998950   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 33/120
	I0924 00:13:51.000574   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 34/120
	I0924 00:13:52.003119   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 35/120
	I0924 00:13:53.004749   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 36/120
	I0924 00:13:54.007241   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 37/120
	I0924 00:13:55.009273   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 38/120
	I0924 00:13:56.010986   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 39/120
	I0924 00:13:57.013268   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 40/120
	I0924 00:13:58.014982   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 41/120
	I0924 00:13:59.016738   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 42/120
	I0924 00:14:00.019232   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 43/120
	I0924 00:14:01.020631   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 44/120
	I0924 00:14:02.022692   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 45/120
	I0924 00:14:03.024178   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 46/120
	I0924 00:14:04.025918   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 47/120
	I0924 00:14:05.027354   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 48/120
	I0924 00:14:06.028892   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 49/120
	I0924 00:14:07.030872   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 50/120
	I0924 00:14:08.032551   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 51/120
	I0924 00:14:09.035209   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 52/120
	I0924 00:14:10.036598   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 53/120
	I0924 00:14:11.039034   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 54/120
	I0924 00:14:12.041017   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 55/120
	I0924 00:14:13.042719   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 56/120
	I0924 00:14:14.043752   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 57/120
	I0924 00:14:15.045940   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 58/120
	I0924 00:14:16.047179   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 59/120
	I0924 00:14:17.049101   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 60/120
	I0924 00:14:18.050489   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 61/120
	I0924 00:14:19.052137   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 62/120
	I0924 00:14:20.053650   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 63/120
	I0924 00:14:21.055845   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 64/120
	I0924 00:14:22.057752   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 65/120
	I0924 00:14:23.059371   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 66/120
	I0924 00:14:24.060865   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 67/120
	I0924 00:14:25.062856   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 68/120
	I0924 00:14:26.064651   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 69/120
	I0924 00:14:27.067046   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 70/120
	I0924 00:14:28.068792   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 71/120
	I0924 00:14:29.071178   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 72/120
	I0924 00:14:30.072443   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 73/120
	I0924 00:14:31.073678   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 74/120
	I0924 00:14:32.076003   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 75/120
	I0924 00:14:33.078195   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 76/120
	I0924 00:14:34.079734   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 77/120
	I0924 00:14:35.081265   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 78/120
	I0924 00:14:36.083002   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 79/120
	I0924 00:14:37.085102   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 80/120
	I0924 00:14:38.086723   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 81/120
	I0924 00:14:39.088022   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 82/120
	I0924 00:14:40.090529   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 83/120
	I0924 00:14:41.091865   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 84/120
	I0924 00:14:42.093893   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 85/120
	I0924 00:14:43.095386   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 86/120
	I0924 00:14:44.096922   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 87/120
	I0924 00:14:45.099130   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 88/120
	I0924 00:14:46.100788   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 89/120
	I0924 00:14:47.102369   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 90/120
	I0924 00:14:48.103741   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 91/120
	I0924 00:14:49.105656   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 92/120
	I0924 00:14:50.107114   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 93/120
	I0924 00:14:51.108727   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 94/120
	I0924 00:14:52.111556   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 95/120
	I0924 00:14:53.112926   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 96/120
	I0924 00:14:54.115350   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 97/120
	I0924 00:14:55.116958   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 98/120
	I0924 00:14:56.119171   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 99/120
	I0924 00:14:57.121258   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 100/120
	I0924 00:14:58.122740   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 101/120
	I0924 00:14:59.124153   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 102/120
	I0924 00:15:00.125494   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 103/120
	I0924 00:15:01.126975   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 104/120
	I0924 00:15:02.129295   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 105/120
	I0924 00:15:03.131327   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 106/120
	I0924 00:15:04.133316   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 107/120
	I0924 00:15:05.134911   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 108/120
	I0924 00:15:06.136218   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 109/120
	I0924 00:15:07.138240   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 110/120
	I0924 00:15:08.139905   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 111/120
	I0924 00:15:09.141643   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 112/120
	I0924 00:15:10.143021   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 113/120
	I0924 00:15:11.144872   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 114/120
	I0924 00:15:12.147047   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 115/120
	I0924 00:15:13.148697   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 116/120
	I0924 00:15:14.150957   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 117/120
	I0924 00:15:15.153384   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 118/120
	I0924 00:15:16.155179   33643 main.go:141] libmachine: (ha-959539-m04) Waiting for machine to stop 119/120
	I0924 00:15:17.156607   33643 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0924 00:15:17.156663   33643 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0924 00:15:17.158745   33643 out.go:201] 
	W0924 00:15:17.160195   33643 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0924 00:15:17.160216   33643 out.go:270] * 
	* 
	W0924 00:15:17.162370   33643 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 00:15:17.163465   33643 out.go:201] 

                                                
                                                
** /stderr **
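The trace above shows the stop path for the node: back up /etc/cni and /etc/kubernetes over SSH, ask the driver to stop the VM, then poll its state once per second for 120 attempts before giving up, which minikube surfaces as GUEST_STOP_TIMEOUT (exit status 82). A minimal sketch of that retry pattern, assuming a hypothetical vmDriver interface rather than libmachine's real one:

    package main

    import (
        "errors"
        "fmt"
        "log"
        "time"
    )

    // vmDriver is a hypothetical stand-in for the driver calls visible in the
    // trace above (Stop / GetState); it is not libmachine's actual interface.
    type vmDriver interface {
        Stop() error
        GetState() (string, error)
    }

    // stopWithTimeout requests a stop, then polls the machine state once per
    // second for up to maxWait attempts, mirroring the
    // "Waiting for machine to stop i/120" lines. If the VM never leaves the
    // Running state, it returns an error; the caller maps that case to a
    // non-zero exit such as the status 82 seen here.
    func stopWithTimeout(d vmDriver, maxWait int) error {
        if err := d.Stop(); err != nil {
            return fmt.Errorf("stop request failed: %w", err)
        }
        for i := 0; i < maxWait; i++ {
            state, err := d.GetState()
            if err != nil {
                return fmt.Errorf("get state: %w", err)
            }
            if state != "Running" {
                return nil // the machine is no longer running
            }
            log.Printf("Waiting for machine to stop %d/%d", i, maxWait)
            time.Sleep(time.Second)
        }
        return errors.New(`unable to stop vm, current state "Running"`)
    }

    // stuckVM simulates the failure mode in this test: the stop request is
    // accepted but the guest never actually shuts down.
    type stuckVM struct{}

    func (stuckVM) Stop() error               { return nil }
    func (stuckVM) GetState() (string, error) { return "Running", nil }

    func main() {
        if err := stopWithTimeout(stuckVM{}, 3); err != nil {
            fmt.Println("stop failed:", err)
        }
    }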
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-959539 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Done: out/minikube-linux-amd64 -p ha-959539 status -v=7 --alsologtostderr: (18.915390356s)
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-959539 status -v=7 --alsologtostderr": 
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-959539 status -v=7 --alsologtostderr": 
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-959539 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-959539 -n ha-959539
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-959539 logs -n 25: (1.665577248s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-959539 ssh -n ha-959539-m02 sudo cat                                          | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | /home/docker/cp-test_ha-959539-m03_ha-959539-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-959539 cp ha-959539-m03:/home/docker/cp-test.txt                              | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m04:/home/docker/cp-test_ha-959539-m03_ha-959539-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n                                                                 | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n ha-959539-m04 sudo cat                                          | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | /home/docker/cp-test_ha-959539-m03_ha-959539-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-959539 cp testdata/cp-test.txt                                                | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n                                                                 | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-959539 cp ha-959539-m04:/home/docker/cp-test.txt                              | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4152452105/001/cp-test_ha-959539-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n                                                                 | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-959539 cp ha-959539-m04:/home/docker/cp-test.txt                              | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539:/home/docker/cp-test_ha-959539-m04_ha-959539.txt                       |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n                                                                 | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n ha-959539 sudo cat                                              | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | /home/docker/cp-test_ha-959539-m04_ha-959539.txt                                 |           |         |         |                     |                     |
	| cp      | ha-959539 cp ha-959539-m04:/home/docker/cp-test.txt                              | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m02:/home/docker/cp-test_ha-959539-m04_ha-959539-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n                                                                 | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n ha-959539-m02 sudo cat                                          | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | /home/docker/cp-test_ha-959539-m04_ha-959539-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-959539 cp ha-959539-m04:/home/docker/cp-test.txt                              | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m03:/home/docker/cp-test_ha-959539-m04_ha-959539-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n                                                                 | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | ha-959539-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-959539 ssh -n ha-959539-m03 sudo cat                                          | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC | 24 Sep 24 00:04 UTC |
	|         | /home/docker/cp-test_ha-959539-m04_ha-959539-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-959539 node stop m02 -v=7                                                     | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:04 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-959539 node start m02 -v=7                                                    | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:06 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-959539 -v=7                                                           | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:06 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-959539 -v=7                                                                | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:06 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-959539 --wait=true -v=7                                                    | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:08 UTC | 24 Sep 24 00:12 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-959539                                                                | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:12 UTC |                     |
	| node    | ha-959539 node delete m03 -v=7                                                   | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:12 UTC | 24 Sep 24 00:13 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-959539 stop -v=7                                                              | ha-959539 | jenkins | v1.34.0 | 24 Sep 24 00:13 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 00:08:52
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 00:08:52.685349   31919 out.go:345] Setting OutFile to fd 1 ...
	I0924 00:08:52.685608   31919 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:08:52.685617   31919 out.go:358] Setting ErrFile to fd 2...
	I0924 00:08:52.685621   31919 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:08:52.685791   31919 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7623/.minikube/bin
	I0924 00:08:52.686326   31919 out.go:352] Setting JSON to false
	I0924 00:08:52.687161   31919 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3077,"bootTime":1727133456,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0924 00:08:52.687248   31919 start.go:139] virtualization: kvm guest
	I0924 00:08:52.689759   31919 out.go:177] * [ha-959539] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0924 00:08:52.691290   31919 notify.go:220] Checking for updates...
	I0924 00:08:52.691333   31919 out.go:177]   - MINIKUBE_LOCATION=19696
	I0924 00:08:52.692824   31919 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 00:08:52.694166   31919 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 00:08:52.695691   31919 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7623/.minikube
	I0924 00:08:52.697039   31919 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0924 00:08:52.698385   31919 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 00:08:52.700566   31919 config.go:182] Loaded profile config "ha-959539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 00:08:52.700716   31919 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 00:08:52.701382   31919 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:08:52.701441   31919 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:08:52.716686   31919 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43661
	I0924 00:08:52.717166   31919 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:08:52.717761   31919 main.go:141] libmachine: Using API Version  1
	I0924 00:08:52.717797   31919 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:08:52.718181   31919 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:08:52.718378   31919 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0924 00:08:52.754491   31919 out.go:177] * Using the kvm2 driver based on existing profile
	I0924 00:08:52.755854   31919 start.go:297] selected driver: kvm2
	I0924 00:08:52.755871   31919 start.go:901] validating driver "kvm2" against &{Name:ha-959539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.1 ClusterName:ha-959539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.71 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.244 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.183 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:d
ocker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 00:08:52.756034   31919 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 00:08:52.756466   31919 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 00:08:52.756559   31919 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19696-7623/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0924 00:08:52.772555   31919 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0924 00:08:52.773297   31919 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 00:08:52.773335   31919 cni.go:84] Creating CNI manager for ""
	I0924 00:08:52.773386   31919 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0924 00:08:52.773435   31919 start.go:340] cluster config:
	{Name:ha-959539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-959539 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.71 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.244 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.183 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:
false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 00:08:52.773572   31919 iso.go:125] acquiring lock: {Name:mk8a983e49e920eac32e0f79c34143c3d478115e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 00:08:52.775415   31919 out.go:177] * Starting "ha-959539" primary control-plane node in "ha-959539" cluster
	I0924 00:08:52.776455   31919 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 00:08:52.776518   31919 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0924 00:08:52.776541   31919 cache.go:56] Caching tarball of preloaded images
	I0924 00:08:52.776625   31919 preload.go:172] Found /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0924 00:08:52.776636   31919 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0924 00:08:52.776742   31919 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/config.json ...
	I0924 00:08:52.776950   31919 start.go:360] acquireMachinesLock for ha-959539: {Name:mkdd0eb053efc5a47fd01c8410d7f603dccb8d0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 00:08:52.776994   31919 start.go:364] duration metric: took 25.171µs to acquireMachinesLock for "ha-959539"
	I0924 00:08:52.777011   31919 start.go:96] Skipping create...Using existing machine configuration
	I0924 00:08:52.777018   31919 fix.go:54] fixHost starting: 
	I0924 00:08:52.777251   31919 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:08:52.777281   31919 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:08:52.792654   31919 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46071
	I0924 00:08:52.793082   31919 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:08:52.793531   31919 main.go:141] libmachine: Using API Version  1
	I0924 00:08:52.793552   31919 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:08:52.793910   31919 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:08:52.794080   31919 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0924 00:08:52.794214   31919 main.go:141] libmachine: (ha-959539) Calling .GetState
	I0924 00:08:52.796029   31919 fix.go:112] recreateIfNeeded on ha-959539: state=Running err=<nil>
	W0924 00:08:52.796065   31919 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 00:08:52.798146   31919 out.go:177] * Updating the running kvm2 "ha-959539" VM ...
	I0924 00:08:52.799424   31919 machine.go:93] provisionDockerMachine start ...
	I0924 00:08:52.799448   31919 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0924 00:08:52.799664   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0924 00:08:52.802404   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:08:52.802823   31919 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0924 00:08:52.802851   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:08:52.803000   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0924 00:08:52.803174   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0924 00:08:52.803339   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0924 00:08:52.803461   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0924 00:08:52.803607   31919 main.go:141] libmachine: Using SSH client type: native
	I0924 00:08:52.803871   31919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0924 00:08:52.803886   31919 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 00:08:52.921507   31919 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-959539
	
	I0924 00:08:52.921630   31919 main.go:141] libmachine: (ha-959539) Calling .GetMachineName
	I0924 00:08:52.921863   31919 buildroot.go:166] provisioning hostname "ha-959539"
	I0924 00:08:52.921886   31919 main.go:141] libmachine: (ha-959539) Calling .GetMachineName
	I0924 00:08:52.922111   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0924 00:08:52.925216   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:08:52.925636   31919 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0924 00:08:52.925662   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:08:52.925840   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0924 00:08:52.926027   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0924 00:08:52.926243   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0924 00:08:52.926375   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0924 00:08:52.926518   31919 main.go:141] libmachine: Using SSH client type: native
	I0924 00:08:52.926733   31919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0924 00:08:52.926751   31919 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-959539 && echo "ha-959539" | sudo tee /etc/hostname
	I0924 00:08:53.060269   31919 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-959539
	
	I0924 00:08:53.060296   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0924 00:08:53.063210   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:08:53.063659   31919 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0924 00:08:53.063682   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:08:53.063976   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0924 00:08:53.064201   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0924 00:08:53.064366   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0924 00:08:53.064561   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0924 00:08:53.064739   31919 main.go:141] libmachine: Using SSH client type: native
	I0924 00:08:53.064935   31919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0924 00:08:53.064957   31919 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-959539' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-959539/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-959539' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 00:08:53.181231   31919 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 00:08:53.181262   31919 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19696-7623/.minikube CaCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19696-7623/.minikube}
	I0924 00:08:53.181293   31919 buildroot.go:174] setting up certificates
	I0924 00:08:53.181307   31919 provision.go:84] configureAuth start
	I0924 00:08:53.181317   31919 main.go:141] libmachine: (ha-959539) Calling .GetMachineName
	I0924 00:08:53.181591   31919 main.go:141] libmachine: (ha-959539) Calling .GetIP
	I0924 00:08:53.184528   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:08:53.185054   31919 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0924 00:08:53.185086   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:08:53.185221   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0924 00:08:53.187479   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:08:53.187811   31919 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0924 00:08:53.187833   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:08:53.187961   31919 provision.go:143] copyHostCerts
	I0924 00:08:53.187986   31919 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0924 00:08:53.188017   31919 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem, removing ...
	I0924 00:08:53.188033   31919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0924 00:08:53.188104   31919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem (1082 bytes)
	I0924 00:08:53.188191   31919 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0924 00:08:53.188208   31919 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem, removing ...
	I0924 00:08:53.188215   31919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0924 00:08:53.188238   31919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem (1123 bytes)
	I0924 00:08:53.188290   31919 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0924 00:08:53.188306   31919 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem, removing ...
	I0924 00:08:53.188316   31919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0924 00:08:53.188365   31919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem (1679 bytes)
	I0924 00:08:53.188424   31919 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem org=jenkins.ha-959539 san=[127.0.0.1 192.168.39.231 ha-959539 localhost minikube]
	I0924 00:08:53.384663   31919 provision.go:177] copyRemoteCerts
	I0924 00:08:53.384727   31919 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 00:08:53.384751   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0924 00:08:53.388484   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:08:53.388870   31919 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0924 00:08:53.388890   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:08:53.389109   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0924 00:08:53.389298   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0924 00:08:53.389463   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0924 00:08:53.389575   31919 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/id_rsa Username:docker}
	I0924 00:08:53.480442   31919 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0924 00:08:53.480525   31919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0924 00:08:53.508343   31919 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0924 00:08:53.508422   31919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0924 00:08:53.533680   31919 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0924 00:08:53.533752   31919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 00:08:53.558917   31919 provision.go:87] duration metric: took 377.595737ms to configureAuth
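
The configureAuth step above generates a server certificate whose SAN list is printed in the log and then copies ca.pem, server.pem and server-key.pem to /etc/docker on the guest. As a rough illustration only (not minikube's provision code), a self-signed certificate with a comparable SAN set can be produced with Go's standard crypto/x509 package; the organization, hostnames and IP below are copied from the log, everything else is assumed for the sketch:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Key pair for the server certificate.
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        // SAN set modelled on the log line above (hostnames plus node IP);
        // self-signed here, whereas the real cert is signed by the minikube CA.
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-959539"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(1, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-959539", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.231")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        _ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
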
	I0924 00:08:53.558958   31919 buildroot.go:189] setting minikube options for container-runtime
	I0924 00:08:53.559186   31919 config.go:182] Loaded profile config "ha-959539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 00:08:53.559276   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0924 00:08:53.562111   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:08:53.562598   31919 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0924 00:08:53.562629   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:08:53.562817   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0924 00:08:53.563003   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0924 00:08:53.563271   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0924 00:08:53.563453   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0924 00:08:53.563687   31919 main.go:141] libmachine: Using SSH client type: native
	I0924 00:08:53.563902   31919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0924 00:08:53.563923   31919 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 00:10:24.471182   31919 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 00:10:24.471215   31919 machine.go:96] duration metric: took 1m31.671776831s to provisionDockerMachine
	I0924 00:10:24.471229   31919 start.go:293] postStartSetup for "ha-959539" (driver="kvm2")
	I0924 00:10:24.471243   31919 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 00:10:24.471265   31919 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0924 00:10:24.471671   31919 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 00:10:24.471710   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0924 00:10:24.475344   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:10:24.475888   31919 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0924 00:10:24.475910   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:10:24.476123   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0924 00:10:24.476340   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0924 00:10:24.476551   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0924 00:10:24.476676   31919 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/id_rsa Username:docker}
	I0924 00:10:24.564808   31919 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 00:10:24.569482   31919 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 00:10:24.569516   31919 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/addons for local assets ...
	I0924 00:10:24.569585   31919 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/files for local assets ...
	I0924 00:10:24.569708   31919 filesync.go:149] local asset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> 147932.pem in /etc/ssl/certs
	I0924 00:10:24.569724   31919 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> /etc/ssl/certs/147932.pem
	I0924 00:10:24.569840   31919 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 00:10:24.580003   31919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /etc/ssl/certs/147932.pem (1708 bytes)
	I0924 00:10:24.603835   31919 start.go:296] duration metric: took 132.592845ms for postStartSetup
	I0924 00:10:24.603881   31919 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0924 00:10:24.604229   31919 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0924 00:10:24.604266   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0924 00:10:24.607159   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:10:24.607533   31919 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0924 00:10:24.607561   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:10:24.607737   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0924 00:10:24.607926   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0924 00:10:24.608048   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0924 00:10:24.608158   31919 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/id_rsa Username:docker}
	W0924 00:10:24.695818   31919 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0924 00:10:24.695842   31919 fix.go:56] duration metric: took 1m31.918823819s for fixHost
	I0924 00:10:24.695868   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0924 00:10:24.698746   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:10:24.699102   31919 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0924 00:10:24.699132   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:10:24.699378   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0924 00:10:24.699601   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0924 00:10:24.699772   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0924 00:10:24.699888   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0924 00:10:24.700043   31919 main.go:141] libmachine: Using SSH client type: native
	I0924 00:10:24.700206   31919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0924 00:10:24.700217   31919 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 00:10:24.812998   31919 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727136624.781175531
	
	I0924 00:10:24.813023   31919 fix.go:216] guest clock: 1727136624.781175531
	I0924 00:10:24.813030   31919 fix.go:229] Guest: 2024-09-24 00:10:24.781175531 +0000 UTC Remote: 2024-09-24 00:10:24.69584949 +0000 UTC m=+92.046503324 (delta=85.326041ms)
	I0924 00:10:24.813048   31919 fix.go:200] guest clock delta is within tolerance: 85.326041ms
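
The fix.go lines above compare the guest clock against the local clock and accept the 85.326041ms drift as being within tolerance. A minimal Go sketch of that comparison, using the two timestamps printed in the log; the one-second tolerance is an assumption for the example, not necessarily minikube's actual threshold:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps copied from the fix.go lines above.
        guest := time.Date(2024, time.September, 24, 0, 10, 24, 781175531, time.UTC)
        local := time.Date(2024, time.September, 24, 0, 10, 24, 695849490, time.UTC)

        delta := guest.Sub(local)
        if delta < 0 {
            delta = -delta
        }
        // Assumed threshold for the example; the real tolerance lives in minikube's fix logic.
        const tolerance = 1 * time.Second
        fmt.Printf("delta=%v, within tolerance: %v\n", delta, delta <= tolerance)
    }
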
	I0924 00:10:24.813052   31919 start.go:83] releasing machines lock for "ha-959539", held for 1m32.036051957s
	I0924 00:10:24.813069   31919 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0924 00:10:24.813329   31919 main.go:141] libmachine: (ha-959539) Calling .GetIP
	I0924 00:10:24.816033   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:10:24.816431   31919 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0924 00:10:24.816460   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:10:24.816643   31919 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0924 00:10:24.817127   31919 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0924 00:10:24.817304   31919 main.go:141] libmachine: (ha-959539) Calling .DriverName
	I0924 00:10:24.817424   31919 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 00:10:24.817462   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0924 00:10:24.817485   31919 ssh_runner.go:195] Run: cat /version.json
	I0924 00:10:24.817506   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHHostname
	I0924 00:10:24.820119   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:10:24.820492   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:10:24.820594   31919 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0924 00:10:24.820618   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:10:24.820889   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0924 00:10:24.821017   31919 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0924 00:10:24.821040   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0924 00:10:24.821041   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:10:24.821151   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0924 00:10:24.821205   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHPort
	I0924 00:10:24.821285   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHKeyPath
	I0924 00:10:24.821287   31919 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/id_rsa Username:docker}
	I0924 00:10:24.821411   31919 main.go:141] libmachine: (ha-959539) Calling .GetSSHUsername
	I0924 00:10:24.821538   31919 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/ha-959539/id_rsa Username:docker}
	I0924 00:10:24.902013   31919 ssh_runner.go:195] Run: systemctl --version
	I0924 00:10:24.943589   31919 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 00:10:25.105237   31919 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 00:10:25.112718   31919 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 00:10:25.112793   31919 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 00:10:25.122522   31919 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0924 00:10:25.122553   31919 start.go:495] detecting cgroup driver to use...
	I0924 00:10:25.122617   31919 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 00:10:25.139929   31919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 00:10:25.154794   31919 docker.go:217] disabling cri-docker service (if available) ...
	I0924 00:10:25.154865   31919 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 00:10:25.169153   31919 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 00:10:25.183526   31919 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 00:10:25.334458   31919 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 00:10:25.481882   31919 docker.go:233] disabling docker service ...
	I0924 00:10:25.481951   31919 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 00:10:25.498553   31919 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 00:10:25.513036   31919 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 00:10:25.661545   31919 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 00:10:25.811160   31919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 00:10:25.825234   31919 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 00:10:25.844750   31919 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 00:10:25.844812   31919 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:10:25.855450   31919 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 00:10:25.855507   31919 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:10:25.866282   31919 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:10:25.877559   31919 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:10:25.888508   31919 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 00:10:25.899815   31919 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:10:25.910115   31919 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:10:25.921194   31919 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:10:25.931764   31919 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 00:10:25.941307   31919 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 00:10:25.951115   31919 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 00:10:26.095684   31919 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 00:10:32.953417   31919 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.8576933s)
	I0924 00:10:32.953451   31919 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 00:10:32.953499   31919 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 00:10:32.958303   31919 start.go:563] Will wait 60s for crictl version
	I0924 00:10:32.958372   31919 ssh_runner.go:195] Run: which crictl
	I0924 00:10:32.962490   31919 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 00:10:33.001527   31919 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
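
After restarting CRI-O, the log notes it will wait up to 60s for the socket path /var/run/crio/crio.sock before probing crictl. A small illustrative Go wait loop for that kind of check; the 500ms poll interval is assumed for the sketch and need not match minikube's retry cadence:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func main() {
        const sock = "/var/run/crio/crio.sock"
        deadline := time.Now().Add(60 * time.Second)
        for {
            // Socket file exists: the runtime is at least far enough along to probe.
            if _, err := os.Stat(sock); err == nil {
                fmt.Println("socket is ready:", sock)
                return
            }
            if time.Now().After(deadline) {
                fmt.Fprintln(os.Stderr, "timed out waiting for", sock)
                os.Exit(1)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }
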
	I0924 00:10:33.001626   31919 ssh_runner.go:195] Run: crio --version
	I0924 00:10:33.033714   31919 ssh_runner.go:195] Run: crio --version
	I0924 00:10:33.064319   31919 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 00:10:33.065748   31919 main.go:141] libmachine: (ha-959539) Calling .GetIP
	I0924 00:10:33.068552   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:10:33.069091   31919 main.go:141] libmachine: (ha-959539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:17:69", ip: ""} in network mk-ha-959539: {Iface:virbr1 ExpiryTime:2024-09-24 00:59:41 +0000 UTC Type:0 Mac:52:54:00:99:17:69 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-959539 Clientid:01:52:54:00:99:17:69}
	I0924 00:10:33.069151   31919 main.go:141] libmachine: (ha-959539) DBG | domain ha-959539 has defined IP address 192.168.39.231 and MAC address 52:54:00:99:17:69 in network mk-ha-959539
	I0924 00:10:33.069468   31919 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0924 00:10:33.074592   31919 kubeadm.go:883] updating cluster {Name:ha-959539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-959539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.71 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.244 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.183 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 00:10:33.074730   31919 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 00:10:33.074768   31919 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 00:10:33.117653   31919 crio.go:514] all images are preloaded for cri-o runtime.
	I0924 00:10:33.117685   31919 crio.go:433] Images already preloaded, skipping extraction
	I0924 00:10:33.117750   31919 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 00:10:33.155847   31919 crio.go:514] all images are preloaded for cri-o runtime.
	I0924 00:10:33.155870   31919 cache_images.go:84] Images are preloaded, skipping loading
	I0924 00:10:33.155878   31919 kubeadm.go:934] updating node { 192.168.39.231 8443 v1.31.1 crio true true} ...
	I0924 00:10:33.155961   31919 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-959539 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.231
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-959539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 00:10:33.156018   31919 ssh_runner.go:195] Run: crio config
	I0924 00:10:33.200603   31919 cni.go:84] Creating CNI manager for ""
	I0924 00:10:33.200629   31919 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0924 00:10:33.200640   31919 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 00:10:33.200661   31919 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.231 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-959539 NodeName:ha-959539 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.231"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.231 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 00:10:33.200793   31919 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.231
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-959539"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.231
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.231"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 00:10:33.200812   31919 kube-vip.go:115] generating kube-vip config ...
	I0924 00:10:33.200851   31919 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0924 00:10:33.211732   31919 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0924 00:10:33.211837   31919 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0924 00:10:33.211888   31919 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 00:10:33.220808   31919 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 00:10:33.220868   31919 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0924 00:10:33.229881   31919 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0924 00:10:33.246234   31919 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 00:10:33.262833   31919 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0924 00:10:33.278877   31919 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0924 00:10:33.294861   31919 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0924 00:10:33.299412   31919 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 00:10:33.442681   31919 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 00:10:33.457412   31919 certs.go:68] Setting up /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539 for IP: 192.168.39.231
	I0924 00:10:33.457439   31919 certs.go:194] generating shared ca certs ...
	I0924 00:10:33.457458   31919 certs.go:226] acquiring lock for ca certs: {Name:mk91de42c259e96c17f08dd58c70c00821bd0735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:10:33.457624   31919 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key
	I0924 00:10:33.457672   31919 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key
	I0924 00:10:33.457681   31919 certs.go:256] generating profile certs ...
	I0924 00:10:33.457751   31919 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/client.key
	I0924 00:10:33.457776   31919 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key.9480d6e7
	I0924 00:10:33.457788   31919 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt.9480d6e7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.231 192.168.39.71 192.168.39.244 192.168.39.254]
	I0924 00:10:33.593605   31919 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt.9480d6e7 ...
	I0924 00:10:33.593633   31919 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt.9480d6e7: {Name:mk28c0d2f20c537ae5dfc7e2724bfca944ff3319 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:10:33.593793   31919 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key.9480d6e7 ...
	I0924 00:10:33.593803   31919 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key.9480d6e7: {Name:mk496e8508849330969fb494a01931fa5b69e592 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:10:33.593870   31919 certs.go:381] copying /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt.9480d6e7 -> /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt
	I0924 00:10:33.594029   31919 certs.go:385] copying /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key.9480d6e7 -> /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key
	I0924 00:10:33.594150   31919 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.key
	I0924 00:10:33.594165   31919 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0924 00:10:33.594181   31919 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0924 00:10:33.594191   31919 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0924 00:10:33.594202   31919 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0924 00:10:33.594211   31919 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0924 00:10:33.594220   31919 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0924 00:10:33.594231   31919 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0924 00:10:33.594240   31919 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0924 00:10:33.594290   31919 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem (1338 bytes)
	W0924 00:10:33.594316   31919 certs.go:480] ignoring /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793_empty.pem, impossibly tiny 0 bytes
	I0924 00:10:33.594325   31919 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem (1675 bytes)
	I0924 00:10:33.594346   31919 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem (1082 bytes)
	I0924 00:10:33.594367   31919 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem (1123 bytes)
	I0924 00:10:33.594396   31919 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem (1679 bytes)
	I0924 00:10:33.594434   31919 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem (1708 bytes)
	I0924 00:10:33.594469   31919 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0924 00:10:33.594487   31919 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem -> /usr/share/ca-certificates/14793.pem
	I0924 00:10:33.594499   31919 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> /usr/share/ca-certificates/147932.pem
	I0924 00:10:33.595063   31919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 00:10:33.620767   31919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 00:10:33.644715   31919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 00:10:33.668402   31919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0924 00:10:33.693706   31919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0924 00:10:33.717918   31919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 00:10:33.741152   31919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 00:10:33.764140   31919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/ha-959539/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0924 00:10:33.787838   31919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 00:10:33.810766   31919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem --> /usr/share/ca-certificates/14793.pem (1338 bytes)
	I0924 00:10:33.834716   31919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /usr/share/ca-certificates/147932.pem (1708 bytes)
	I0924 00:10:33.858277   31919 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 00:10:33.874289   31919 ssh_runner.go:195] Run: openssl version
	I0924 00:10:33.880431   31919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147932.pem && ln -fs /usr/share/ca-certificates/147932.pem /etc/ssl/certs/147932.pem"
	I0924 00:10:33.891818   31919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147932.pem
	I0924 00:10:33.896371   31919 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 23:55 /usr/share/ca-certificates/147932.pem
	I0924 00:10:33.896430   31919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147932.pem
	I0924 00:10:33.902136   31919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147932.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 00:10:33.911125   31919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 00:10:33.921870   31919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 00:10:33.926543   31919 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0924 00:10:33.926595   31919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 00:10:33.932079   31919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 00:10:33.942135   31919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14793.pem && ln -fs /usr/share/ca-certificates/14793.pem /etc/ssl/certs/14793.pem"
	I0924 00:10:33.953072   31919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14793.pem
	I0924 00:10:33.957843   31919 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 23:55 /usr/share/ca-certificates/14793.pem
	I0924 00:10:33.957904   31919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14793.pem
	I0924 00:10:33.964367   31919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14793.pem /etc/ssl/certs/51391683.0"
	I0924 00:10:33.974503   31919 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 00:10:33.979487   31919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 00:10:33.985486   31919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 00:10:33.991104   31919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 00:10:33.997231   31919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 00:10:34.003123   31919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 00:10:34.009050   31919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
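
The openssl x509 -checkend 86400 runs above ask whether each control-plane certificate will still be valid 24 hours from now. An equivalent check, sketched in Go with the standard crypto/x509 package; the certificate path is copied from the first check in the log and the rest is illustrative:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Path copied from the first -checkend run above.
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Mirrors `openssl x509 -checkend 86400`: fail if the cert expires within 24h.
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("certificate will expire within the next 86400 seconds")
            os.Exit(1)
        }
        fmt.Println("certificate is valid for at least another 24 hours")
    }

Compared to shelling out to openssl, parsing the certificate in-process also exposes NotAfter directly if a more precise expiry report is wanted.
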
	I0924 00:10:34.014749   31919 kubeadm.go:392] StartCluster: {Name:ha-959539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-959539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.71 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.244 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.183 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 00:10:34.014875   31919 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 00:10:34.014919   31919 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 00:10:34.050735   31919 cri.go:89] found id: "b4946d654493630ff5fc1b26e79d378819aee7d8cc2c2b71c41e181e2c332b25"
	I0924 00:10:34.050755   31919 cri.go:89] found id: "086fc2e6e3fc0463dc06bea338d3ed77a46bbad21f29e0aea689de61a44231da"
	I0924 00:10:34.050759   31919 cri.go:89] found id: "a556fce95711333452f2b7846b2dd73b91597f96f301f1d6c58eea0c2726a46d"
	I0924 00:10:34.050762   31919 cri.go:89] found id: "1d2e00cf042e4362bbcfb0003da9c8309672413f33d340c23d7b1e058c24daaf"
	I0924 00:10:34.050764   31919 cri.go:89] found id: "05d43a4d133008f80d44c870878a45ceeb7e0e1adc3b08d1d47ce8e2edb36137"
	I0924 00:10:34.050767   31919 cri.go:89] found id: "e7a1a19a83d492cf0fda7b3bbf01fb7a12a73aac4c564ad4c2805be1c00817f0"
	I0924 00:10:34.050770   31919 cri.go:89] found id: "1596300e66cf2ee0f15d5a362238ef4e99cec8e83ea81be4670347659bbd88e2"
	I0924 00:10:34.050773   31919 cri.go:89] found id: "cdf912809c47a4ae8dddad14a4f537540e25c97121c16fa62418fc1290a8b94b"
	I0924 00:10:34.050775   31919 cri.go:89] found id: "b61587cd3ccea52e3762f607ce17d21719c646d22ac10052629a209fe6ddbf3c"
	I0924 00:10:34.050779   31919 cri.go:89] found id: "d5459f3bc533d3928a2614c8aadaac043e36370d032cae8eb72d11664bbb74f2"
	I0924 00:10:34.050782   31919 cri.go:89] found id: "af224d12661c48914733f3b63e29d17a8fceaf62af777bb609750fb7c8dbd9fd"
	I0924 00:10:34.050784   31919 cri.go:89] found id: "a42356ed739fd4c4bc65cb2d15edfb13fc395f88d73e9c25e9c7f9799ae6b974"
	I0924 00:10:34.050787   31919 cri.go:89] found id: "8c911375acec93e238f1022936d6afb98f697168fca75291f15649e13def2288"
	I0924 00:10:34.050789   31919 cri.go:89] found id: ""
	I0924 00:10:34.050829   31919 ssh_runner.go:195] Run: sudo runc list -f json
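
The container listing above shells out to crictl with the flags shown and treats each line of output as a container ID. A brief illustrative Go version of that call using os/exec; the command and flags are copied verbatim from the Run: line, and the error handling is simplified:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Flags copied verbatim from the Run: line above.
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            panic(err)
        }
        for _, id := range strings.Fields(string(out)) {
            fmt.Println("found id:", id)
        }
    }
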
	
	
	==> CRI-O <==
	Sep 24 00:15:36 ha-959539 crio[3592]: time="2024-09-24 00:15:36.699427079Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136936699401507,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=87070def-046a-4b3d-8560-345ba9d98d7f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:15:36 ha-959539 crio[3592]: time="2024-09-24 00:15:36.700051009Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1a44759f-6ca5-4dea-b374-677af38be181 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:15:36 ha-959539 crio[3592]: time="2024-09-24 00:15:36.700118978Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1a44759f-6ca5-4dea-b374-677af38be181 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:15:36 ha-959539 crio[3592]: time="2024-09-24 00:15:36.700600760Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6cc25f35ee1437daa875e1eee6b0bbe29eb9283f364454c77cdc95b603e0da70,PodSandboxId:088721be9ee42fa2a8167e644eb8620809d1363baceb8a78baced6e11009a7a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727136706535621117,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b7e0f07-8db9-4473-b3d2-c245c19d655b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7027a9fd4e6666930acaf7fbd168d3d7385b4b482312d2f87034bc24870a0357,PodSandboxId:4617c864ab9fe7164ee242155206c043c12dceb9e54b49e33f272ca7bfe824e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727136685540687170,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5ad980392063f2813e3d6dca84f9d9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d758bd78eee85d39c9fb81ffa0f75b1186b776425228dd8e7762ea1c90fa9048,PodSandboxId:295fd3d0be23e739bb1383b702ffabbc8fed80d71c3cdb86237e5f2093570f85,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727136678531512871,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8862e93f261d2a6529b347e7a1404705,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f437caaea72c74e14426a3a5d2913e4c3f69650bf53bb46e41be66115f1f88a,PodSandboxId:3da9d510a2e3f65c60079eacff566cc95017e408c5c5b68936f2e102ca6c7558,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727136673877516694,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7q7xr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ee31bb5-0c3d-4d9e-9c9e-32d3411bc68a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18f201ca9f86c3ee1ef172d0a96a5ca5f4056e61e94cbc5bf7dea44a37d228f9,PodSandboxId:3163f953f6982c63ca5ab90da8f9371af8261c3f4ca376574722bf3e0706135d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727136653670028384,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea107434f2f1e621a9033fe6f5f95874,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e619702803fa3c9e3a701e11565c5924685a4d9e4fda0f81632dd0d16c99888,PodSandboxId:088721be9ee42fa2a8167e644eb8620809d1363baceb8a78baced6e11009a7a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727136640741544088,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b7e0f07-8db9-4473-b3d2-c245c19d655b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ee5e707e748130caca35ecd0ba876f633981823f6b1eafb8ba389d88783817c,PodSandboxId:cda71a093abf7d2216d63ad14b287a019fbabce932e68f02c474181d2a2ed584,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727136640675709785,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ss8lg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37bd392b-d364-4a64-8fa0-852bb245aedc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24b63f0ac92bd94c3f90ea5bc761bc7b4d3724f6ddbea71f1ad09960ca17e379,PodSandboxId:7cd33d17d04b5fc9ab59007b4dd6459ff72e219c75f9906c91e71223d96b1795,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727136640589435675,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31ee5d018ce106dd8052557f9d140def,},Annotations:map[string]string{io.kubernetes.container.hash: 12f
aacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2ffd4f65dffe539e8bba5e57b28008ef75bbfa15d4c1e995ffc6b99603efe60,PodSandboxId:bcabc506d1d8f331f686da0a835c1c5e6f1dbfcb2368d41484e1f63a47044f74,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727136640368124454,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qzklc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19af917f-9661-4577-92ed-8fc44b573c64,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:521d145c726f059e9aaa8a8f52709d240ffeeff570c12816ccd0f9fca9fac337,PodSandboxId:4617c864ab9fe7164ee242155206c043c12dceb9e54b49e33f272ca7bfe824e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727136640471080966,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5ad980392063f2813e3d6dca84f9d9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.containe
r.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a379b9ee451f6b18d5d78841a34fd308c5d6afe202d8dbc7c5e229edb0dd692a,PodSandboxId:295fd3d0be23e739bb1383b702ffabbc8fed80d71c3cdb86237e5f2093570f85,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727136640490037983,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8862e93f261d2a6529b347e7a1404705,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:851bbb971d887973d5c8ec979d0dcd4d045dc540c80f4a14667f464726050b0e,PodSandboxId:fd7203009ec583739ec7b788cc0f9f8b444326fcbe5b568d40753eaaabc37d4e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727136640339454119,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qlqss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 365f0414-b74d-42a8-be37-b0c8e03291ac,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f62ad588991622e61011c3eb3160fd878786530c3bfe7b3b5ef9ed37255c376,PodSandboxId:8b7cd131c6f6e07093475020fddc9d65d34e92d529d1f89abf0021ffd29ae883,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727136640359208308,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b2368dd5a295d09af7714df1de67bdb,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59085f22fd0a54fea73c27cb9b9b7199313dabb8a9dbdbbe69a7810536b7ffaf,PodSandboxId:7f9f3a7de5177a2db72c3e19777a0711cf5f9ac36b424081aae8abdeedf10d9b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727136635861760606,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nkbzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79bbcdf6-3ae9-4c2f-9d73-a990a069864f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae8646f943f6d158d9cb6123ee395d7f02fe8f4194ea968bf904f9d60ac4c8d1,PodSandboxId:4b5dbf2a2189385e09c02ad65761e1007bbf4b930164894bc8f1b76217964067,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727136176666162462,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7q7xr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ee31bb5-0c3d-4d9e-9c9e-32d3411bc68a,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05d43a4d133008f80d44c870878a45ceeb7e0e1adc3b08d1d47ce8e2edb36137,PodSandboxId:a91a16106518aeb7290ee145c6ebba24fbaf0ab1b928eb6005c2982202d15f69,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727136026589968172,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nkbzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79bbcdf6-3ae9-4c2f-9d73-a990a069864f,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7a1a19a83d492cf0fda7b3bbf01fb7a12a73aac4c564ad4c2805be1c00817f0,PodSandboxId:1a4ee0160fc1d9dd6258f8fde766345d31e45e3e0d6790d4d9d5bd708cbcb206,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727136026542639422,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ss8lg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37bd392b-d364-4a64-8fa0-852bb245aedc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1596300e66cf2ee0f15d5a362238ef4e99cec8e83ea81be4670347659bbd88e2,PodSandboxId:1a380d04710836380fbd07e38a88bd6c32797798fac60cedb945001fcef619bd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727136014418475831,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qlqss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 365f0414-b74d-42a8-be37-b0c8e03291ac,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdf912809c47a4ae8dddad14a4f537540e25c97121c16fa62418fc1290a8b94b,PodSandboxId:72ade1a0510455fbb68e236046efac5db7e130775d8731e968c6403583d8f266,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727136014134621208,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qzklc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19af917f-9661-4577-92ed-8fc44b573c64,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5459f3bc533d3928a2614c8aadaac043e36370d032cae8eb72d11664bbb74f2,PodSandboxId:40d143641822b8cfe35213ab0da141ef26cf5d327320371cdaf07dee367e1c67,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727136003255471651,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b2368dd5a295d09af7714df1de67bdb,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af224d12661c48914733f3b63e29d17a8fceaf62af777bb609750fb7c8dbd9fd,PodSandboxId:7328f59cdb9935ae3cc6db004e93f8c91143470c0fbb7d2f75380c3331d66ec6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1727136003245833606,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31ee5d018ce106dd8052557f9d140def,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1a44759f-6ca5-4dea-b374-677af38be181 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:15:36 ha-959539 crio[3592]: time="2024-09-24 00:15:36.800220477Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1d5500e9-07e0-42e9-b253-a1a0a383c863 name=/runtime.v1.RuntimeService/Version
	Sep 24 00:15:36 ha-959539 crio[3592]: time="2024-09-24 00:15:36.800308069Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1d5500e9-07e0-42e9-b253-a1a0a383c863 name=/runtime.v1.RuntimeService/Version
	Sep 24 00:15:36 ha-959539 crio[3592]: time="2024-09-24 00:15:36.801575162Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=aa2c3384-e687-4096-986e-9ffb57a79ba6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:15:36 ha-959539 crio[3592]: time="2024-09-24 00:15:36.802002560Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136936801975162,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aa2c3384-e687-4096-986e-9ffb57a79ba6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:15:36 ha-959539 crio[3592]: time="2024-09-24 00:15:36.802629851Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eae22735-2311-43aa-a5a7-f9d56e395645 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:15:36 ha-959539 crio[3592]: time="2024-09-24 00:15:36.802705355Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eae22735-2311-43aa-a5a7-f9d56e395645 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:15:36 ha-959539 crio[3592]: time="2024-09-24 00:15:36.803149771Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6cc25f35ee1437daa875e1eee6b0bbe29eb9283f364454c77cdc95b603e0da70,PodSandboxId:088721be9ee42fa2a8167e644eb8620809d1363baceb8a78baced6e11009a7a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727136706535621117,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b7e0f07-8db9-4473-b3d2-c245c19d655b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7027a9fd4e6666930acaf7fbd168d3d7385b4b482312d2f87034bc24870a0357,PodSandboxId:4617c864ab9fe7164ee242155206c043c12dceb9e54b49e33f272ca7bfe824e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727136685540687170,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5ad980392063f2813e3d6dca84f9d9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d758bd78eee85d39c9fb81ffa0f75b1186b776425228dd8e7762ea1c90fa9048,PodSandboxId:295fd3d0be23e739bb1383b702ffabbc8fed80d71c3cdb86237e5f2093570f85,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727136678531512871,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8862e93f261d2a6529b347e7a1404705,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f437caaea72c74e14426a3a5d2913e4c3f69650bf53bb46e41be66115f1f88a,PodSandboxId:3da9d510a2e3f65c60079eacff566cc95017e408c5c5b68936f2e102ca6c7558,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727136673877516694,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7q7xr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ee31bb5-0c3d-4d9e-9c9e-32d3411bc68a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18f201ca9f86c3ee1ef172d0a96a5ca5f4056e61e94cbc5bf7dea44a37d228f9,PodSandboxId:3163f953f6982c63ca5ab90da8f9371af8261c3f4ca376574722bf3e0706135d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727136653670028384,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea107434f2f1e621a9033fe6f5f95874,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e619702803fa3c9e3a701e11565c5924685a4d9e4fda0f81632dd0d16c99888,PodSandboxId:088721be9ee42fa2a8167e644eb8620809d1363baceb8a78baced6e11009a7a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727136640741544088,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b7e0f07-8db9-4473-b3d2-c245c19d655b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ee5e707e748130caca35ecd0ba876f633981823f6b1eafb8ba389d88783817c,PodSandboxId:cda71a093abf7d2216d63ad14b287a019fbabce932e68f02c474181d2a2ed584,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727136640675709785,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ss8lg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37bd392b-d364-4a64-8fa0-852bb245aedc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24b63f0ac92bd94c3f90ea5bc761bc7b4d3724f6ddbea71f1ad09960ca17e379,PodSandboxId:7cd33d17d04b5fc9ab59007b4dd6459ff72e219c75f9906c91e71223d96b1795,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727136640589435675,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31ee5d018ce106dd8052557f9d140def,},Annotations:map[string]string{io.kubernetes.container.hash: 12f
aacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2ffd4f65dffe539e8bba5e57b28008ef75bbfa15d4c1e995ffc6b99603efe60,PodSandboxId:bcabc506d1d8f331f686da0a835c1c5e6f1dbfcb2368d41484e1f63a47044f74,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727136640368124454,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qzklc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19af917f-9661-4577-92ed-8fc44b573c64,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:521d145c726f059e9aaa8a8f52709d240ffeeff570c12816ccd0f9fca9fac337,PodSandboxId:4617c864ab9fe7164ee242155206c043c12dceb9e54b49e33f272ca7bfe824e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727136640471080966,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5ad980392063f2813e3d6dca84f9d9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.containe
r.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a379b9ee451f6b18d5d78841a34fd308c5d6afe202d8dbc7c5e229edb0dd692a,PodSandboxId:295fd3d0be23e739bb1383b702ffabbc8fed80d71c3cdb86237e5f2093570f85,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727136640490037983,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8862e93f261d2a6529b347e7a1404705,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:851bbb971d887973d5c8ec979d0dcd4d045dc540c80f4a14667f464726050b0e,PodSandboxId:fd7203009ec583739ec7b788cc0f9f8b444326fcbe5b568d40753eaaabc37d4e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727136640339454119,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qlqss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 365f0414-b74d-42a8-be37-b0c8e03291ac,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f62ad588991622e61011c3eb3160fd878786530c3bfe7b3b5ef9ed37255c376,PodSandboxId:8b7cd131c6f6e07093475020fddc9d65d34e92d529d1f89abf0021ffd29ae883,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727136640359208308,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b2368dd5a295d09af7714df1de67bdb,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59085f22fd0a54fea73c27cb9b9b7199313dabb8a9dbdbbe69a7810536b7ffaf,PodSandboxId:7f9f3a7de5177a2db72c3e19777a0711cf5f9ac36b424081aae8abdeedf10d9b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727136635861760606,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nkbzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79bbcdf6-3ae9-4c2f-9d73-a990a069864f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae8646f943f6d158d9cb6123ee395d7f02fe8f4194ea968bf904f9d60ac4c8d1,PodSandboxId:4b5dbf2a2189385e09c02ad65761e1007bbf4b930164894bc8f1b76217964067,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727136176666162462,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7q7xr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ee31bb5-0c3d-4d9e-9c9e-32d3411bc68a,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05d43a4d133008f80d44c870878a45ceeb7e0e1adc3b08d1d47ce8e2edb36137,PodSandboxId:a91a16106518aeb7290ee145c6ebba24fbaf0ab1b928eb6005c2982202d15f69,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727136026589968172,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nkbzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79bbcdf6-3ae9-4c2f-9d73-a990a069864f,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7a1a19a83d492cf0fda7b3bbf01fb7a12a73aac4c564ad4c2805be1c00817f0,PodSandboxId:1a4ee0160fc1d9dd6258f8fde766345d31e45e3e0d6790d4d9d5bd708cbcb206,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727136026542639422,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ss8lg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37bd392b-d364-4a64-8fa0-852bb245aedc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1596300e66cf2ee0f15d5a362238ef4e99cec8e83ea81be4670347659bbd88e2,PodSandboxId:1a380d04710836380fbd07e38a88bd6c32797798fac60cedb945001fcef619bd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727136014418475831,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qlqss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 365f0414-b74d-42a8-be37-b0c8e03291ac,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdf912809c47a4ae8dddad14a4f537540e25c97121c16fa62418fc1290a8b94b,PodSandboxId:72ade1a0510455fbb68e236046efac5db7e130775d8731e968c6403583d8f266,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727136014134621208,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qzklc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19af917f-9661-4577-92ed-8fc44b573c64,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5459f3bc533d3928a2614c8aadaac043e36370d032cae8eb72d11664bbb74f2,PodSandboxId:40d143641822b8cfe35213ab0da141ef26cf5d327320371cdaf07dee367e1c67,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727136003255471651,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b2368dd5a295d09af7714df1de67bdb,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af224d12661c48914733f3b63e29d17a8fceaf62af777bb609750fb7c8dbd9fd,PodSandboxId:7328f59cdb9935ae3cc6db004e93f8c91143470c0fbb7d2f75380c3331d66ec6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1727136003245833606,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31ee5d018ce106dd8052557f9d140def,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eae22735-2311-43aa-a5a7-f9d56e395645 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:15:36 ha-959539 crio[3592]: time="2024-09-24 00:15:36.843236989Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1560ecde-f57e-41fa-a5fd-e9646bf9e770 name=/runtime.v1.RuntimeService/Version
	Sep 24 00:15:36 ha-959539 crio[3592]: time="2024-09-24 00:15:36.843312242Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1560ecde-f57e-41fa-a5fd-e9646bf9e770 name=/runtime.v1.RuntimeService/Version
	Sep 24 00:15:36 ha-959539 crio[3592]: time="2024-09-24 00:15:36.845515410Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=13325d1e-88ea-4585-b6dd-384d87ccb482 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:15:36 ha-959539 crio[3592]: time="2024-09-24 00:15:36.845923955Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136936845901266,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=13325d1e-88ea-4585-b6dd-384d87ccb482 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:15:36 ha-959539 crio[3592]: time="2024-09-24 00:15:36.846558194Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=db9b6563-1dca-4a68-af97-e4f8989a0c6c name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:15:36 ha-959539 crio[3592]: time="2024-09-24 00:15:36.846638492Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=db9b6563-1dca-4a68-af97-e4f8989a0c6c name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:15:36 ha-959539 crio[3592]: time="2024-09-24 00:15:36.847587853Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6cc25f35ee1437daa875e1eee6b0bbe29eb9283f364454c77cdc95b603e0da70,PodSandboxId:088721be9ee42fa2a8167e644eb8620809d1363baceb8a78baced6e11009a7a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727136706535621117,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b7e0f07-8db9-4473-b3d2-c245c19d655b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7027a9fd4e6666930acaf7fbd168d3d7385b4b482312d2f87034bc24870a0357,PodSandboxId:4617c864ab9fe7164ee242155206c043c12dceb9e54b49e33f272ca7bfe824e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727136685540687170,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5ad980392063f2813e3d6dca84f9d9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d758bd78eee85d39c9fb81ffa0f75b1186b776425228dd8e7762ea1c90fa9048,PodSandboxId:295fd3d0be23e739bb1383b702ffabbc8fed80d71c3cdb86237e5f2093570f85,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727136678531512871,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8862e93f261d2a6529b347e7a1404705,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f437caaea72c74e14426a3a5d2913e4c3f69650bf53bb46e41be66115f1f88a,PodSandboxId:3da9d510a2e3f65c60079eacff566cc95017e408c5c5b68936f2e102ca6c7558,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727136673877516694,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7q7xr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ee31bb5-0c3d-4d9e-9c9e-32d3411bc68a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18f201ca9f86c3ee1ef172d0a96a5ca5f4056e61e94cbc5bf7dea44a37d228f9,PodSandboxId:3163f953f6982c63ca5ab90da8f9371af8261c3f4ca376574722bf3e0706135d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727136653670028384,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea107434f2f1e621a9033fe6f5f95874,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e619702803fa3c9e3a701e11565c5924685a4d9e4fda0f81632dd0d16c99888,PodSandboxId:088721be9ee42fa2a8167e644eb8620809d1363baceb8a78baced6e11009a7a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727136640741544088,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b7e0f07-8db9-4473-b3d2-c245c19d655b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ee5e707e748130caca35ecd0ba876f633981823f6b1eafb8ba389d88783817c,PodSandboxId:cda71a093abf7d2216d63ad14b287a019fbabce932e68f02c474181d2a2ed584,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727136640675709785,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ss8lg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37bd392b-d364-4a64-8fa0-852bb245aedc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24b63f0ac92bd94c3f90ea5bc761bc7b4d3724f6ddbea71f1ad09960ca17e379,PodSandboxId:7cd33d17d04b5fc9ab59007b4dd6459ff72e219c75f9906c91e71223d96b1795,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727136640589435675,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31ee5d018ce106dd8052557f9d140def,},Annotations:map[string]string{io.kubernetes.container.hash: 12f
aacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2ffd4f65dffe539e8bba5e57b28008ef75bbfa15d4c1e995ffc6b99603efe60,PodSandboxId:bcabc506d1d8f331f686da0a835c1c5e6f1dbfcb2368d41484e1f63a47044f74,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727136640368124454,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qzklc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19af917f-9661-4577-92ed-8fc44b573c64,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:521d145c726f059e9aaa8a8f52709d240ffeeff570c12816ccd0f9fca9fac337,PodSandboxId:4617c864ab9fe7164ee242155206c043c12dceb9e54b49e33f272ca7bfe824e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727136640471080966,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5ad980392063f2813e3d6dca84f9d9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.containe
r.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a379b9ee451f6b18d5d78841a34fd308c5d6afe202d8dbc7c5e229edb0dd692a,PodSandboxId:295fd3d0be23e739bb1383b702ffabbc8fed80d71c3cdb86237e5f2093570f85,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727136640490037983,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8862e93f261d2a6529b347e7a1404705,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:851bbb971d887973d5c8ec979d0dcd4d045dc540c80f4a14667f464726050b0e,PodSandboxId:fd7203009ec583739ec7b788cc0f9f8b444326fcbe5b568d40753eaaabc37d4e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727136640339454119,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qlqss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 365f0414-b74d-42a8-be37-b0c8e03291ac,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f62ad588991622e61011c3eb3160fd878786530c3bfe7b3b5ef9ed37255c376,PodSandboxId:8b7cd131c6f6e07093475020fddc9d65d34e92d529d1f89abf0021ffd29ae883,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727136640359208308,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b2368dd5a295d09af7714df1de67bdb,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59085f22fd0a54fea73c27cb9b9b7199313dabb8a9dbdbbe69a7810536b7ffaf,PodSandboxId:7f9f3a7de5177a2db72c3e19777a0711cf5f9ac36b424081aae8abdeedf10d9b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727136635861760606,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nkbzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79bbcdf6-3ae9-4c2f-9d73-a990a069864f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae8646f943f6d158d9cb6123ee395d7f02fe8f4194ea968bf904f9d60ac4c8d1,PodSandboxId:4b5dbf2a2189385e09c02ad65761e1007bbf4b930164894bc8f1b76217964067,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727136176666162462,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7q7xr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ee31bb5-0c3d-4d9e-9c9e-32d3411bc68a,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05d43a4d133008f80d44c870878a45ceeb7e0e1adc3b08d1d47ce8e2edb36137,PodSandboxId:a91a16106518aeb7290ee145c6ebba24fbaf0ab1b928eb6005c2982202d15f69,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727136026589968172,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nkbzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79bbcdf6-3ae9-4c2f-9d73-a990a069864f,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7a1a19a83d492cf0fda7b3bbf01fb7a12a73aac4c564ad4c2805be1c00817f0,PodSandboxId:1a4ee0160fc1d9dd6258f8fde766345d31e45e3e0d6790d4d9d5bd708cbcb206,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727136026542639422,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ss8lg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37bd392b-d364-4a64-8fa0-852bb245aedc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1596300e66cf2ee0f15d5a362238ef4e99cec8e83ea81be4670347659bbd88e2,PodSandboxId:1a380d04710836380fbd07e38a88bd6c32797798fac60cedb945001fcef619bd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727136014418475831,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qlqss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 365f0414-b74d-42a8-be37-b0c8e03291ac,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdf912809c47a4ae8dddad14a4f537540e25c97121c16fa62418fc1290a8b94b,PodSandboxId:72ade1a0510455fbb68e236046efac5db7e130775d8731e968c6403583d8f266,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727136014134621208,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qzklc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19af917f-9661-4577-92ed-8fc44b573c64,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5459f3bc533d3928a2614c8aadaac043e36370d032cae8eb72d11664bbb74f2,PodSandboxId:40d143641822b8cfe35213ab0da141ef26cf5d327320371cdaf07dee367e1c67,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727136003255471651,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b2368dd5a295d09af7714df1de67bdb,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af224d12661c48914733f3b63e29d17a8fceaf62af777bb609750fb7c8dbd9fd,PodSandboxId:7328f59cdb9935ae3cc6db004e93f8c91143470c0fbb7d2f75380c3331d66ec6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1727136003245833606,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-959539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31ee5d018ce106dd8052557f9d140def,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=db9b6563-1dca-4a68-af97-e4f8989a0c6c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6cc25f35ee143       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       4                   088721be9ee42       storage-provisioner
	7027a9fd4e666       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      4 minutes ago       Running             kube-controller-manager   2                   4617c864ab9fe       kube-controller-manager-ha-959539
	d758bd78eee85       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      4 minutes ago       Running             kube-apiserver            3                   295fd3d0be23e       kube-apiserver-ha-959539
	7f437caaea72c       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   3da9d510a2e3f       busybox-7dff88458-7q7xr
	18f201ca9f86c       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      4 minutes ago       Running             kube-vip                  0                   3163f953f6982       kube-vip-ha-959539
	9e619702803fa       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Exited              storage-provisioner       3                   088721be9ee42       storage-provisioner
	2ee5e707e7481       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      4 minutes ago       Running             coredns                   1                   cda71a093abf7       coredns-7c65d6cfc9-ss8lg
	24b63f0ac92bd       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      4 minutes ago       Running             kube-scheduler            1                   7cd33d17d04b5       kube-scheduler-ha-959539
	a379b9ee451f6       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      4 minutes ago       Exited              kube-apiserver            2                   295fd3d0be23e       kube-apiserver-ha-959539
	521d145c726f0       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      4 minutes ago       Exited              kube-controller-manager   1                   4617c864ab9fe       kube-controller-manager-ha-959539
	e2ffd4f65dffe       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      4 minutes ago       Running             kube-proxy                1                   bcabc506d1d8f       kube-proxy-qzklc
	4f62ad5889916       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      4 minutes ago       Running             etcd                      1                   8b7cd131c6f6e       etcd-ha-959539
	851bbb971d887       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      4 minutes ago       Running             kindnet-cni               1                   fd7203009ec58       kindnet-qlqss
	59085f22fd0a5       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   1                   7f9f3a7de5177       coredns-7c65d6cfc9-nkbzw
	ae8646f943f6d       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   12 minutes ago      Exited              busybox                   0                   4b5dbf2a21893       busybox-7dff88458-7q7xr
	05d43a4d13300       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      15 minutes ago      Exited              coredns                   0                   a91a16106518a       coredns-7c65d6cfc9-nkbzw
	e7a1a19a83d49       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      15 minutes ago      Exited              coredns                   0                   1a4ee0160fc1d       coredns-7c65d6cfc9-ss8lg
	1596300e66cf2       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      15 minutes ago      Exited              kindnet-cni               0                   1a380d0471083       kindnet-qlqss
	cdf912809c47a       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      15 minutes ago      Exited              kube-proxy                0                   72ade1a051045       kube-proxy-qzklc
	d5459f3bc533d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      15 minutes ago      Exited              etcd                      0                   40d143641822b       etcd-ha-959539
	af224d12661c4       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      15 minutes ago      Exited              kube-scheduler            0                   7328f59cdb993       kube-scheduler-ha-959539
	
	
	==> coredns [05d43a4d133008f80d44c870878a45ceeb7e0e1adc3b08d1d47ce8e2edb36137] <==
	[INFO] 10.244.0.4:58501 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.017716872s
	[INFO] 10.244.0.4:37973 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0002021s
	[INFO] 10.244.0.4:43904 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000156858s
	[INFO] 10.244.0.4:48352 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000163626s
	[INFO] 10.244.1.2:52896 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132298s
	[INFO] 10.244.1.2:45449 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000227639s
	[INFO] 10.244.1.2:47616 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00017286s
	[INFO] 10.244.1.2:33521 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000108761s
	[INFO] 10.244.1.2:43587 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012987s
	[INFO] 10.244.2.2:52394 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001362s
	[INFO] 10.244.2.2:43819 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000119859s
	[INFO] 10.244.2.2:35291 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000097457s
	[INFO] 10.244.2.2:56966 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000168721s
	[INFO] 10.244.0.4:52779 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102739s
	[INFO] 10.244.2.2:59382 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000262295s
	[INFO] 10.244.2.2:44447 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000133384s
	[INFO] 10.244.2.2:52951 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000170462s
	[INFO] 10.244.2.2:46956 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000215226s
	[INFO] 10.244.2.2:53703 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000108727s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1784&timeout=8m45s&timeoutSeconds=525&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1778&timeout=8m31s&timeoutSeconds=511&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=1784": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=1784": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [2ee5e707e748130caca35ecd0ba876f633981823f6b1eafb8ba389d88783817c] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:33856->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1327334944]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (24-Sep-2024 00:10:52.323) (total time: 11300ms):
	Trace[1327334944]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:33856->10.96.0.1:443: read: connection reset by peer 11299ms (00:11:03.623)
	Trace[1327334944]: [11.300024519s] [11.300024519s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:33856->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [59085f22fd0a54fea73c27cb9b9b7199313dabb8a9dbdbbe69a7810536b7ffaf] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[100585239]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (24-Sep-2024 00:10:45.160) (total time: 10001ms):
	Trace[100585239]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:10:55.161)
	Trace[100585239]: [10.001253763s] [10.001253763s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:58276->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:58276->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:58290->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:58290->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [e7a1a19a83d492cf0fda7b3bbf01fb7a12a73aac4c564ad4c2805be1c00817f0] <==
	[INFO] 10.244.0.4:43743 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000119977s
	[INFO] 10.244.1.2:32867 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000192169s
	[INFO] 10.244.1.2:43403 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000167697s
	[INFO] 10.244.1.2:57243 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000095722s
	[INFO] 10.244.1.2:48326 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000119715s
	[INFO] 10.244.2.2:49664 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000122596s
	[INFO] 10.244.2.2:40943 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000106169s
	[INFO] 10.244.0.4:36066 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121758s
	[INFO] 10.244.0.4:51023 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000156225s
	[INFO] 10.244.0.4:56715 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000125631s
	[INFO] 10.244.0.4:47944 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000103261s
	[INFO] 10.244.1.2:49407 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000148466s
	[INFO] 10.244.1.2:54979 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000116145s
	[INFO] 10.244.1.2:47442 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000097064s
	[INFO] 10.244.1.2:38143 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000188037s
	[INFO] 10.244.2.2:40107 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000086602s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1784&timeout=6m9s&timeoutSeconds=369&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1727&timeout=6m18s&timeoutSeconds=378&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1784&timeout=9m52s&timeoutSeconds=592&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-959539
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-959539
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c
	                    minikube.k8s.io/name=ha-959539
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_24T00_00_13_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 00:00:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-959539
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 00:15:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 00:11:24 +0000   Tue, 24 Sep 2024 00:00:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 00:11:24 +0000   Tue, 24 Sep 2024 00:00:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 00:11:24 +0000   Tue, 24 Sep 2024 00:00:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 00:11:24 +0000   Tue, 24 Sep 2024 00:00:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.231
	  Hostname:    ha-959539
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0a4b9ce5eed94a13bdbc682549e1dd1e
	  System UUID:                0a4b9ce5-eed9-4a13-bdbc-682549e1dd1e
	  Boot ID:                    679e0a2b-8772-4f6d-9e47-ba8190727387
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7q7xr              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-nkbzw             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7c65d6cfc9-ss8lg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-ha-959539                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-qlqss                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-959539             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-959539    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-qzklc                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-959539             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-959539                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m15s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m12s                  kube-proxy       
	  Normal   Starting                 15m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  15m                    kubelet          Node ha-959539 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           15m                    node-controller  Node ha-959539 event: Registered Node ha-959539 in Controller
	  Normal   NodeHasSufficientPID     15m                    kubelet          Node ha-959539 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    15m                    kubelet          Node ha-959539 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 15m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeReady                15m                    kubelet          Node ha-959539 status is now: NodeReady
	  Normal   RegisteredNode           14m                    node-controller  Node ha-959539 event: Registered Node ha-959539 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-959539 event: Registered Node ha-959539 in Controller
	  Warning  ContainerGCFailed        5m25s (x2 over 6m25s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             5m14s (x3 over 6m3s)   kubelet          Node ha-959539 status is now: NodeNotReady
	  Normal   RegisteredNode           4m23s                  node-controller  Node ha-959539 event: Registered Node ha-959539 in Controller
	  Normal   RegisteredNode           4m10s                  node-controller  Node ha-959539 event: Registered Node ha-959539 in Controller
	  Normal   RegisteredNode           3m18s                  node-controller  Node ha-959539 event: Registered Node ha-959539 in Controller
	
	
	Name:               ha-959539-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-959539-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c
	                    minikube.k8s.io/name=ha-959539
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_24T00_01_07_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 00:01:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-959539-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 00:15:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 00:12:04 +0000   Tue, 24 Sep 2024 00:11:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 00:12:04 +0000   Tue, 24 Sep 2024 00:11:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 00:12:04 +0000   Tue, 24 Sep 2024 00:11:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 00:12:04 +0000   Tue, 24 Sep 2024 00:11:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.71
	  Hostname:    ha-959539-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0f78cfc70aad42d195f1884fe3a82e21
	  System UUID:                0f78cfc7-0aad-42d1-95f1-884fe3a82e21
	  Boot ID:                    516209c9-4720-45b9-91d2-754ed4405940
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-m5qhr                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-959539-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-cbrj7                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-959539-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-959539-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-2hlqx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-959539-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-959539-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m11s                  kube-proxy       
	  Normal  Starting                 14m                    kube-proxy       
	  Normal  Starting                 14m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-959539-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node ha-959539-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-959539-m02 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           14m                    node-controller  Node ha-959539-m02 event: Registered Node ha-959539-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-959539-m02 event: Registered Node ha-959539-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-959539-m02 event: Registered Node ha-959539-m02 in Controller
	  Normal  NodeNotReady             10m                    node-controller  Node ha-959539-m02 status is now: NodeNotReady
	  Normal  Starting                 4m42s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m41s (x8 over 4m41s)  kubelet          Node ha-959539-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m41s (x8 over 4m41s)  kubelet          Node ha-959539-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m41s (x7 over 4m41s)  kubelet          Node ha-959539-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m23s                  node-controller  Node ha-959539-m02 event: Registered Node ha-959539-m02 in Controller
	  Normal  RegisteredNode           4m10s                  node-controller  Node ha-959539-m02 event: Registered Node ha-959539-m02 in Controller
	  Normal  RegisteredNode           3m18s                  node-controller  Node ha-959539-m02 event: Registered Node ha-959539-m02 in Controller
	
	
	Name:               ha-959539-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-959539-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c
	                    minikube.k8s.io/name=ha-959539
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_24T00_03_32_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 00:03:31 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-959539-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 00:13:10 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 24 Sep 2024 00:12:49 +0000   Tue, 24 Sep 2024 00:13:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 24 Sep 2024 00:12:49 +0000   Tue, 24 Sep 2024 00:13:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 24 Sep 2024 00:12:49 +0000   Tue, 24 Sep 2024 00:13:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 24 Sep 2024 00:12:49 +0000   Tue, 24 Sep 2024 00:13:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.183
	  Hostname:    ha-959539-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 55d6e549bf6d4455bd4db681e2cc17b8
	  System UUID:                55d6e549-bf6d-4455-bd4d-b681e2cc17b8
	  Boot ID:                    767ecddb-37eb-4cca-8b96-d9c64515391e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-vhqfp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kindnet-54xw8              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-8h8qr           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   Starting                 2m43s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x2 over 12m)      kubelet          Node ha-959539-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x2 over 12m)      kubelet          Node ha-959539-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 12m)      kubelet          Node ha-959539-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           12m                    node-controller  Node ha-959539-m04 event: Registered Node ha-959539-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-959539-m04 event: Registered Node ha-959539-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-959539-m04 event: Registered Node ha-959539-m04 in Controller
	  Normal   NodeReady                11m                    kubelet          Node ha-959539-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m23s                  node-controller  Node ha-959539-m04 event: Registered Node ha-959539-m04 in Controller
	  Normal   RegisteredNode           4m10s                  node-controller  Node ha-959539-m04 event: Registered Node ha-959539-m04 in Controller
	  Normal   NodeNotReady             3m43s                  node-controller  Node ha-959539-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m18s                  node-controller  Node ha-959539-m04 event: Registered Node ha-959539-m04 in Controller
	  Normal   Starting                 2m48s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 2m48s                  kubelet          Node ha-959539-m04 has been rebooted, boot id: 767ecddb-37eb-4cca-8b96-d9c64515391e
	  Normal   NodeHasSufficientMemory  2m48s (x2 over 2m48s)  kubelet          Node ha-959539-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m48s (x2 over 2m48s)  kubelet          Node ha-959539-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m48s (x2 over 2m48s)  kubelet          Node ha-959539-m04 status is now: NodeHasSufficientPID
	  Normal   NodeReady                2m48s                  kubelet          Node ha-959539-m04 status is now: NodeReady
	  Normal   NodeNotReady             105s                   node-controller  Node ha-959539-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.055717] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062835] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.175047] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.141488] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.281309] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +3.886660] systemd-fstab-generator[750]: Ignoring "noauto" option for root device
	[Sep24 00:00] systemd-fstab-generator[888]: Ignoring "noauto" option for root device
	[  +0.061155] kauditd_printk_skb: 158 callbacks suppressed
	[  +8.064379] kauditd_printk_skb: 74 callbacks suppressed
	[  +2.136832] systemd-fstab-generator[1303]: Ignoring "noauto" option for root device
	[  +2.892614] kauditd_printk_skb: 43 callbacks suppressed
	[ +11.264409] kauditd_printk_skb: 15 callbacks suppressed
	[Sep24 00:01] kauditd_printk_skb: 26 callbacks suppressed
	[Sep24 00:07] kauditd_printk_skb: 1 callbacks suppressed
	[Sep24 00:10] systemd-fstab-generator[3515]: Ignoring "noauto" option for root device
	[  +0.152411] systemd-fstab-generator[3527]: Ignoring "noauto" option for root device
	[  +0.181272] systemd-fstab-generator[3541]: Ignoring "noauto" option for root device
	[  +0.145012] systemd-fstab-generator[3553]: Ignoring "noauto" option for root device
	[  +0.286016] systemd-fstab-generator[3581]: Ignoring "noauto" option for root device
	[  +7.343411] systemd-fstab-generator[3679]: Ignoring "noauto" option for root device
	[  +0.088960] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.481341] kauditd_printk_skb: 22 callbacks suppressed
	[ +13.213742] kauditd_printk_skb: 87 callbacks suppressed
	[Sep24 00:11] kauditd_printk_skb: 1 callbacks suppressed
	[ +16.424429] kauditd_printk_skb: 3 callbacks suppressed
	
	
	==> etcd [4f62ad588991622e61011c3eb3160fd878786530c3bfe7b3b5ef9ed37255c376] <==
	{"level":"info","ts":"2024-09-24T00:12:10.586781Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"6a82bbfd8eee2a80","remote-peer-id":"a30d65a0357cca60"}
	{"level":"info","ts":"2024-09-24T00:12:10.598532Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"6a82bbfd8eee2a80","to":"a30d65a0357cca60","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-24T00:12:10.598577Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"6a82bbfd8eee2a80","remote-peer-id":"a30d65a0357cca60"}
	{"level":"warn","ts":"2024-09-24T00:12:11.356872Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a30d65a0357cca60","rtt":"0s","error":"dial tcp 192.168.39.244:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-24T00:12:11.356967Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a30d65a0357cca60","rtt":"0s","error":"dial tcp 192.168.39.244:2380: connect: connection refused"}
	{"level":"info","ts":"2024-09-24T00:12:11.956294Z","caller":"traceutil/trace.go:171","msg":"trace[2032135025] transaction","detail":"{read_only:false; response_revision:2273; number_of_response:1; }","duration":"150.867941ms","start":"2024-09-24T00:12:11.805392Z","end":"2024-09-24T00:12:11.956260Z","steps":["trace[2032135025] 'process raft request'  (duration: 82.825099ms)","trace[2032135025] 'compare'  (duration: 67.801127ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-24T00:12:14.357294Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"152.191491ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/plndr-cp-lock\" ","response":"range_response_count:1 size:434"}
	{"level":"info","ts":"2024-09-24T00:12:14.357415Z","caller":"traceutil/trace.go:171","msg":"trace[521575336] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:2283; }","duration":"152.421988ms","start":"2024-09-24T00:12:14.204980Z","end":"2024-09-24T00:12:14.357402Z","steps":["trace[521575336] 'range keys from in-memory index tree'  (duration: 151.166203ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-24T00:13:03.541508Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a82bbfd8eee2a80 switched to configuration voters=(7674903412691839616 17230736894093923586)"}
	{"level":"info","ts":"2024-09-24T00:13:03.543504Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"1a20717615099fdd","local-member-id":"6a82bbfd8eee2a80","removed-remote-peer-id":"a30d65a0357cca60","removed-remote-peer-urls":["https://192.168.39.244:2380"]}
	{"level":"info","ts":"2024-09-24T00:13:03.543670Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"a30d65a0357cca60"}
	{"level":"warn","ts":"2024-09-24T00:13:03.543850Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"a30d65a0357cca60"}
	{"level":"info","ts":"2024-09-24T00:13:03.543885Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"a30d65a0357cca60"}
	{"level":"warn","ts":"2024-09-24T00:13:03.543988Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"a30d65a0357cca60"}
	{"level":"info","ts":"2024-09-24T00:13:03.544009Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"a30d65a0357cca60"}
	{"level":"info","ts":"2024-09-24T00:13:03.544250Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6a82bbfd8eee2a80","remote-peer-id":"a30d65a0357cca60"}
	{"level":"warn","ts":"2024-09-24T00:13:03.544489Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6a82bbfd8eee2a80","remote-peer-id":"a30d65a0357cca60","error":"context canceled"}
	{"level":"warn","ts":"2024-09-24T00:13:03.544543Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"a30d65a0357cca60","error":"failed to read a30d65a0357cca60 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-09-24T00:13:03.544576Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6a82bbfd8eee2a80","remote-peer-id":"a30d65a0357cca60"}
	{"level":"warn","ts":"2024-09-24T00:13:03.545057Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"6a82bbfd8eee2a80","remote-peer-id":"a30d65a0357cca60","error":"http: read on closed response body"}
	{"level":"info","ts":"2024-09-24T00:13:03.545099Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6a82bbfd8eee2a80","remote-peer-id":"a30d65a0357cca60"}
	{"level":"info","ts":"2024-09-24T00:13:03.545113Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"a30d65a0357cca60"}
	{"level":"info","ts":"2024-09-24T00:13:03.545124Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"6a82bbfd8eee2a80","removed-remote-peer-id":"a30d65a0357cca60"}
	{"level":"warn","ts":"2024-09-24T00:13:03.566912Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"6a82bbfd8eee2a80","remote-peer-id-stream-handler":"6a82bbfd8eee2a80","remote-peer-id-from":"a30d65a0357cca60"}
	{"level":"warn","ts":"2024-09-24T00:13:03.567290Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"6a82bbfd8eee2a80","remote-peer-id-stream-handler":"6a82bbfd8eee2a80","remote-peer-id-from":"a30d65a0357cca60"}
	
	
	==> etcd [d5459f3bc533d3928a2614c8aadaac043e36370d032cae8eb72d11664bbb74f2] <==
	{"level":"warn","ts":"2024-09-24T00:08:53.704667Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-24T00:08:52.929656Z","time spent":"775.007777ms","remote":"127.0.0.1:42100","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":0,"response size":0,"request content":"key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" limit:10000 "}
	2024/09/24 00:08:53 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-24T00:08:53.692283Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-24T00:08:52.940632Z","time spent":"751.645446ms","remote":"127.0.0.1:42340","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":0,"response size":0,"request content":"key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" limit:10000 "}
	2024/09/24 00:08:53 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-09-24T00:08:53.771502Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"6a82bbfd8eee2a80","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-24T00:08:53.771782Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"ef1fdfe9aeaf9502"}
	{"level":"info","ts":"2024-09-24T00:08:53.771842Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"ef1fdfe9aeaf9502"}
	{"level":"info","ts":"2024-09-24T00:08:53.771922Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"ef1fdfe9aeaf9502"}
	{"level":"info","ts":"2024-09-24T00:08:53.772050Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502"}
	{"level":"info","ts":"2024-09-24T00:08:53.772157Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502"}
	{"level":"info","ts":"2024-09-24T00:08:53.772261Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6a82bbfd8eee2a80","remote-peer-id":"ef1fdfe9aeaf9502"}
	{"level":"info","ts":"2024-09-24T00:08:53.772366Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"ef1fdfe9aeaf9502"}
	{"level":"info","ts":"2024-09-24T00:08:53.772431Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"a30d65a0357cca60"}
	{"level":"info","ts":"2024-09-24T00:08:53.772474Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"a30d65a0357cca60"}
	{"level":"info","ts":"2024-09-24T00:08:53.772561Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"a30d65a0357cca60"}
	{"level":"info","ts":"2024-09-24T00:08:53.772689Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6a82bbfd8eee2a80","remote-peer-id":"a30d65a0357cca60"}
	{"level":"info","ts":"2024-09-24T00:08:53.772761Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6a82bbfd8eee2a80","remote-peer-id":"a30d65a0357cca60"}
	{"level":"info","ts":"2024-09-24T00:08:53.772825Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6a82bbfd8eee2a80","remote-peer-id":"a30d65a0357cca60"}
	{"level":"info","ts":"2024-09-24T00:08:53.772838Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"a30d65a0357cca60"}
	{"level":"info","ts":"2024-09-24T00:08:53.775688Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.231:2380"}
	{"level":"warn","ts":"2024-09-24T00:08:53.775781Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.851504369s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-09-24T00:08:53.775851Z","caller":"traceutil/trace.go:171","msg":"trace[1792382580] range","detail":"{range_begin:; range_end:; }","duration":"1.851644339s","start":"2024-09-24T00:08:51.924198Z","end":"2024-09-24T00:08:53.775842Z","steps":["trace[1792382580] 'agreement among raft nodes before linearized reading'  (duration: 1.851500901s)"],"step_count":1}
	{"level":"error","ts":"2024-09-24T00:08:53.775902Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: server stopped\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-09-24T00:08:53.775795Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.231:2380"}
	{"level":"info","ts":"2024-09-24T00:08:53.775964Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-959539","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.231:2380"],"advertise-client-urls":["https://192.168.39.231:2379"]}
	
	
	==> kernel <==
	 00:15:37 up 16 min,  0 users,  load average: 0.13, 0.40, 0.28
	Linux ha-959539 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [1596300e66cf2ee0f15d5a362238ef4e99cec8e83ea81be4670347659bbd88e2] <==
	I0924 00:08:25.413712       1 main.go:295] Handling node with IPs: map[192.168.39.231:{}]
	I0924 00:08:25.413817       1 main.go:299] handling current node
	I0924 00:08:25.413844       1 main.go:295] Handling node with IPs: map[192.168.39.71:{}]
	I0924 00:08:25.413862       1 main.go:322] Node ha-959539-m02 has CIDR [10.244.1.0/24] 
	I0924 00:08:25.414009       1 main.go:295] Handling node with IPs: map[192.168.39.244:{}]
	I0924 00:08:25.414031       1 main.go:322] Node ha-959539-m03 has CIDR [10.244.2.0/24] 
	I0924 00:08:25.414096       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0924 00:08:25.414114       1 main.go:322] Node ha-959539-m04 has CIDR [10.244.3.0/24] 
	I0924 00:08:35.413240       1 main.go:295] Handling node with IPs: map[192.168.39.231:{}]
	I0924 00:08:35.413283       1 main.go:299] handling current node
	I0924 00:08:35.413314       1 main.go:295] Handling node with IPs: map[192.168.39.71:{}]
	I0924 00:08:35.413319       1 main.go:322] Node ha-959539-m02 has CIDR [10.244.1.0/24] 
	I0924 00:08:35.413494       1 main.go:295] Handling node with IPs: map[192.168.39.244:{}]
	I0924 00:08:35.413511       1 main.go:322] Node ha-959539-m03 has CIDR [10.244.2.0/24] 
	I0924 00:08:35.413566       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0924 00:08:35.413582       1 main.go:322] Node ha-959539-m04 has CIDR [10.244.3.0/24] 
	I0924 00:08:45.417572       1 main.go:295] Handling node with IPs: map[192.168.39.231:{}]
	I0924 00:08:45.417635       1 main.go:299] handling current node
	I0924 00:08:45.417654       1 main.go:295] Handling node with IPs: map[192.168.39.71:{}]
	I0924 00:08:45.417663       1 main.go:322] Node ha-959539-m02 has CIDR [10.244.1.0/24] 
	I0924 00:08:45.417827       1 main.go:295] Handling node with IPs: map[192.168.39.244:{}]
	I0924 00:08:45.417854       1 main.go:322] Node ha-959539-m03 has CIDR [10.244.2.0/24] 
	I0924 00:08:45.417942       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0924 00:08:45.417967       1 main.go:322] Node ha-959539-m04 has CIDR [10.244.3.0/24] 
	E0924 00:08:51.930735       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: the server has asked for the client to provide credentials (get nodes)
	
	
	==> kindnet [851bbb971d887973d5c8ec979d0dcd4d045dc540c80f4a14667f464726050b0e] <==
	I0924 00:14:51.425628       1 main.go:322] Node ha-959539-m04 has CIDR [10.244.3.0/24] 
	I0924 00:15:01.433917       1 main.go:295] Handling node with IPs: map[192.168.39.71:{}]
	I0924 00:15:01.434037       1 main.go:322] Node ha-959539-m02 has CIDR [10.244.1.0/24] 
	I0924 00:15:01.434182       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0924 00:15:01.434206       1 main.go:322] Node ha-959539-m04 has CIDR [10.244.3.0/24] 
	I0924 00:15:01.434267       1 main.go:295] Handling node with IPs: map[192.168.39.231:{}]
	I0924 00:15:01.434286       1 main.go:299] handling current node
	I0924 00:15:11.428471       1 main.go:295] Handling node with IPs: map[192.168.39.231:{}]
	I0924 00:15:11.428528       1 main.go:299] handling current node
	I0924 00:15:11.428544       1 main.go:295] Handling node with IPs: map[192.168.39.71:{}]
	I0924 00:15:11.428550       1 main.go:322] Node ha-959539-m02 has CIDR [10.244.1.0/24] 
	I0924 00:15:11.428715       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0924 00:15:11.428738       1 main.go:322] Node ha-959539-m04 has CIDR [10.244.3.0/24] 
	I0924 00:15:21.434438       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0924 00:15:21.434606       1 main.go:322] Node ha-959539-m04 has CIDR [10.244.3.0/24] 
	I0924 00:15:21.434855       1 main.go:295] Handling node with IPs: map[192.168.39.231:{}]
	I0924 00:15:21.434892       1 main.go:299] handling current node
	I0924 00:15:21.434919       1 main.go:295] Handling node with IPs: map[192.168.39.71:{}]
	I0924 00:15:21.434936       1 main.go:322] Node ha-959539-m02 has CIDR [10.244.1.0/24] 
	I0924 00:15:31.433485       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0924 00:15:31.433620       1 main.go:322] Node ha-959539-m04 has CIDR [10.244.3.0/24] 
	I0924 00:15:31.433787       1 main.go:295] Handling node with IPs: map[192.168.39.231:{}]
	I0924 00:15:31.433866       1 main.go:299] handling current node
	I0924 00:15:31.433935       1 main.go:295] Handling node with IPs: map[192.168.39.71:{}]
	I0924 00:15:31.433986       1 main.go:322] Node ha-959539-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [a379b9ee451f6b18d5d78841a34fd308c5d6afe202d8dbc7c5e229edb0dd692a] <==
	I0924 00:10:41.000194       1 options.go:228] external host was not specified, using 192.168.39.231
	I0924 00:10:41.005156       1 server.go:142] Version: v1.31.1
	I0924 00:10:41.005221       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 00:10:41.365824       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0924 00:10:41.381961       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0924 00:10:41.386994       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0924 00:10:41.387035       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0924 00:10:41.387411       1 instance.go:232] Using reconciler: lease
	W0924 00:11:01.363900       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0924 00:11:01.364721       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0924 00:11:01.388849       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0924 00:11:01.388944       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [d758bd78eee85d39c9fb81ffa0f75b1186b776425228dd8e7762ea1c90fa9048] <==
	I0924 00:11:20.445025       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0924 00:11:20.453059       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0924 00:11:20.453391       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0924 00:11:20.455074       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0924 00:11:20.455262       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0924 00:11:20.455289       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0924 00:11:20.455429       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0924 00:11:20.455979       1 shared_informer.go:320] Caches are synced for configmaps
	I0924 00:11:20.457868       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0924 00:11:20.457924       1 aggregator.go:171] initial CRD sync complete...
	I0924 00:11:20.457962       1 autoregister_controller.go:144] Starting autoregister controller
	I0924 00:11:20.457985       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0924 00:11:20.458006       1 cache.go:39] Caches are synced for autoregister controller
	W0924 00:11:20.476896       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.244 192.168.39.71]
	I0924 00:11:20.499589       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0924 00:11:20.502957       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0924 00:11:20.503039       1 policy_source.go:224] refreshing policies
	I0924 00:11:20.553593       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0924 00:11:20.578963       1 controller.go:615] quota admission added evaluator for: endpoints
	I0924 00:11:20.588460       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0924 00:11:20.593058       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0924 00:11:21.361020       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0924 00:11:21.808087       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.231 192.168.39.244 192.168.39.71]
	W0924 00:11:31.946753       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.231 192.168.39.71]
	W0924 00:13:11.816049       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.231 192.168.39.71]
	
	
	==> kube-controller-manager [521d145c726f059e9aaa8a8f52709d240ffeeff570c12816ccd0f9fca9fac337] <==
	I0924 00:10:41.944197       1 serving.go:386] Generated self-signed cert in-memory
	I0924 00:10:42.460845       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0924 00:10:42.460941       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 00:10:42.462471       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0924 00:10:42.462619       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0924 00:10:42.462767       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0924 00:10:42.462824       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0924 00:11:02.465465       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.231:8443/healthz\": dial tcp 192.168.39.231:8443: connect: connection refused"
	
	
	==> kube-controller-manager [7027a9fd4e6666930acaf7fbd168d3d7385b4b482312d2f87034bc24870a0357] <==
	I0924 00:13:00.439519       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="43.634007ms"
	I0924 00:13:00.439628       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="56.172µs"
	I0924 00:13:02.332227       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="64.368µs"
	I0924 00:13:02.660933       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="49.771µs"
	I0924 00:13:02.665131       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="90.578µs"
	I0924 00:13:05.229725       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="13.195863ms"
	I0924 00:13:05.230041       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="65.43µs"
	I0924 00:13:14.546320       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m03"
	I0924 00:13:14.546863       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-959539-m04"
	E0924 00:13:27.902693       1 gc_controller.go:151] "Failed to get node" err="node \"ha-959539-m03\" not found" logger="pod-garbage-collector-controller" node="ha-959539-m03"
	E0924 00:13:27.902812       1 gc_controller.go:151] "Failed to get node" err="node \"ha-959539-m03\" not found" logger="pod-garbage-collector-controller" node="ha-959539-m03"
	E0924 00:13:27.902821       1 gc_controller.go:151] "Failed to get node" err="node \"ha-959539-m03\" not found" logger="pod-garbage-collector-controller" node="ha-959539-m03"
	E0924 00:13:27.902827       1 gc_controller.go:151] "Failed to get node" err="node \"ha-959539-m03\" not found" logger="pod-garbage-collector-controller" node="ha-959539-m03"
	E0924 00:13:27.902832       1 gc_controller.go:151] "Failed to get node" err="node \"ha-959539-m03\" not found" logger="pod-garbage-collector-controller" node="ha-959539-m03"
	E0924 00:13:47.903243       1 gc_controller.go:151] "Failed to get node" err="node \"ha-959539-m03\" not found" logger="pod-garbage-collector-controller" node="ha-959539-m03"
	E0924 00:13:47.903288       1 gc_controller.go:151] "Failed to get node" err="node \"ha-959539-m03\" not found" logger="pod-garbage-collector-controller" node="ha-959539-m03"
	E0924 00:13:47.903295       1 gc_controller.go:151] "Failed to get node" err="node \"ha-959539-m03\" not found" logger="pod-garbage-collector-controller" node="ha-959539-m03"
	E0924 00:13:47.903300       1 gc_controller.go:151] "Failed to get node" err="node \"ha-959539-m03\" not found" logger="pod-garbage-collector-controller" node="ha-959539-m03"
	E0924 00:13:47.903305       1 gc_controller.go:151] "Failed to get node" err="node \"ha-959539-m03\" not found" logger="pod-garbage-collector-controller" node="ha-959539-m03"
	I0924 00:13:52.905377       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	I0924 00:13:52.932138       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	I0924 00:13:52.959235       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="19.45298ms"
	I0924 00:13:52.959759       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="80.615µs"
	I0924 00:13:55.132744       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	I0924 00:13:58.018316       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-959539-m04"
	
	
	==> kube-proxy [cdf912809c47a4ae8dddad14a4f537540e25c97121c16fa62418fc1290a8b94b] <==
	E0924 00:07:36.903298       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-959539&resourceVersion=1692\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0924 00:07:39.974714       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1701": dial tcp 192.168.39.254:8443: connect: no route to host
	E0924 00:07:39.974867       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1701\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0924 00:07:39.974821       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1707": dial tcp 192.168.39.254:8443: connect: no route to host
	E0924 00:07:39.974992       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1707\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0924 00:07:43.110833       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-959539&resourceVersion=1692": dial tcp 192.168.39.254:8443: connect: no route to host
	E0924 00:07:43.111072       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-959539&resourceVersion=1692\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0924 00:07:46.182711       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1701": dial tcp 192.168.39.254:8443: connect: no route to host
	E0924 00:07:46.182836       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1701\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0924 00:07:49.256714       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-959539&resourceVersion=1692": dial tcp 192.168.39.254:8443: connect: no route to host
	E0924 00:07:49.256862       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-959539&resourceVersion=1692\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0924 00:07:49.257159       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1707": dial tcp 192.168.39.254:8443: connect: no route to host
	E0924 00:07:49.257257       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1707\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0924 00:07:58.472260       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1701": dial tcp 192.168.39.254:8443: connect: no route to host
	E0924 00:07:58.472477       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1701\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0924 00:07:58.472784       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1707": dial tcp 192.168.39.254:8443: connect: no route to host
	E0924 00:07:58.473286       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1707\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0924 00:08:01.544035       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-959539&resourceVersion=1692": dial tcp 192.168.39.254:8443: connect: no route to host
	E0924 00:08:01.544239       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-959539&resourceVersion=1692\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0924 00:08:16.903787       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1707": dial tcp 192.168.39.254:8443: connect: no route to host
	E0924 00:08:16.903928       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1707\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0924 00:08:19.975971       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1701": dial tcp 192.168.39.254:8443: connect: no route to host
	E0924 00:08:19.976271       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1701\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0924 00:08:26.120189       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-959539&resourceVersion=1692": dial tcp 192.168.39.254:8443: connect: no route to host
	E0924 00:08:26.120318       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-959539&resourceVersion=1692\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-proxy [e2ffd4f65dffe539e8bba5e57b28008ef75bbfa15d4c1e995ffc6b99603efe60] <==
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0924 00:10:44.359946       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-959539\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0924 00:10:47.432092       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-959539\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0924 00:10:50.503399       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-959539\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0924 00:10:56.646844       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-959539\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0924 00:11:05.864838       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-959539\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0924 00:11:24.296845       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-959539\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0924 00:11:24.296990       1 server.go:646] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	E0924 00:11:24.297135       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0924 00:11:24.372435       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0924 00:11:24.372530       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0924 00:11:24.372577       1 server_linux.go:169] "Using iptables Proxier"
	I0924 00:11:24.375072       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0924 00:11:24.375541       1 server.go:483] "Version info" version="v1.31.1"
	I0924 00:11:24.375577       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 00:11:24.377881       1 config.go:199] "Starting service config controller"
	I0924 00:11:24.377946       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0924 00:11:24.378002       1 config.go:105] "Starting endpoint slice config controller"
	I0924 00:11:24.378018       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0924 00:11:24.379035       1 config.go:328] "Starting node config controller"
	I0924 00:11:24.379070       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0924 00:11:24.479068       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0924 00:11:24.479178       1 shared_informer.go:320] Caches are synced for node config
	I0924 00:11:24.479286       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [24b63f0ac92bd94c3f90ea5bc761bc7b4d3724f6ddbea71f1ad09960ca17e379] <==
	W0924 00:11:11.333216       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.231:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.231:8443: connect: connection refused
	E0924 00:11:11.333405       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.231:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.231:8443: connect: connection refused" logger="UnhandledError"
	W0924 00:11:11.922948       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.231:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.231:8443: connect: connection refused
	E0924 00:11:11.923021       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.231:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.231:8443: connect: connection refused" logger="UnhandledError"
	W0924 00:11:12.440523       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.231:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.231:8443: connect: connection refused
	E0924 00:11:12.440678       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.231:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.231:8443: connect: connection refused" logger="UnhandledError"
	W0924 00:11:12.711835       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.231:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.231:8443: connect: connection refused
	E0924 00:11:12.711883       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.231:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.231:8443: connect: connection refused" logger="UnhandledError"
	W0924 00:11:12.740977       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.231:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.231:8443: connect: connection refused
	E0924 00:11:12.741035       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.231:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.231:8443: connect: connection refused" logger="UnhandledError"
	W0924 00:11:12.750966       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.231:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.231:8443: connect: connection refused
	E0924 00:11:12.751008       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.231:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.231:8443: connect: connection refused" logger="UnhandledError"
	W0924 00:11:12.857706       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.231:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.231:8443: connect: connection refused
	E0924 00:11:12.857772       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.231:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.231:8443: connect: connection refused" logger="UnhandledError"
	W0924 00:11:17.781248       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.231:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.231:8443: connect: connection refused
	E0924 00:11:17.781396       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.231:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.231:8443: connect: connection refused" logger="UnhandledError"
	W0924 00:11:18.190692       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.231:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.231:8443: connect: connection refused
	E0924 00:11:18.190740       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.231:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.231:8443: connect: connection refused" logger="UnhandledError"
	W0924 00:11:20.374239       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0924 00:11:20.374577       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0924 00:11:21.003980       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0924 00:13:00.246492       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-vhqfp\": pod busybox-7dff88458-vhqfp is already assigned to node \"ha-959539-m04\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-vhqfp" node="ha-959539-m04"
	E0924 00:13:00.246640       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 8ee08c91-798d-468d-a9d0-fe80fb5d5397(default/busybox-7dff88458-vhqfp) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-vhqfp"
	E0924 00:13:00.246689       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-vhqfp\": pod busybox-7dff88458-vhqfp is already assigned to node \"ha-959539-m04\"" pod="default/busybox-7dff88458-vhqfp"
	I0924 00:13:00.246713       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-vhqfp" node="ha-959539-m04"
	
	
	==> kube-scheduler [af224d12661c48914733f3b63e29d17a8fceaf62af777bb609750fb7c8dbd9fd] <==
	E0924 00:03:31.975081       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 9594238c-336e-479f-8424-bf5663475f7d(kube-system/kube-proxy-h87p2) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-h87p2"
	E0924 00:03:31.975198       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-h87p2\": pod kube-proxy-h87p2 is already assigned to node \"ha-959539-m04\"" pod="kube-system/kube-proxy-h87p2"
	I0924 00:03:31.975297       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-h87p2" node="ha-959539-m04"
	E0924 00:03:32.025106       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-zfglg\": pod kindnet-zfglg is already assigned to node \"ha-959539-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-zfglg" node="ha-959539-m04"
	E0924 00:03:32.025246       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-zfglg\": pod kindnet-zfglg is already assigned to node \"ha-959539-m04\"" pod="kube-system/kindnet-zfglg"
	E0924 00:08:30.075812       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0924 00:08:30.075915       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0924 00:08:43.837663       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0924 00:08:45.451114       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0924 00:08:45.757200       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0924 00:08:45.980962       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0924 00:08:47.210556       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0924 00:08:47.245778       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0924 00:08:47.491539       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0924 00:08:47.711095       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0924 00:08:48.100680       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0924 00:08:48.487712       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0924 00:08:48.702291       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0924 00:08:50.030107       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0924 00:08:51.029004       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	W0924 00:08:51.119547       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0924 00:08:51.119707       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0924 00:08:52.150727       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0924 00:08:52.150831       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	E0924 00:08:53.686941       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 24 00:14:12 ha-959539 kubelet[1310]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 24 00:14:12 ha-959539 kubelet[1310]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 24 00:14:12 ha-959539 kubelet[1310]: E0924 00:14:12.764711    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136852760687077,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:14:12 ha-959539 kubelet[1310]: E0924 00:14:12.764971    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136852760687077,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:14:22 ha-959539 kubelet[1310]: E0924 00:14:22.766802    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136862766399024,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:14:22 ha-959539 kubelet[1310]: E0924 00:14:22.766849    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136862766399024,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:14:32 ha-959539 kubelet[1310]: E0924 00:14:32.769045    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136872768551195,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:14:32 ha-959539 kubelet[1310]: E0924 00:14:32.769540    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136872768551195,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:14:42 ha-959539 kubelet[1310]: E0924 00:14:42.771140    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136882770770844,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:14:42 ha-959539 kubelet[1310]: E0924 00:14:42.771178    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136882770770844,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:14:52 ha-959539 kubelet[1310]: E0924 00:14:52.773118    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136892772756861,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:14:52 ha-959539 kubelet[1310]: E0924 00:14:52.773147    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136892772756861,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:15:02 ha-959539 kubelet[1310]: E0924 00:15:02.774856    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136902774485455,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:15:02 ha-959539 kubelet[1310]: E0924 00:15:02.774937    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136902774485455,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:15:12 ha-959539 kubelet[1310]: E0924 00:15:12.545714    1310 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 24 00:15:12 ha-959539 kubelet[1310]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 24 00:15:12 ha-959539 kubelet[1310]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 24 00:15:12 ha-959539 kubelet[1310]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 24 00:15:12 ha-959539 kubelet[1310]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 24 00:15:12 ha-959539 kubelet[1310]: E0924 00:15:12.776598    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136912776252065,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:15:12 ha-959539 kubelet[1310]: E0924 00:15:12.776729    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136912776252065,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:15:22 ha-959539 kubelet[1310]: E0924 00:15:22.779065    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136922778758390,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:15:22 ha-959539 kubelet[1310]: E0924 00:15:22.779090    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136922778758390,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:15:32 ha-959539 kubelet[1310]: E0924 00:15:32.780178    1310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136932779933864,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:15:32 ha-959539 kubelet[1310]: E0924 00:15:32.780218    1310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727136932779933864,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0924 00:15:36.400223   34237 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19696-7623/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-959539 -n ha-959539
helpers_test.go:261: (dbg) Run:  kubectl --context ha-959539 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.71s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (327.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-246036
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-246036
E0924 00:30:43.334106   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-246036: exit status 82 (2m1.818282004s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-246036-m03"  ...
	* Stopping node "multinode-246036-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-246036" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-246036 --wait=true -v=8 --alsologtostderr
E0924 00:33:38.366152   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/functional-666615/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:35:43.333429   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-246036 --wait=true -v=8 --alsologtostderr: (3m23.647353881s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-246036
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-246036 -n multinode-246036
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-246036 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-246036 logs -n 25: (1.534098136s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-246036 ssh -n                                                                 | multinode-246036 | jenkins | v1.34.0 | 24 Sep 24 00:29 UTC | 24 Sep 24 00:29 UTC |
	|         | multinode-246036-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-246036 cp multinode-246036-m02:/home/docker/cp-test.txt                       | multinode-246036 | jenkins | v1.34.0 | 24 Sep 24 00:29 UTC | 24 Sep 24 00:29 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile589421806/001/cp-test_multinode-246036-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-246036 ssh -n                                                                 | multinode-246036 | jenkins | v1.34.0 | 24 Sep 24 00:29 UTC | 24 Sep 24 00:29 UTC |
	|         | multinode-246036-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-246036 cp multinode-246036-m02:/home/docker/cp-test.txt                       | multinode-246036 | jenkins | v1.34.0 | 24 Sep 24 00:29 UTC | 24 Sep 24 00:29 UTC |
	|         | multinode-246036:/home/docker/cp-test_multinode-246036-m02_multinode-246036.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-246036 ssh -n                                                                 | multinode-246036 | jenkins | v1.34.0 | 24 Sep 24 00:29 UTC | 24 Sep 24 00:29 UTC |
	|         | multinode-246036-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-246036 ssh -n multinode-246036 sudo cat                                       | multinode-246036 | jenkins | v1.34.0 | 24 Sep 24 00:29 UTC | 24 Sep 24 00:29 UTC |
	|         | /home/docker/cp-test_multinode-246036-m02_multinode-246036.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-246036 cp multinode-246036-m02:/home/docker/cp-test.txt                       | multinode-246036 | jenkins | v1.34.0 | 24 Sep 24 00:29 UTC | 24 Sep 24 00:29 UTC |
	|         | multinode-246036-m03:/home/docker/cp-test_multinode-246036-m02_multinode-246036-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-246036 ssh -n                                                                 | multinode-246036 | jenkins | v1.34.0 | 24 Sep 24 00:29 UTC | 24 Sep 24 00:29 UTC |
	|         | multinode-246036-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-246036 ssh -n multinode-246036-m03 sudo cat                                   | multinode-246036 | jenkins | v1.34.0 | 24 Sep 24 00:29 UTC | 24 Sep 24 00:29 UTC |
	|         | /home/docker/cp-test_multinode-246036-m02_multinode-246036-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-246036 cp testdata/cp-test.txt                                                | multinode-246036 | jenkins | v1.34.0 | 24 Sep 24 00:29 UTC | 24 Sep 24 00:29 UTC |
	|         | multinode-246036-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-246036 ssh -n                                                                 | multinode-246036 | jenkins | v1.34.0 | 24 Sep 24 00:29 UTC | 24 Sep 24 00:29 UTC |
	|         | multinode-246036-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-246036 cp multinode-246036-m03:/home/docker/cp-test.txt                       | multinode-246036 | jenkins | v1.34.0 | 24 Sep 24 00:29 UTC | 24 Sep 24 00:29 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile589421806/001/cp-test_multinode-246036-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-246036 ssh -n                                                                 | multinode-246036 | jenkins | v1.34.0 | 24 Sep 24 00:29 UTC | 24 Sep 24 00:29 UTC |
	|         | multinode-246036-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-246036 cp multinode-246036-m03:/home/docker/cp-test.txt                       | multinode-246036 | jenkins | v1.34.0 | 24 Sep 24 00:29 UTC | 24 Sep 24 00:29 UTC |
	|         | multinode-246036:/home/docker/cp-test_multinode-246036-m03_multinode-246036.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-246036 ssh -n                                                                 | multinode-246036 | jenkins | v1.34.0 | 24 Sep 24 00:29 UTC | 24 Sep 24 00:29 UTC |
	|         | multinode-246036-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-246036 ssh -n multinode-246036 sudo cat                                       | multinode-246036 | jenkins | v1.34.0 | 24 Sep 24 00:29 UTC | 24 Sep 24 00:29 UTC |
	|         | /home/docker/cp-test_multinode-246036-m03_multinode-246036.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-246036 cp multinode-246036-m03:/home/docker/cp-test.txt                       | multinode-246036 | jenkins | v1.34.0 | 24 Sep 24 00:29 UTC | 24 Sep 24 00:29 UTC |
	|         | multinode-246036-m02:/home/docker/cp-test_multinode-246036-m03_multinode-246036-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-246036 ssh -n                                                                 | multinode-246036 | jenkins | v1.34.0 | 24 Sep 24 00:29 UTC | 24 Sep 24 00:29 UTC |
	|         | multinode-246036-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-246036 ssh -n multinode-246036-m02 sudo cat                                   | multinode-246036 | jenkins | v1.34.0 | 24 Sep 24 00:29 UTC | 24 Sep 24 00:29 UTC |
	|         | /home/docker/cp-test_multinode-246036-m03_multinode-246036-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-246036 node stop m03                                                          | multinode-246036 | jenkins | v1.34.0 | 24 Sep 24 00:29 UTC | 24 Sep 24 00:29 UTC |
	| node    | multinode-246036 node start                                                             | multinode-246036 | jenkins | v1.34.0 | 24 Sep 24 00:29 UTC | 24 Sep 24 00:30 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-246036                                                                | multinode-246036 | jenkins | v1.34.0 | 24 Sep 24 00:30 UTC |                     |
	| stop    | -p multinode-246036                                                                     | multinode-246036 | jenkins | v1.34.0 | 24 Sep 24 00:30 UTC |                     |
	| start   | -p multinode-246036                                                                     | multinode-246036 | jenkins | v1.34.0 | 24 Sep 24 00:32 UTC | 24 Sep 24 00:36 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-246036                                                                | multinode-246036 | jenkins | v1.34.0 | 24 Sep 24 00:36 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 00:32:39
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 00:32:39.934227   44220 out.go:345] Setting OutFile to fd 1 ...
	I0924 00:32:39.934369   44220 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:32:39.934379   44220 out.go:358] Setting ErrFile to fd 2...
	I0924 00:32:39.934384   44220 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:32:39.934577   44220 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7623/.minikube/bin
	I0924 00:32:39.935164   44220 out.go:352] Setting JSON to false
	I0924 00:32:39.936060   44220 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4504,"bootTime":1727133456,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0924 00:32:39.936126   44220 start.go:139] virtualization: kvm guest
	I0924 00:32:39.938508   44220 out.go:177] * [multinode-246036] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0924 00:32:39.939925   44220 notify.go:220] Checking for updates...
	I0924 00:32:39.939953   44220 out.go:177]   - MINIKUBE_LOCATION=19696
	I0924 00:32:39.941274   44220 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 00:32:39.942626   44220 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 00:32:39.943910   44220 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7623/.minikube
	I0924 00:32:39.945348   44220 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0924 00:32:39.946838   44220 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 00:32:39.948798   44220 config.go:182] Loaded profile config "multinode-246036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 00:32:39.948937   44220 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 00:32:39.949437   44220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:32:39.949508   44220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:32:39.965254   44220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44911
	I0924 00:32:39.965839   44220 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:32:39.966463   44220 main.go:141] libmachine: Using API Version  1
	I0924 00:32:39.966492   44220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:32:39.966799   44220 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:32:39.966977   44220 main.go:141] libmachine: (multinode-246036) Calling .DriverName
	I0924 00:32:40.003475   44220 out.go:177] * Using the kvm2 driver based on existing profile
	I0924 00:32:40.004844   44220 start.go:297] selected driver: kvm2
	I0924 00:32:40.004862   44220 start.go:901] validating driver "kvm2" against &{Name:multinode-246036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-246036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.199 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.150 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.185 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 00:32:40.005023   44220 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 00:32:40.005431   44220 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 00:32:40.005520   44220 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19696-7623/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0924 00:32:40.020825   44220 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0924 00:32:40.021581   44220 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 00:32:40.021615   44220 cni.go:84] Creating CNI manager for ""
	I0924 00:32:40.021670   44220 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0924 00:32:40.021739   44220 start.go:340] cluster config:
	{Name:multinode-246036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-246036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.199 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.150 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.185 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 00:32:40.021904   44220 iso.go:125] acquiring lock: {Name:mk8a983e49e920eac32e0f79c34143c3d478115e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 00:32:40.023812   44220 out.go:177] * Starting "multinode-246036" primary control-plane node in "multinode-246036" cluster
	I0924 00:32:40.025116   44220 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 00:32:40.025187   44220 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0924 00:32:40.025202   44220 cache.go:56] Caching tarball of preloaded images
	I0924 00:32:40.025294   44220 preload.go:172] Found /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0924 00:32:40.025309   44220 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0924 00:32:40.025454   44220 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/multinode-246036/config.json ...
	I0924 00:32:40.025726   44220 start.go:360] acquireMachinesLock for multinode-246036: {Name:mkdd0eb053efc5a47fd01c8410d7f603dccb8d0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 00:32:40.025791   44220 start.go:364] duration metric: took 42.687µs to acquireMachinesLock for "multinode-246036"
	I0924 00:32:40.025812   44220 start.go:96] Skipping create...Using existing machine configuration
	I0924 00:32:40.025824   44220 fix.go:54] fixHost starting: 
	I0924 00:32:40.026117   44220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:32:40.026161   44220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:32:40.040677   44220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43741
	I0924 00:32:40.041099   44220 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:32:40.041586   44220 main.go:141] libmachine: Using API Version  1
	I0924 00:32:40.041606   44220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:32:40.041912   44220 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:32:40.042076   44220 main.go:141] libmachine: (multinode-246036) Calling .DriverName
	I0924 00:32:40.042213   44220 main.go:141] libmachine: (multinode-246036) Calling .GetState
	I0924 00:32:40.043793   44220 fix.go:112] recreateIfNeeded on multinode-246036: state=Running err=<nil>
	W0924 00:32:40.043814   44220 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 00:32:40.045897   44220 out.go:177] * Updating the running kvm2 "multinode-246036" VM ...
	I0924 00:32:40.047284   44220 machine.go:93] provisionDockerMachine start ...
	I0924 00:32:40.047326   44220 main.go:141] libmachine: (multinode-246036) Calling .DriverName
	I0924 00:32:40.047560   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHHostname
	I0924 00:32:40.050211   44220 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:32:40.050635   44220 main.go:141] libmachine: (multinode-246036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:54:2a", ip: ""} in network mk-multinode-246036: {Iface:virbr1 ExpiryTime:2024-09-24 01:27:24 +0000 UTC Type:0 Mac:52:54:00:a5:54:2a Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:multinode-246036 Clientid:01:52:54:00:a5:54:2a}
	I0924 00:32:40.050656   44220 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined IP address 192.168.39.199 and MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:32:40.050806   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHPort
	I0924 00:32:40.051004   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHKeyPath
	I0924 00:32:40.051163   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHKeyPath
	I0924 00:32:40.051294   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHUsername
	I0924 00:32:40.051484   44220 main.go:141] libmachine: Using SSH client type: native
	I0924 00:32:40.051743   44220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.199 22 <nil> <nil>}
	I0924 00:32:40.051756   44220 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 00:32:40.173150   44220 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-246036
	
	I0924 00:32:40.173187   44220 main.go:141] libmachine: (multinode-246036) Calling .GetMachineName
	I0924 00:32:40.173455   44220 buildroot.go:166] provisioning hostname "multinode-246036"
	I0924 00:32:40.173484   44220 main.go:141] libmachine: (multinode-246036) Calling .GetMachineName
	I0924 00:32:40.173693   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHHostname
	I0924 00:32:40.176892   44220 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:32:40.177279   44220 main.go:141] libmachine: (multinode-246036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:54:2a", ip: ""} in network mk-multinode-246036: {Iface:virbr1 ExpiryTime:2024-09-24 01:27:24 +0000 UTC Type:0 Mac:52:54:00:a5:54:2a Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:multinode-246036 Clientid:01:52:54:00:a5:54:2a}
	I0924 00:32:40.177326   44220 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined IP address 192.168.39.199 and MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:32:40.177471   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHPort
	I0924 00:32:40.177642   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHKeyPath
	I0924 00:32:40.177781   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHKeyPath
	I0924 00:32:40.177891   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHUsername
	I0924 00:32:40.178119   44220 main.go:141] libmachine: Using SSH client type: native
	I0924 00:32:40.178349   44220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.199 22 <nil> <nil>}
	I0924 00:32:40.178367   44220 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-246036 && echo "multinode-246036" | sudo tee /etc/hostname
	I0924 00:32:40.308737   44220 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-246036
	
	I0924 00:32:40.308766   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHHostname
	I0924 00:32:40.312014   44220 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:32:40.312372   44220 main.go:141] libmachine: (multinode-246036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:54:2a", ip: ""} in network mk-multinode-246036: {Iface:virbr1 ExpiryTime:2024-09-24 01:27:24 +0000 UTC Type:0 Mac:52:54:00:a5:54:2a Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:multinode-246036 Clientid:01:52:54:00:a5:54:2a}
	I0924 00:32:40.312396   44220 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined IP address 192.168.39.199 and MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:32:40.312605   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHPort
	I0924 00:32:40.312803   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHKeyPath
	I0924 00:32:40.312939   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHKeyPath
	I0924 00:32:40.313065   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHUsername
	I0924 00:32:40.313201   44220 main.go:141] libmachine: Using SSH client type: native
	I0924 00:32:40.313409   44220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.199 22 <nil> <nil>}
	I0924 00:32:40.313432   44220 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-246036' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-246036/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-246036' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 00:32:40.429366   44220 main.go:141] libmachine: SSH cmd err, output: <nil>: 
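The SSH snippet above sets the guest hostname and makes sure it resolves through /etc/hosts. A small sketch of how the result could be verified afterwards on the guest (hostname and 127.0.1.1 entry as configured by the commands above):

    # Confirm the hostname took effect and resolves locally
    hostname                                        # expected: multinode-246036
    grep -E '^127\.0\.1\.1|multinode-246036' /etc/hosts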
	I0924 00:32:40.429412   44220 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19696-7623/.minikube CaCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19696-7623/.minikube}
	I0924 00:32:40.429443   44220 buildroot.go:174] setting up certificates
	I0924 00:32:40.429456   44220 provision.go:84] configureAuth start
	I0924 00:32:40.429470   44220 main.go:141] libmachine: (multinode-246036) Calling .GetMachineName
	I0924 00:32:40.429787   44220 main.go:141] libmachine: (multinode-246036) Calling .GetIP
	I0924 00:32:40.432741   44220 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:32:40.433176   44220 main.go:141] libmachine: (multinode-246036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:54:2a", ip: ""} in network mk-multinode-246036: {Iface:virbr1 ExpiryTime:2024-09-24 01:27:24 +0000 UTC Type:0 Mac:52:54:00:a5:54:2a Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:multinode-246036 Clientid:01:52:54:00:a5:54:2a}
	I0924 00:32:40.433202   44220 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined IP address 192.168.39.199 and MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:32:40.433431   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHHostname
	I0924 00:32:40.436028   44220 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:32:40.436439   44220 main.go:141] libmachine: (multinode-246036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:54:2a", ip: ""} in network mk-multinode-246036: {Iface:virbr1 ExpiryTime:2024-09-24 01:27:24 +0000 UTC Type:0 Mac:52:54:00:a5:54:2a Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:multinode-246036 Clientid:01:52:54:00:a5:54:2a}
	I0924 00:32:40.436470   44220 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined IP address 192.168.39.199 and MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:32:40.436648   44220 provision.go:143] copyHostCerts
	I0924 00:32:40.436673   44220 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0924 00:32:40.436702   44220 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem, removing ...
	I0924 00:32:40.436718   44220 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0924 00:32:40.436786   44220 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem (1082 bytes)
	I0924 00:32:40.436875   44220 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0924 00:32:40.436893   44220 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem, removing ...
	I0924 00:32:40.436899   44220 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0924 00:32:40.436923   44220 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem (1123 bytes)
	I0924 00:32:40.436974   44220 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0924 00:32:40.436991   44220 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem, removing ...
	I0924 00:32:40.436997   44220 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0924 00:32:40.437022   44220 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem (1679 bytes)
	I0924 00:32:40.437079   44220 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem org=jenkins.multinode-246036 san=[127.0.0.1 192.168.39.199 localhost minikube multinode-246036]
	I0924 00:32:40.702282   44220 provision.go:177] copyRemoteCerts
	I0924 00:32:40.702344   44220 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 00:32:40.702368   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHHostname
	I0924 00:32:40.705236   44220 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:32:40.705624   44220 main.go:141] libmachine: (multinode-246036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:54:2a", ip: ""} in network mk-multinode-246036: {Iface:virbr1 ExpiryTime:2024-09-24 01:27:24 +0000 UTC Type:0 Mac:52:54:00:a5:54:2a Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:multinode-246036 Clientid:01:52:54:00:a5:54:2a}
	I0924 00:32:40.705660   44220 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined IP address 192.168.39.199 and MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:32:40.705886   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHPort
	I0924 00:32:40.706097   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHKeyPath
	I0924 00:32:40.706269   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHUsername
	I0924 00:32:40.706424   44220 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/multinode-246036/id_rsa Username:docker}
	I0924 00:32:40.795328   44220 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0924 00:32:40.795480   44220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0924 00:32:40.819951   44220 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0924 00:32:40.820023   44220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0924 00:32:40.850250   44220 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0924 00:32:40.850316   44220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
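copyRemoteCerts above pushes the CA plus the freshly generated server certificate and key into /etc/docker on the guest. A hedged sketch for checking that the server certificate carries the SANs listed in the provisioning step (127.0.0.1, 192.168.39.199, localhost, minikube, multinode-246036); openssl is assumed to be available on the guest:

    # Inspect the server certificate that was just copied over
    sudo openssl x509 -in /etc/docker/server.pem -noout -subject -dates
    sudo openssl x509 -in /etc/docker/server.pem -noout -text \
      | grep -A1 'Subject Alternative Name'         # should list the SANs from the log above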
	I0924 00:32:40.876371   44220 provision.go:87] duration metric: took 446.90231ms to configureAuth
	I0924 00:32:40.876397   44220 buildroot.go:189] setting minikube options for container-runtime
	I0924 00:32:40.876645   44220 config.go:182] Loaded profile config "multinode-246036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 00:32:40.876735   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHHostname
	I0924 00:32:40.879515   44220 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:32:40.879901   44220 main.go:141] libmachine: (multinode-246036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:54:2a", ip: ""} in network mk-multinode-246036: {Iface:virbr1 ExpiryTime:2024-09-24 01:27:24 +0000 UTC Type:0 Mac:52:54:00:a5:54:2a Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:multinode-246036 Clientid:01:52:54:00:a5:54:2a}
	I0924 00:32:40.879930   44220 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined IP address 192.168.39.199 and MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:32:40.880059   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHPort
	I0924 00:32:40.880259   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHKeyPath
	I0924 00:32:40.880457   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHKeyPath
	I0924 00:32:40.880618   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHUsername
	I0924 00:32:40.880784   44220 main.go:141] libmachine: Using SSH client type: native
	I0924 00:32:40.880981   44220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.199 22 <nil> <nil>}
	I0924 00:32:40.881003   44220 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 00:34:11.597746   44220 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 00:34:11.597777   44220 machine.go:96] duration metric: took 1m31.550470894s to provisionDockerMachine
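Almost all of the 1m31s reported for provisionDockerMachine is the tee plus `systemctl restart crio` command issued at 00:32:40 and returning at 00:34:11. A sketch for inspecting the drop-in it wrote, assuming (as minikube's buildroot guest appears to) that crio.service loads it via an EnvironmentFile directive:

    # Show the options minikube injected and how crio is expected to pick them up
    cat /etc/sysconfig/crio.minikube                # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    systemctl cat crio | grep -iE 'EnvironmentFile|CRIO_MINIKUBE_OPTIONS'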
	I0924 00:34:11.597790   44220 start.go:293] postStartSetup for "multinode-246036" (driver="kvm2")
	I0924 00:34:11.597800   44220 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 00:34:11.597817   44220 main.go:141] libmachine: (multinode-246036) Calling .DriverName
	I0924 00:34:11.598163   44220 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 00:34:11.598198   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHHostname
	I0924 00:34:11.601395   44220 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:34:11.601884   44220 main.go:141] libmachine: (multinode-246036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:54:2a", ip: ""} in network mk-multinode-246036: {Iface:virbr1 ExpiryTime:2024-09-24 01:27:24 +0000 UTC Type:0 Mac:52:54:00:a5:54:2a Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:multinode-246036 Clientid:01:52:54:00:a5:54:2a}
	I0924 00:34:11.601913   44220 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined IP address 192.168.39.199 and MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:34:11.602142   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHPort
	I0924 00:34:11.602358   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHKeyPath
	I0924 00:34:11.602511   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHUsername
	I0924 00:34:11.602645   44220 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/multinode-246036/id_rsa Username:docker}
	I0924 00:34:11.691639   44220 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 00:34:11.695641   44220 command_runner.go:130] > NAME=Buildroot
	I0924 00:34:11.695670   44220 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0924 00:34:11.695676   44220 command_runner.go:130] > ID=buildroot
	I0924 00:34:11.695682   44220 command_runner.go:130] > VERSION_ID=2023.02.9
	I0924 00:34:11.695690   44220 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0924 00:34:11.695738   44220 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 00:34:11.695754   44220 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/addons for local assets ...
	I0924 00:34:11.695823   44220 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/files for local assets ...
	I0924 00:34:11.696038   44220 filesync.go:149] local asset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> 147932.pem in /etc/ssl/certs
	I0924 00:34:11.696060   44220 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> /etc/ssl/certs/147932.pem
	I0924 00:34:11.696231   44220 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 00:34:11.705072   44220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /etc/ssl/certs/147932.pem (1708 bytes)
	I0924 00:34:11.727530   44220 start.go:296] duration metric: took 129.72707ms for postStartSetup
	I0924 00:34:11.727573   44220 fix.go:56] duration metric: took 1m31.701750109s for fixHost
	I0924 00:34:11.727603   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHHostname
	I0924 00:34:11.730147   44220 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:34:11.730774   44220 main.go:141] libmachine: (multinode-246036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:54:2a", ip: ""} in network mk-multinode-246036: {Iface:virbr1 ExpiryTime:2024-09-24 01:27:24 +0000 UTC Type:0 Mac:52:54:00:a5:54:2a Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:multinode-246036 Clientid:01:52:54:00:a5:54:2a}
	I0924 00:34:11.730808   44220 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined IP address 192.168.39.199 and MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:34:11.731028   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHPort
	I0924 00:34:11.731183   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHKeyPath
	I0924 00:34:11.731328   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHKeyPath
	I0924 00:34:11.731465   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHUsername
	I0924 00:34:11.731623   44220 main.go:141] libmachine: Using SSH client type: native
	I0924 00:34:11.731834   44220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.199 22 <nil> <nil>}
	I0924 00:34:11.731849   44220 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 00:34:11.844908   44220 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727138051.820284302
	
	I0924 00:34:11.844936   44220 fix.go:216] guest clock: 1727138051.820284302
	I0924 00:34:11.844945   44220 fix.go:229] Guest: 2024-09-24 00:34:11.820284302 +0000 UTC Remote: 2024-09-24 00:34:11.72757903 +0000 UTC m=+91.827986289 (delta=92.705272ms)
	I0924 00:34:11.844973   44220 fix.go:200] guest clock delta is within tolerance: 92.705272ms
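The clock check above compares the guest's `date +%s.%N` output against the host time and accepts the ~92ms delta as within tolerance. A minimal stand-alone sketch of the same comparison (connection details taken from the sshutil line above; purely illustrative):

    # Compare guest and host wall clocks; a small delta is expected
    GUEST=$(ssh -i /home/jenkins/minikube-integration/19696-7623/.minikube/machines/multinode-246036/id_rsa \
      docker@192.168.39.199 'date +%s.%N')
    HOST=$(date +%s.%N)
    awk -v h="$HOST" -v g="$GUEST" 'BEGIN { printf "delta: %.3fs\n", h - g }'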
	I0924 00:34:11.844979   44220 start.go:83] releasing machines lock for "multinode-246036", held for 1m31.819175531s
	I0924 00:34:11.845001   44220 main.go:141] libmachine: (multinode-246036) Calling .DriverName
	I0924 00:34:11.845332   44220 main.go:141] libmachine: (multinode-246036) Calling .GetIP
	I0924 00:34:11.848206   44220 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:34:11.848578   44220 main.go:141] libmachine: (multinode-246036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:54:2a", ip: ""} in network mk-multinode-246036: {Iface:virbr1 ExpiryTime:2024-09-24 01:27:24 +0000 UTC Type:0 Mac:52:54:00:a5:54:2a Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:multinode-246036 Clientid:01:52:54:00:a5:54:2a}
	I0924 00:34:11.848605   44220 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined IP address 192.168.39.199 and MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:34:11.848760   44220 main.go:141] libmachine: (multinode-246036) Calling .DriverName
	I0924 00:34:11.849206   44220 main.go:141] libmachine: (multinode-246036) Calling .DriverName
	I0924 00:34:11.849360   44220 main.go:141] libmachine: (multinode-246036) Calling .DriverName
	I0924 00:34:11.849456   44220 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 00:34:11.849505   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHHostname
	I0924 00:34:11.849565   44220 ssh_runner.go:195] Run: cat /version.json
	I0924 00:34:11.849589   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHHostname
	I0924 00:34:11.852056   44220 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:34:11.852237   44220 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:34:11.852485   44220 main.go:141] libmachine: (multinode-246036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:54:2a", ip: ""} in network mk-multinode-246036: {Iface:virbr1 ExpiryTime:2024-09-24 01:27:24 +0000 UTC Type:0 Mac:52:54:00:a5:54:2a Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:multinode-246036 Clientid:01:52:54:00:a5:54:2a}
	I0924 00:34:11.852518   44220 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined IP address 192.168.39.199 and MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:34:11.852668   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHPort
	I0924 00:34:11.852818   44220 main.go:141] libmachine: (multinode-246036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:54:2a", ip: ""} in network mk-multinode-246036: {Iface:virbr1 ExpiryTime:2024-09-24 01:27:24 +0000 UTC Type:0 Mac:52:54:00:a5:54:2a Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:multinode-246036 Clientid:01:52:54:00:a5:54:2a}
	I0924 00:34:11.852843   44220 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined IP address 192.168.39.199 and MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:34:11.852822   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHKeyPath
	I0924 00:34:11.853018   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHUsername
	I0924 00:34:11.853019   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHPort
	I0924 00:34:11.853199   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHKeyPath
	I0924 00:34:11.853202   44220 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/multinode-246036/id_rsa Username:docker}
	I0924 00:34:11.853376   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHUsername
	I0924 00:34:11.853529   44220 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/multinode-246036/id_rsa Username:docker}
	I0924 00:34:11.933386   44220 command_runner.go:130] > {"iso_version": "v1.34.0-1727108440-19696", "kicbase_version": "v0.0.45-1726784731-19672", "minikube_version": "v1.34.0", "commit": "09d18ff16db81cf1cb24cd6e95f197b54c5f843c"}
	I0924 00:34:11.933563   44220 ssh_runner.go:195] Run: systemctl --version
	I0924 00:34:11.974989   44220 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0924 00:34:11.975682   44220 command_runner.go:130] > systemd 252 (252)
	I0924 00:34:11.975726   44220 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0924 00:34:11.975796   44220 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 00:34:12.138827   44220 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0924 00:34:12.145448   44220 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0924 00:34:12.145882   44220 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 00:34:12.145948   44220 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 00:34:12.154615   44220 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
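The `find` command above is logged with its shell quoting stripped. With quoting restored, an equivalent invocation that renames any bridge/podman CNI configs so cri-o ignores them might look roughly like this:

    # Disable conflicting bridge/podman CNI configs by renaming them to *.mk_disabled
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;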
	I0924 00:34:12.154643   44220 start.go:495] detecting cgroup driver to use...
	I0924 00:34:12.154720   44220 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 00:34:12.171123   44220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 00:34:12.184526   44220 docker.go:217] disabling cri-docker service (if available) ...
	I0924 00:34:12.184588   44220 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 00:34:12.197873   44220 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 00:34:12.211011   44220 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 00:34:12.365119   44220 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 00:34:12.511923   44220 docker.go:233] disabling docker service ...
	I0924 00:34:12.512004   44220 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 00:34:12.527671   44220 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 00:34:12.540663   44220 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 00:34:12.676686   44220 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 00:34:12.814203   44220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
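After stopping containerd and masking cri-docker and docker above, cri-o should be the only container runtime left active on the guest. A quick sketch to confirm that:

    # crio should report "active"; docker and containerd should be inactive or masked
    systemctl is-active crio docker containerd cri-docker.service || true
    systemctl is-enabled docker.service cri-docker.service 2>/dev/null || true   # expect "masked"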
	I0924 00:34:12.827957   44220 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 00:34:12.845948   44220 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
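The step above points crictl at the cri-o socket through /etc/crictl.yaml. A short sketch for confirming that crictl really talks to cri-o:

    cat /etc/crictl.yaml                            # runtime-endpoint: unix:///var/run/crio/crio.sock
    sudo crictl version                             # RuntimeName should be cri-o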
	I0924 00:34:12.846434   44220 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 00:34:12.846503   44220 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:34:12.856583   44220 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 00:34:12.856640   44220 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:34:12.866577   44220 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:34:12.876324   44220 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:34:12.885996   44220 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 00:34:12.896677   44220 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:34:12.906102   44220 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:34:12.916474   44220 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:34:12.926423   44220 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 00:34:12.935892   44220 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0924 00:34:12.936009   44220 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 00:34:12.945361   44220 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 00:34:13.101417   44220 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 00:34:17.018630   44220 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.917174045s)
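The string of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf before the restart. A sketch to confirm the drop-in ended up with the values the log configures (pause image, cgroupfs driver, conmon cgroup, unprivileged-port sysctl) and that IP forwarding stayed enabled:

    # Expected after the edits above:
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0" inside default_sysctls
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    cat /proc/sys/net/ipv4/ip_forward               # should print 1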
	I0924 00:34:17.018669   44220 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 00:34:17.018727   44220 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 00:34:17.023611   44220 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0924 00:34:17.023638   44220 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0924 00:34:17.023648   44220 command_runner.go:130] > Device: 0,22	Inode: 1385        Links: 1
	I0924 00:34:17.023658   44220 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0924 00:34:17.023666   44220 command_runner.go:130] > Access: 2024-09-24 00:34:16.919541474 +0000
	I0924 00:34:17.023674   44220 command_runner.go:130] > Modify: 2024-09-24 00:34:16.882539011 +0000
	I0924 00:34:17.023682   44220 command_runner.go:130] > Change: 2024-09-24 00:34:16.882539011 +0000
	I0924 00:34:17.023693   44220 command_runner.go:130] >  Birth: -
	I0924 00:34:17.023715   44220 start.go:563] Will wait 60s for crictl version
	I0924 00:34:17.023760   44220 ssh_runner.go:195] Run: which crictl
	I0924 00:34:17.027738   44220 command_runner.go:130] > /usr/bin/crictl
	I0924 00:34:17.027810   44220 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 00:34:17.068130   44220 command_runner.go:130] > Version:  0.1.0
	I0924 00:34:17.068158   44220 command_runner.go:130] > RuntimeName:  cri-o
	I0924 00:34:17.068164   44220 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0924 00:34:17.068171   44220 command_runner.go:130] > RuntimeApiVersion:  v1
	I0924 00:34:17.069157   44220 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
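After the restart, the code waits up to 60s each for the crio socket to appear and for crictl to answer a version query. A minimal polling sketch in the same spirit (timeout value taken from the log):

    # Poll for the cri-o socket and a responsive CRI endpoint, up to 60s each
    timeout 60 sh -c 'until [ -S /var/run/crio/crio.sock ]; do sleep 1; done'
    timeout 60 sh -c 'until sudo crictl version >/dev/null 2>&1; do sleep 1; done'
    sudo crictl version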
	I0924 00:34:17.069225   44220 ssh_runner.go:195] Run: crio --version
	I0924 00:34:17.099071   44220 command_runner.go:130] > crio version 1.29.1
	I0924 00:34:17.099100   44220 command_runner.go:130] > Version:        1.29.1
	I0924 00:34:17.099109   44220 command_runner.go:130] > GitCommit:      unknown
	I0924 00:34:17.099120   44220 command_runner.go:130] > GitCommitDate:  unknown
	I0924 00:34:17.099126   44220 command_runner.go:130] > GitTreeState:   clean
	I0924 00:34:17.099134   44220 command_runner.go:130] > BuildDate:      2024-09-23T21:42:27Z
	I0924 00:34:17.099140   44220 command_runner.go:130] > GoVersion:      go1.21.6
	I0924 00:34:17.099145   44220 command_runner.go:130] > Compiler:       gc
	I0924 00:34:17.099151   44220 command_runner.go:130] > Platform:       linux/amd64
	I0924 00:34:17.099157   44220 command_runner.go:130] > Linkmode:       dynamic
	I0924 00:34:17.099180   44220 command_runner.go:130] > BuildTags:      
	I0924 00:34:17.099192   44220 command_runner.go:130] >   containers_image_ostree_stub
	I0924 00:34:17.099199   44220 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0924 00:34:17.099204   44220 command_runner.go:130] >   btrfs_noversion
	I0924 00:34:17.099212   44220 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0924 00:34:17.099223   44220 command_runner.go:130] >   libdm_no_deferred_remove
	I0924 00:34:17.099229   44220 command_runner.go:130] >   seccomp
	I0924 00:34:17.099240   44220 command_runner.go:130] > LDFlags:          unknown
	I0924 00:34:17.099248   44220 command_runner.go:130] > SeccompEnabled:   true
	I0924 00:34:17.099253   44220 command_runner.go:130] > AppArmorEnabled:  false
	I0924 00:34:17.099320   44220 ssh_runner.go:195] Run: crio --version
	I0924 00:34:17.134179   44220 command_runner.go:130] > crio version 1.29.1
	I0924 00:34:17.134210   44220 command_runner.go:130] > Version:        1.29.1
	I0924 00:34:17.134220   44220 command_runner.go:130] > GitCommit:      unknown
	I0924 00:34:17.134228   44220 command_runner.go:130] > GitCommitDate:  unknown
	I0924 00:34:17.134236   44220 command_runner.go:130] > GitTreeState:   clean
	I0924 00:34:17.134245   44220 command_runner.go:130] > BuildDate:      2024-09-23T21:42:27Z
	I0924 00:34:17.134259   44220 command_runner.go:130] > GoVersion:      go1.21.6
	I0924 00:34:17.134271   44220 command_runner.go:130] > Compiler:       gc
	I0924 00:34:17.134279   44220 command_runner.go:130] > Platform:       linux/amd64
	I0924 00:34:17.134287   44220 command_runner.go:130] > Linkmode:       dynamic
	I0924 00:34:17.134299   44220 command_runner.go:130] > BuildTags:      
	I0924 00:34:17.134309   44220 command_runner.go:130] >   containers_image_ostree_stub
	I0924 00:34:17.134315   44220 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0924 00:34:17.134320   44220 command_runner.go:130] >   btrfs_noversion
	I0924 00:34:17.134328   44220 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0924 00:34:17.134342   44220 command_runner.go:130] >   libdm_no_deferred_remove
	I0924 00:34:17.134352   44220 command_runner.go:130] >   seccomp
	I0924 00:34:17.134360   44220 command_runner.go:130] > LDFlags:          unknown
	I0924 00:34:17.134368   44220 command_runner.go:130] > SeccompEnabled:   true
	I0924 00:34:17.134379   44220 command_runner.go:130] > AppArmorEnabled:  false
	I0924 00:34:17.136353   44220 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 00:34:17.137483   44220 main.go:141] libmachine: (multinode-246036) Calling .GetIP
	I0924 00:34:17.140139   44220 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:34:17.140497   44220 main.go:141] libmachine: (multinode-246036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:54:2a", ip: ""} in network mk-multinode-246036: {Iface:virbr1 ExpiryTime:2024-09-24 01:27:24 +0000 UTC Type:0 Mac:52:54:00:a5:54:2a Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:multinode-246036 Clientid:01:52:54:00:a5:54:2a}
	I0924 00:34:17.140526   44220 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined IP address 192.168.39.199 and MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:34:17.140861   44220 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0924 00:34:17.144846   44220 command_runner.go:130] > 192.168.39.1	host.minikube.internal
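The grep above checks that host.minikube.internal already resolves to the host-side gateway 192.168.39.1 inside the guest. A hedged sketch of the fallback one would apply by hand if the entry were missing (this mirrors the hosts entry minikube maintains, not a documented command):

    # Ensure host.minikube.internal points at the libvirt gateway seen by the guest
    grep -q 'host.minikube.internal' /etc/hosts || \
      echo '192.168.39.1 host.minikube.internal' | sudo tee -a /etc/hosts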
	I0924 00:34:17.144984   44220 kubeadm.go:883] updating cluster {Name:multinode-246036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
31.1 ClusterName:multinode-246036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.199 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.150 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.185 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadg
et:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 00:34:17.145132   44220 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 00:34:17.145181   44220 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 00:34:17.184105   44220 command_runner.go:130] > {
	I0924 00:34:17.184134   44220 command_runner.go:130] >   "images": [
	I0924 00:34:17.184141   44220 command_runner.go:130] >     {
	I0924 00:34:17.184153   44220 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0924 00:34:17.184158   44220 command_runner.go:130] >       "repoTags": [
	I0924 00:34:17.184164   44220 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0924 00:34:17.184168   44220 command_runner.go:130] >       ],
	I0924 00:34:17.184173   44220 command_runner.go:130] >       "repoDigests": [
	I0924 00:34:17.184180   44220 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0924 00:34:17.184188   44220 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0924 00:34:17.184192   44220 command_runner.go:130] >       ],
	I0924 00:34:17.184197   44220 command_runner.go:130] >       "size": "87190579",
	I0924 00:34:17.184201   44220 command_runner.go:130] >       "uid": null,
	I0924 00:34:17.184208   44220 command_runner.go:130] >       "username": "",
	I0924 00:34:17.184222   44220 command_runner.go:130] >       "spec": null,
	I0924 00:34:17.184233   44220 command_runner.go:130] >       "pinned": false
	I0924 00:34:17.184242   44220 command_runner.go:130] >     },
	I0924 00:34:17.184247   44220 command_runner.go:130] >     {
	I0924 00:34:17.184257   44220 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0924 00:34:17.184264   44220 command_runner.go:130] >       "repoTags": [
	I0924 00:34:17.184276   44220 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0924 00:34:17.184282   44220 command_runner.go:130] >       ],
	I0924 00:34:17.184288   44220 command_runner.go:130] >       "repoDigests": [
	I0924 00:34:17.184299   44220 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0924 00:34:17.184314   44220 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0924 00:34:17.184325   44220 command_runner.go:130] >       ],
	I0924 00:34:17.184350   44220 command_runner.go:130] >       "size": "1363676",
	I0924 00:34:17.184360   44220 command_runner.go:130] >       "uid": null,
	I0924 00:34:17.184372   44220 command_runner.go:130] >       "username": "",
	I0924 00:34:17.184381   44220 command_runner.go:130] >       "spec": null,
	I0924 00:34:17.184389   44220 command_runner.go:130] >       "pinned": false
	I0924 00:34:17.184393   44220 command_runner.go:130] >     },
	I0924 00:34:17.184397   44220 command_runner.go:130] >     {
	I0924 00:34:17.184403   44220 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0924 00:34:17.184412   44220 command_runner.go:130] >       "repoTags": [
	I0924 00:34:17.184423   44220 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0924 00:34:17.184432   44220 command_runner.go:130] >       ],
	I0924 00:34:17.184441   44220 command_runner.go:130] >       "repoDigests": [
	I0924 00:34:17.184456   44220 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0924 00:34:17.184471   44220 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0924 00:34:17.184480   44220 command_runner.go:130] >       ],
	I0924 00:34:17.184488   44220 command_runner.go:130] >       "size": "31470524",
	I0924 00:34:17.184492   44220 command_runner.go:130] >       "uid": null,
	I0924 00:34:17.184501   44220 command_runner.go:130] >       "username": "",
	I0924 00:34:17.184510   44220 command_runner.go:130] >       "spec": null,
	I0924 00:34:17.184520   44220 command_runner.go:130] >       "pinned": false
	I0924 00:34:17.184529   44220 command_runner.go:130] >     },
	I0924 00:34:17.184537   44220 command_runner.go:130] >     {
	I0924 00:34:17.184549   44220 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0924 00:34:17.184559   44220 command_runner.go:130] >       "repoTags": [
	I0924 00:34:17.184569   44220 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0924 00:34:17.184575   44220 command_runner.go:130] >       ],
	I0924 00:34:17.184579   44220 command_runner.go:130] >       "repoDigests": [
	I0924 00:34:17.184591   44220 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0924 00:34:17.184610   44220 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0924 00:34:17.184622   44220 command_runner.go:130] >       ],
	I0924 00:34:17.184631   44220 command_runner.go:130] >       "size": "63273227",
	I0924 00:34:17.184641   44220 command_runner.go:130] >       "uid": null,
	I0924 00:34:17.184650   44220 command_runner.go:130] >       "username": "nonroot",
	I0924 00:34:17.184659   44220 command_runner.go:130] >       "spec": null,
	I0924 00:34:17.184666   44220 command_runner.go:130] >       "pinned": false
	I0924 00:34:17.184672   44220 command_runner.go:130] >     },
	I0924 00:34:17.184681   44220 command_runner.go:130] >     {
	I0924 00:34:17.184691   44220 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0924 00:34:17.184701   44220 command_runner.go:130] >       "repoTags": [
	I0924 00:34:17.184712   44220 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0924 00:34:17.184720   44220 command_runner.go:130] >       ],
	I0924 00:34:17.184730   44220 command_runner.go:130] >       "repoDigests": [
	I0924 00:34:17.184742   44220 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0924 00:34:17.184753   44220 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0924 00:34:17.184761   44220 command_runner.go:130] >       ],
	I0924 00:34:17.184771   44220 command_runner.go:130] >       "size": "149009664",
	I0924 00:34:17.184780   44220 command_runner.go:130] >       "uid": {
	I0924 00:34:17.184787   44220 command_runner.go:130] >         "value": "0"
	I0924 00:34:17.184795   44220 command_runner.go:130] >       },
	I0924 00:34:17.184804   44220 command_runner.go:130] >       "username": "",
	I0924 00:34:17.184812   44220 command_runner.go:130] >       "spec": null,
	I0924 00:34:17.184821   44220 command_runner.go:130] >       "pinned": false
	I0924 00:34:17.184828   44220 command_runner.go:130] >     },
	I0924 00:34:17.184830   44220 command_runner.go:130] >     {
	I0924 00:34:17.184839   44220 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0924 00:34:17.184848   44220 command_runner.go:130] >       "repoTags": [
	I0924 00:34:17.184859   44220 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0924 00:34:17.184868   44220 command_runner.go:130] >       ],
	I0924 00:34:17.184877   44220 command_runner.go:130] >       "repoDigests": [
	I0924 00:34:17.184889   44220 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0924 00:34:17.184903   44220 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0924 00:34:17.184911   44220 command_runner.go:130] >       ],
	I0924 00:34:17.184919   44220 command_runner.go:130] >       "size": "95237600",
	I0924 00:34:17.184925   44220 command_runner.go:130] >       "uid": {
	I0924 00:34:17.184934   44220 command_runner.go:130] >         "value": "0"
	I0924 00:34:17.184942   44220 command_runner.go:130] >       },
	I0924 00:34:17.184952   44220 command_runner.go:130] >       "username": "",
	I0924 00:34:17.184961   44220 command_runner.go:130] >       "spec": null,
	I0924 00:34:17.184970   44220 command_runner.go:130] >       "pinned": false
	I0924 00:34:17.184975   44220 command_runner.go:130] >     },
	I0924 00:34:17.184984   44220 command_runner.go:130] >     {
	I0924 00:34:17.184996   44220 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0924 00:34:17.185002   44220 command_runner.go:130] >       "repoTags": [
	I0924 00:34:17.185010   44220 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0924 00:34:17.185018   44220 command_runner.go:130] >       ],
	I0924 00:34:17.185034   44220 command_runner.go:130] >       "repoDigests": [
	I0924 00:34:17.185048   44220 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0924 00:34:17.185066   44220 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0924 00:34:17.185075   44220 command_runner.go:130] >       ],
	I0924 00:34:17.185081   44220 command_runner.go:130] >       "size": "89437508",
	I0924 00:34:17.185088   44220 command_runner.go:130] >       "uid": {
	I0924 00:34:17.185094   44220 command_runner.go:130] >         "value": "0"
	I0924 00:34:17.185102   44220 command_runner.go:130] >       },
	I0924 00:34:17.185111   44220 command_runner.go:130] >       "username": "",
	I0924 00:34:17.185121   44220 command_runner.go:130] >       "spec": null,
	I0924 00:34:17.185130   44220 command_runner.go:130] >       "pinned": false
	I0924 00:34:17.185138   44220 command_runner.go:130] >     },
	I0924 00:34:17.185146   44220 command_runner.go:130] >     {
	I0924 00:34:17.185157   44220 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0924 00:34:17.185165   44220 command_runner.go:130] >       "repoTags": [
	I0924 00:34:17.185173   44220 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0924 00:34:17.185177   44220 command_runner.go:130] >       ],
	I0924 00:34:17.185185   44220 command_runner.go:130] >       "repoDigests": [
	I0924 00:34:17.185207   44220 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0924 00:34:17.185221   44220 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0924 00:34:17.185231   44220 command_runner.go:130] >       ],
	I0924 00:34:17.185241   44220 command_runner.go:130] >       "size": "92733849",
	I0924 00:34:17.185250   44220 command_runner.go:130] >       "uid": null,
	I0924 00:34:17.185257   44220 command_runner.go:130] >       "username": "",
	I0924 00:34:17.185261   44220 command_runner.go:130] >       "spec": null,
	I0924 00:34:17.185266   44220 command_runner.go:130] >       "pinned": false
	I0924 00:34:17.185289   44220 command_runner.go:130] >     },
	I0924 00:34:17.185295   44220 command_runner.go:130] >     {
	I0924 00:34:17.185305   44220 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0924 00:34:17.185312   44220 command_runner.go:130] >       "repoTags": [
	I0924 00:34:17.185319   44220 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0924 00:34:17.185327   44220 command_runner.go:130] >       ],
	I0924 00:34:17.185337   44220 command_runner.go:130] >       "repoDigests": [
	I0924 00:34:17.185360   44220 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0924 00:34:17.185376   44220 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0924 00:34:17.185384   44220 command_runner.go:130] >       ],
	I0924 00:34:17.185391   44220 command_runner.go:130] >       "size": "68420934",
	I0924 00:34:17.185397   44220 command_runner.go:130] >       "uid": {
	I0924 00:34:17.185404   44220 command_runner.go:130] >         "value": "0"
	I0924 00:34:17.185412   44220 command_runner.go:130] >       },
	I0924 00:34:17.185419   44220 command_runner.go:130] >       "username": "",
	I0924 00:34:17.185426   44220 command_runner.go:130] >       "spec": null,
	I0924 00:34:17.185430   44220 command_runner.go:130] >       "pinned": false
	I0924 00:34:17.185433   44220 command_runner.go:130] >     },
	I0924 00:34:17.185439   44220 command_runner.go:130] >     {
	I0924 00:34:17.185452   44220 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0924 00:34:17.185463   44220 command_runner.go:130] >       "repoTags": [
	I0924 00:34:17.185470   44220 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0924 00:34:17.185474   44220 command_runner.go:130] >       ],
	I0924 00:34:17.185480   44220 command_runner.go:130] >       "repoDigests": [
	I0924 00:34:17.185490   44220 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0924 00:34:17.185502   44220 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0924 00:34:17.185511   44220 command_runner.go:130] >       ],
	I0924 00:34:17.185521   44220 command_runner.go:130] >       "size": "742080",
	I0924 00:34:17.185529   44220 command_runner.go:130] >       "uid": {
	I0924 00:34:17.185534   44220 command_runner.go:130] >         "value": "65535"
	I0924 00:34:17.185540   44220 command_runner.go:130] >       },
	I0924 00:34:17.185548   44220 command_runner.go:130] >       "username": "",
	I0924 00:34:17.185554   44220 command_runner.go:130] >       "spec": null,
	I0924 00:34:17.185561   44220 command_runner.go:130] >       "pinned": true
	I0924 00:34:17.185567   44220 command_runner.go:130] >     }
	I0924 00:34:17.185573   44220 command_runner.go:130] >   ]
	I0924 00:34:17.185578   44220 command_runner.go:130] > }
	I0924 00:34:17.185821   44220 crio.go:514] all images are preloaded for cri-o runtime.
	I0924 00:34:17.185836   44220 crio.go:433] Images already preloaded, skipping extraction
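The image listing above is what lets minikube conclude that everything needed for v1.31.1 is already present, so the preload tarball is not extracted again. A small sketch for pulling just the tags out of the same JSON (jq is assumed to be available wherever the output is inspected, not necessarily on the guest):

    # List only the repo tags reported by cri-o
    sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort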
	I0924 00:34:17.185906   44220 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 00:34:17.216110   44220 command_runner.go:130] > {
	I0924 00:34:17.216132   44220 command_runner.go:130] >   "images": [
	I0924 00:34:17.216139   44220 command_runner.go:130] >     {
	I0924 00:34:17.216150   44220 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0924 00:34:17.216157   44220 command_runner.go:130] >       "repoTags": [
	I0924 00:34:17.216164   44220 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0924 00:34:17.216168   44220 command_runner.go:130] >       ],
	I0924 00:34:17.216175   44220 command_runner.go:130] >       "repoDigests": [
	I0924 00:34:17.216185   44220 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0924 00:34:17.216196   44220 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0924 00:34:17.216205   44220 command_runner.go:130] >       ],
	I0924 00:34:17.216213   44220 command_runner.go:130] >       "size": "87190579",
	I0924 00:34:17.216220   44220 command_runner.go:130] >       "uid": null,
	I0924 00:34:17.216226   44220 command_runner.go:130] >       "username": "",
	I0924 00:34:17.216250   44220 command_runner.go:130] >       "spec": null,
	I0924 00:34:17.216290   44220 command_runner.go:130] >       "pinned": false
	I0924 00:34:17.216299   44220 command_runner.go:130] >     },
	I0924 00:34:17.216306   44220 command_runner.go:130] >     {
	I0924 00:34:17.216316   44220 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0924 00:34:17.216335   44220 command_runner.go:130] >       "repoTags": [
	I0924 00:34:17.216346   44220 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0924 00:34:17.216354   44220 command_runner.go:130] >       ],
	I0924 00:34:17.216362   44220 command_runner.go:130] >       "repoDigests": [
	I0924 00:34:17.216374   44220 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0924 00:34:17.216387   44220 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0924 00:34:17.216394   44220 command_runner.go:130] >       ],
	I0924 00:34:17.216405   44220 command_runner.go:130] >       "size": "1363676",
	I0924 00:34:17.216413   44220 command_runner.go:130] >       "uid": null,
	I0924 00:34:17.216424   44220 command_runner.go:130] >       "username": "",
	I0924 00:34:17.216433   44220 command_runner.go:130] >       "spec": null,
	I0924 00:34:17.216445   44220 command_runner.go:130] >       "pinned": false
	I0924 00:34:17.216456   44220 command_runner.go:130] >     },
	I0924 00:34:17.216463   44220 command_runner.go:130] >     {
	I0924 00:34:17.216476   44220 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0924 00:34:17.216486   44220 command_runner.go:130] >       "repoTags": [
	I0924 00:34:17.216500   44220 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0924 00:34:17.216512   44220 command_runner.go:130] >       ],
	I0924 00:34:17.216518   44220 command_runner.go:130] >       "repoDigests": [
	I0924 00:34:17.216531   44220 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0924 00:34:17.216547   44220 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0924 00:34:17.216556   44220 command_runner.go:130] >       ],
	I0924 00:34:17.216564   44220 command_runner.go:130] >       "size": "31470524",
	I0924 00:34:17.216573   44220 command_runner.go:130] >       "uid": null,
	I0924 00:34:17.216581   44220 command_runner.go:130] >       "username": "",
	I0924 00:34:17.216590   44220 command_runner.go:130] >       "spec": null,
	I0924 00:34:17.216597   44220 command_runner.go:130] >       "pinned": false
	I0924 00:34:17.216605   44220 command_runner.go:130] >     },
	I0924 00:34:17.216610   44220 command_runner.go:130] >     {
	I0924 00:34:17.216621   44220 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0924 00:34:17.216630   44220 command_runner.go:130] >       "repoTags": [
	I0924 00:34:17.216639   44220 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0924 00:34:17.216648   44220 command_runner.go:130] >       ],
	I0924 00:34:17.216655   44220 command_runner.go:130] >       "repoDigests": [
	I0924 00:34:17.216671   44220 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0924 00:34:17.216690   44220 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0924 00:34:17.216698   44220 command_runner.go:130] >       ],
	I0924 00:34:17.216706   44220 command_runner.go:130] >       "size": "63273227",
	I0924 00:34:17.216715   44220 command_runner.go:130] >       "uid": null,
	I0924 00:34:17.216724   44220 command_runner.go:130] >       "username": "nonroot",
	I0924 00:34:17.216738   44220 command_runner.go:130] >       "spec": null,
	I0924 00:34:17.216749   44220 command_runner.go:130] >       "pinned": false
	I0924 00:34:17.216756   44220 command_runner.go:130] >     },
	I0924 00:34:17.216765   44220 command_runner.go:130] >     {
	I0924 00:34:17.216776   44220 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0924 00:34:17.216785   44220 command_runner.go:130] >       "repoTags": [
	I0924 00:34:17.216796   44220 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0924 00:34:17.216805   44220 command_runner.go:130] >       ],
	I0924 00:34:17.216811   44220 command_runner.go:130] >       "repoDigests": [
	I0924 00:34:17.216826   44220 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0924 00:34:17.216841   44220 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0924 00:34:17.216849   44220 command_runner.go:130] >       ],
	I0924 00:34:17.216857   44220 command_runner.go:130] >       "size": "149009664",
	I0924 00:34:17.216866   44220 command_runner.go:130] >       "uid": {
	I0924 00:34:17.216873   44220 command_runner.go:130] >         "value": "0"
	I0924 00:34:17.216881   44220 command_runner.go:130] >       },
	I0924 00:34:17.216889   44220 command_runner.go:130] >       "username": "",
	I0924 00:34:17.216899   44220 command_runner.go:130] >       "spec": null,
	I0924 00:34:17.216908   44220 command_runner.go:130] >       "pinned": false
	I0924 00:34:17.216914   44220 command_runner.go:130] >     },
	I0924 00:34:17.216923   44220 command_runner.go:130] >     {
	I0924 00:34:17.216935   44220 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0924 00:34:17.216945   44220 command_runner.go:130] >       "repoTags": [
	I0924 00:34:17.216955   44220 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0924 00:34:17.216963   44220 command_runner.go:130] >       ],
	I0924 00:34:17.216971   44220 command_runner.go:130] >       "repoDigests": [
	I0924 00:34:17.216986   44220 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0924 00:34:17.217000   44220 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0924 00:34:17.217009   44220 command_runner.go:130] >       ],
	I0924 00:34:17.217016   44220 command_runner.go:130] >       "size": "95237600",
	I0924 00:34:17.217025   44220 command_runner.go:130] >       "uid": {
	I0924 00:34:17.217032   44220 command_runner.go:130] >         "value": "0"
	I0924 00:34:17.217041   44220 command_runner.go:130] >       },
	I0924 00:34:17.217049   44220 command_runner.go:130] >       "username": "",
	I0924 00:34:17.217058   44220 command_runner.go:130] >       "spec": null,
	I0924 00:34:17.217066   44220 command_runner.go:130] >       "pinned": false
	I0924 00:34:17.217074   44220 command_runner.go:130] >     },
	I0924 00:34:17.217081   44220 command_runner.go:130] >     {
	I0924 00:34:17.217093   44220 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0924 00:34:17.217101   44220 command_runner.go:130] >       "repoTags": [
	I0924 00:34:17.217111   44220 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0924 00:34:17.217120   44220 command_runner.go:130] >       ],
	I0924 00:34:17.217128   44220 command_runner.go:130] >       "repoDigests": [
	I0924 00:34:17.217144   44220 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0924 00:34:17.217159   44220 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0924 00:34:17.217170   44220 command_runner.go:130] >       ],
	I0924 00:34:17.217180   44220 command_runner.go:130] >       "size": "89437508",
	I0924 00:34:17.217188   44220 command_runner.go:130] >       "uid": {
	I0924 00:34:17.217198   44220 command_runner.go:130] >         "value": "0"
	I0924 00:34:17.217207   44220 command_runner.go:130] >       },
	I0924 00:34:17.217214   44220 command_runner.go:130] >       "username": "",
	I0924 00:34:17.217225   44220 command_runner.go:130] >       "spec": null,
	I0924 00:34:17.217236   44220 command_runner.go:130] >       "pinned": false
	I0924 00:34:17.217245   44220 command_runner.go:130] >     },
	I0924 00:34:17.217251   44220 command_runner.go:130] >     {
	I0924 00:34:17.217262   44220 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0924 00:34:17.217276   44220 command_runner.go:130] >       "repoTags": [
	I0924 00:34:17.217286   44220 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0924 00:34:17.217294   44220 command_runner.go:130] >       ],
	I0924 00:34:17.217301   44220 command_runner.go:130] >       "repoDigests": [
	I0924 00:34:17.217349   44220 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0924 00:34:17.217365   44220 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0924 00:34:17.217371   44220 command_runner.go:130] >       ],
	I0924 00:34:17.217377   44220 command_runner.go:130] >       "size": "92733849",
	I0924 00:34:17.217385   44220 command_runner.go:130] >       "uid": null,
	I0924 00:34:17.217394   44220 command_runner.go:130] >       "username": "",
	I0924 00:34:17.217403   44220 command_runner.go:130] >       "spec": null,
	I0924 00:34:17.217412   44220 command_runner.go:130] >       "pinned": false
	I0924 00:34:17.217426   44220 command_runner.go:130] >     },
	I0924 00:34:17.217435   44220 command_runner.go:130] >     {
	I0924 00:34:17.217446   44220 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0924 00:34:17.217455   44220 command_runner.go:130] >       "repoTags": [
	I0924 00:34:17.217464   44220 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0924 00:34:17.217472   44220 command_runner.go:130] >       ],
	I0924 00:34:17.217479   44220 command_runner.go:130] >       "repoDigests": [
	I0924 00:34:17.217494   44220 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0924 00:34:17.217510   44220 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0924 00:34:17.217518   44220 command_runner.go:130] >       ],
	I0924 00:34:17.217526   44220 command_runner.go:130] >       "size": "68420934",
	I0924 00:34:17.217534   44220 command_runner.go:130] >       "uid": {
	I0924 00:34:17.217542   44220 command_runner.go:130] >         "value": "0"
	I0924 00:34:17.217551   44220 command_runner.go:130] >       },
	I0924 00:34:17.217560   44220 command_runner.go:130] >       "username": "",
	I0924 00:34:17.217569   44220 command_runner.go:130] >       "spec": null,
	I0924 00:34:17.217579   44220 command_runner.go:130] >       "pinned": false
	I0924 00:34:17.217587   44220 command_runner.go:130] >     },
	I0924 00:34:17.217593   44220 command_runner.go:130] >     {
	I0924 00:34:17.217606   44220 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0924 00:34:17.217615   44220 command_runner.go:130] >       "repoTags": [
	I0924 00:34:17.217624   44220 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0924 00:34:17.217632   44220 command_runner.go:130] >       ],
	I0924 00:34:17.217640   44220 command_runner.go:130] >       "repoDigests": [
	I0924 00:34:17.217654   44220 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0924 00:34:17.217673   44220 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0924 00:34:17.217681   44220 command_runner.go:130] >       ],
	I0924 00:34:17.217688   44220 command_runner.go:130] >       "size": "742080",
	I0924 00:34:17.217697   44220 command_runner.go:130] >       "uid": {
	I0924 00:34:17.217705   44220 command_runner.go:130] >         "value": "65535"
	I0924 00:34:17.217714   44220 command_runner.go:130] >       },
	I0924 00:34:17.217723   44220 command_runner.go:130] >       "username": "",
	I0924 00:34:17.217732   44220 command_runner.go:130] >       "spec": null,
	I0924 00:34:17.217739   44220 command_runner.go:130] >       "pinned": true
	I0924 00:34:17.217748   44220 command_runner.go:130] >     }
	I0924 00:34:17.217754   44220 command_runner.go:130] >   ]
	I0924 00:34:17.217762   44220 command_runner.go:130] > }
	I0924 00:34:17.218227   44220 crio.go:514] all images are preloaded for cri-o runtime.
	I0924 00:34:17.218242   44220 cache_images.go:84] Images are preloaded, skipping loading
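	(Editor's illustration, not minikube's own code: a minimal Go sketch of the check the log describes above, i.e. parsing the `sudo crictl images --output json` payload and confirming the required tags are present. The required-tag list below is taken from the listing above and is only an example.)

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// imageList matches the shape of the JSON shown in the log above.
	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		// Example tags from the listing above; adjust for the Kubernetes version in use.
		required := []string{
			"registry.k8s.io/kube-apiserver:v1.31.1",
			"registry.k8s.io/kube-scheduler:v1.31.1",
			"registry.k8s.io/pause:3.10",
		}
		for _, tag := range required {
			fmt.Printf("%s preloaded: %v\n", tag, have[tag])
		}
	}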
	I0924 00:34:17.218250   44220 kubeadm.go:934] updating node { 192.168.39.199 8443 v1.31.1 crio true true} ...
	I0924 00:34:17.218386   44220 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-246036 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.199
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-246036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
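	(Editor's illustration, not the actual minikube code path: a hedged Go sketch of how the kubelet ExecStart line logged above could be assembled from the node parameters. The binary path and flag set mirror the log line; the node name and IP are the example values from this run.)

	package main

	import (
		"fmt"
		"strings"
	)

	// kubeletExecStart builds an ExecStart line like the one in the unit snippet above.
	func kubeletExecStart(k8sVersion, nodeName, nodeIP string) string {
		flags := []string{
			"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
			"--config=/var/lib/kubelet/config.yaml",
			"--hostname-override=" + nodeName,
			"--kubeconfig=/etc/kubernetes/kubelet.conf",
			"--node-ip=" + nodeIP,
		}
		return fmt.Sprintf("ExecStart=/var/lib/minikube/binaries/%s/kubelet %s",
			k8sVersion, strings.Join(flags, " "))
	}

	func main() {
		fmt.Println(kubeletExecStart("v1.31.1", "multinode-246036", "192.168.39.199"))
	}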
	I0924 00:34:17.218459   44220 ssh_runner.go:195] Run: crio config
	I0924 00:34:17.259799   44220 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0924 00:34:17.259829   44220 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0924 00:34:17.259841   44220 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0924 00:34:17.259846   44220 command_runner.go:130] > #
	I0924 00:34:17.259856   44220 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0924 00:34:17.259865   44220 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0924 00:34:17.259874   44220 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0924 00:34:17.259886   44220 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0924 00:34:17.259893   44220 command_runner.go:130] > # reload'.
	I0924 00:34:17.259907   44220 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0924 00:34:17.259917   44220 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0924 00:34:17.259930   44220 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0924 00:34:17.259939   44220 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0924 00:34:17.259948   44220 command_runner.go:130] > [crio]
	I0924 00:34:17.259956   44220 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0924 00:34:17.259966   44220 command_runner.go:130] > # containers images, in this directory.
	I0924 00:34:17.259974   44220 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0924 00:34:17.259995   44220 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0924 00:34:17.260008   44220 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0924 00:34:17.260019   44220 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0924 00:34:17.260027   44220 command_runner.go:130] > # imagestore = ""
	I0924 00:34:17.260039   44220 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0924 00:34:17.260052   44220 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0924 00:34:17.260062   44220 command_runner.go:130] > storage_driver = "overlay"
	I0924 00:34:17.260071   44220 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0924 00:34:17.260084   44220 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0924 00:34:17.260094   44220 command_runner.go:130] > storage_option = [
	I0924 00:34:17.260101   44220 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0924 00:34:17.260109   44220 command_runner.go:130] > ]
	I0924 00:34:17.260121   44220 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0924 00:34:17.260136   44220 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0924 00:34:17.260143   44220 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0924 00:34:17.260152   44220 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0924 00:34:17.260166   44220 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0924 00:34:17.260174   44220 command_runner.go:130] > # always happen on a node reboot
	I0924 00:34:17.260183   44220 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0924 00:34:17.260203   44220 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0924 00:34:17.260215   44220 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0924 00:34:17.260228   44220 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0924 00:34:17.260236   44220 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0924 00:34:17.260250   44220 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0924 00:34:17.260265   44220 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0924 00:34:17.260272   44220 command_runner.go:130] > # internal_wipe = true
	I0924 00:34:17.260283   44220 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0924 00:34:17.260293   44220 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0924 00:34:17.260307   44220 command_runner.go:130] > # internal_repair = false
	I0924 00:34:17.260319   44220 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0924 00:34:17.260347   44220 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0924 00:34:17.260358   44220 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0924 00:34:17.260370   44220 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0924 00:34:17.260382   44220 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0924 00:34:17.260387   44220 command_runner.go:130] > [crio.api]
	I0924 00:34:17.260400   44220 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0924 00:34:17.260408   44220 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0924 00:34:17.260421   44220 command_runner.go:130] > # IP address on which the stream server will listen.
	I0924 00:34:17.260429   44220 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0924 00:34:17.260440   44220 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0924 00:34:17.260450   44220 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0924 00:34:17.260456   44220 command_runner.go:130] > # stream_port = "0"
	I0924 00:34:17.260466   44220 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0924 00:34:17.260473   44220 command_runner.go:130] > # stream_enable_tls = false
	I0924 00:34:17.260483   44220 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0924 00:34:17.260492   44220 command_runner.go:130] > # stream_idle_timeout = ""
	I0924 00:34:17.260505   44220 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0924 00:34:17.260515   44220 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0924 00:34:17.260525   44220 command_runner.go:130] > # minutes.
	I0924 00:34:17.260531   44220 command_runner.go:130] > # stream_tls_cert = ""
	I0924 00:34:17.260543   44220 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0924 00:34:17.260554   44220 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0924 00:34:17.260564   44220 command_runner.go:130] > # stream_tls_key = ""
	I0924 00:34:17.260593   44220 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0924 00:34:17.260608   44220 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0924 00:34:17.260628   44220 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0924 00:34:17.260638   44220 command_runner.go:130] > # stream_tls_ca = ""
	I0924 00:34:17.260649   44220 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0924 00:34:17.260659   44220 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0924 00:34:17.260670   44220 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0924 00:34:17.260680   44220 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0924 00:34:17.260690   44220 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0924 00:34:17.260702   44220 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0924 00:34:17.260714   44220 command_runner.go:130] > [crio.runtime]
	I0924 00:34:17.260724   44220 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0924 00:34:17.260735   44220 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0924 00:34:17.260743   44220 command_runner.go:130] > # "nofile=1024:2048"
	I0924 00:34:17.260755   44220 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0924 00:34:17.260765   44220 command_runner.go:130] > # default_ulimits = [
	I0924 00:34:17.260770   44220 command_runner.go:130] > # ]
	I0924 00:34:17.260783   44220 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0924 00:34:17.260791   44220 command_runner.go:130] > # no_pivot = false
	I0924 00:34:17.260804   44220 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0924 00:34:17.260816   44220 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0924 00:34:17.260823   44220 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0924 00:34:17.260837   44220 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0924 00:34:17.260849   44220 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0924 00:34:17.260861   44220 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0924 00:34:17.260878   44220 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0924 00:34:17.260888   44220 command_runner.go:130] > # Cgroup setting for conmon
	I0924 00:34:17.260898   44220 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0924 00:34:17.260909   44220 command_runner.go:130] > conmon_cgroup = "pod"
	I0924 00:34:17.260918   44220 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0924 00:34:17.260929   44220 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0924 00:34:17.260939   44220 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0924 00:34:17.260948   44220 command_runner.go:130] > conmon_env = [
	I0924 00:34:17.260957   44220 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0924 00:34:17.260965   44220 command_runner.go:130] > ]
	I0924 00:34:17.260974   44220 command_runner.go:130] > # Additional environment variables to set for all the
	I0924 00:34:17.260984   44220 command_runner.go:130] > # containers. These are overridden if set in the
	I0924 00:34:17.260998   44220 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0924 00:34:17.261008   44220 command_runner.go:130] > # default_env = [
	I0924 00:34:17.261014   44220 command_runner.go:130] > # ]
	I0924 00:34:17.261023   44220 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0924 00:34:17.261034   44220 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0924 00:34:17.261044   44220 command_runner.go:130] > # selinux = false
	I0924 00:34:17.261053   44220 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0924 00:34:17.261065   44220 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0924 00:34:17.261077   44220 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0924 00:34:17.261088   44220 command_runner.go:130] > # seccomp_profile = ""
	I0924 00:34:17.261099   44220 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0924 00:34:17.261108   44220 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0924 00:34:17.261120   44220 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0924 00:34:17.261130   44220 command_runner.go:130] > # which might increase security.
	I0924 00:34:17.261137   44220 command_runner.go:130] > # This option is currently deprecated,
	I0924 00:34:17.261148   44220 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0924 00:34:17.261162   44220 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0924 00:34:17.261176   44220 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0924 00:34:17.261186   44220 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0924 00:34:17.261199   44220 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0924 00:34:17.261212   44220 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0924 00:34:17.261240   44220 command_runner.go:130] > # This option supports live configuration reload.
	I0924 00:34:17.261255   44220 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0924 00:34:17.261267   44220 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0924 00:34:17.261275   44220 command_runner.go:130] > # the cgroup blockio controller.
	I0924 00:34:17.261282   44220 command_runner.go:130] > # blockio_config_file = ""
	I0924 00:34:17.261296   44220 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0924 00:34:17.261306   44220 command_runner.go:130] > # blockio parameters.
	I0924 00:34:17.261313   44220 command_runner.go:130] > # blockio_reload = false
	I0924 00:34:17.261324   44220 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0924 00:34:17.261332   44220 command_runner.go:130] > # irqbalance daemon.
	I0924 00:34:17.261340   44220 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0924 00:34:17.261353   44220 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0924 00:34:17.261365   44220 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0924 00:34:17.261378   44220 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0924 00:34:17.261387   44220 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0924 00:34:17.261400   44220 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0924 00:34:17.261408   44220 command_runner.go:130] > # This option supports live configuration reload.
	I0924 00:34:17.261417   44220 command_runner.go:130] > # rdt_config_file = ""
	I0924 00:34:17.261426   44220 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0924 00:34:17.261435   44220 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0924 00:34:17.261507   44220 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0924 00:34:17.261525   44220 command_runner.go:130] > # separate_pull_cgroup = ""
	I0924 00:34:17.261535   44220 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0924 00:34:17.261547   44220 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0924 00:34:17.261556   44220 command_runner.go:130] > # will be added.
	I0924 00:34:17.261564   44220 command_runner.go:130] > # default_capabilities = [
	I0924 00:34:17.261578   44220 command_runner.go:130] > # 	"CHOWN",
	I0924 00:34:17.261585   44220 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0924 00:34:17.261595   44220 command_runner.go:130] > # 	"FSETID",
	I0924 00:34:17.261601   44220 command_runner.go:130] > # 	"FOWNER",
	I0924 00:34:17.261610   44220 command_runner.go:130] > # 	"SETGID",
	I0924 00:34:17.261615   44220 command_runner.go:130] > # 	"SETUID",
	I0924 00:34:17.261621   44220 command_runner.go:130] > # 	"SETPCAP",
	I0924 00:34:17.261629   44220 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0924 00:34:17.261638   44220 command_runner.go:130] > # 	"KILL",
	I0924 00:34:17.261643   44220 command_runner.go:130] > # ]
	I0924 00:34:17.261655   44220 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0924 00:34:17.261668   44220 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0924 00:34:17.261678   44220 command_runner.go:130] > # add_inheritable_capabilities = false
	I0924 00:34:17.261688   44220 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0924 00:34:17.261699   44220 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0924 00:34:17.261708   44220 command_runner.go:130] > default_sysctls = [
	I0924 00:34:17.261719   44220 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0924 00:34:17.261727   44220 command_runner.go:130] > ]
	I0924 00:34:17.261734   44220 command_runner.go:130] > # List of devices on the host that a
	I0924 00:34:17.261746   44220 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0924 00:34:17.261753   44220 command_runner.go:130] > # allowed_devices = [
	I0924 00:34:17.261762   44220 command_runner.go:130] > # 	"/dev/fuse",
	I0924 00:34:17.261767   44220 command_runner.go:130] > # ]
	I0924 00:34:17.261776   44220 command_runner.go:130] > # List of additional devices. specified as
	I0924 00:34:17.261787   44220 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0924 00:34:17.261799   44220 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0924 00:34:17.261810   44220 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0924 00:34:17.261820   44220 command_runner.go:130] > # additional_devices = [
	I0924 00:34:17.261825   44220 command_runner.go:130] > # ]
	I0924 00:34:17.261838   44220 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0924 00:34:17.261846   44220 command_runner.go:130] > # cdi_spec_dirs = [
	I0924 00:34:17.261853   44220 command_runner.go:130] > # 	"/etc/cdi",
	I0924 00:34:17.261859   44220 command_runner.go:130] > # 	"/var/run/cdi",
	I0924 00:34:17.261868   44220 command_runner.go:130] > # ]
	I0924 00:34:17.261877   44220 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0924 00:34:17.261890   44220 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0924 00:34:17.261899   44220 command_runner.go:130] > # Defaults to false.
	I0924 00:34:17.261907   44220 command_runner.go:130] > # device_ownership_from_security_context = false
	I0924 00:34:17.261920   44220 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0924 00:34:17.261932   44220 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0924 00:34:17.261942   44220 command_runner.go:130] > # hooks_dir = [
	I0924 00:34:17.261950   44220 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0924 00:34:17.261958   44220 command_runner.go:130] > # ]
	I0924 00:34:17.261969   44220 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0924 00:34:17.261982   44220 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0924 00:34:17.261993   44220 command_runner.go:130] > # its default mounts from the following two files:
	I0924 00:34:17.261999   44220 command_runner.go:130] > #
	I0924 00:34:17.262011   44220 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0924 00:34:17.262023   44220 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0924 00:34:17.262035   44220 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0924 00:34:17.262042   44220 command_runner.go:130] > #
	I0924 00:34:17.262051   44220 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0924 00:34:17.262062   44220 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0924 00:34:17.262074   44220 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0924 00:34:17.262085   44220 command_runner.go:130] > #      only add mounts it finds in this file.
	I0924 00:34:17.262093   44220 command_runner.go:130] > #
	I0924 00:34:17.262099   44220 command_runner.go:130] > # default_mounts_file = ""
	I0924 00:34:17.262109   44220 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0924 00:34:17.262129   44220 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0924 00:34:17.262137   44220 command_runner.go:130] > pids_limit = 1024
	I0924 00:34:17.262145   44220 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0924 00:34:17.262158   44220 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0924 00:34:17.262171   44220 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0924 00:34:17.262187   44220 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0924 00:34:17.262197   44220 command_runner.go:130] > # log_size_max = -1
	I0924 00:34:17.262210   44220 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0924 00:34:17.262220   44220 command_runner.go:130] > # log_to_journald = false
	I0924 00:34:17.262232   44220 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0924 00:34:17.262244   44220 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0924 00:34:17.262255   44220 command_runner.go:130] > # Path to directory for container attach sockets.
	I0924 00:34:17.262266   44220 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0924 00:34:17.262274   44220 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0924 00:34:17.262284   44220 command_runner.go:130] > # bind_mount_prefix = ""
	I0924 00:34:17.262297   44220 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0924 00:34:17.262306   44220 command_runner.go:130] > # read_only = false
	I0924 00:34:17.262318   44220 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0924 00:34:17.262330   44220 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0924 00:34:17.262340   44220 command_runner.go:130] > # live configuration reload.
	I0924 00:34:17.262351   44220 command_runner.go:130] > # log_level = "info"
	I0924 00:34:17.262363   44220 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0924 00:34:17.262374   44220 command_runner.go:130] > # This option supports live configuration reload.
	I0924 00:34:17.262384   44220 command_runner.go:130] > # log_filter = ""
	I0924 00:34:17.262396   44220 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0924 00:34:17.262409   44220 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0924 00:34:17.262419   44220 command_runner.go:130] > # separated by comma.
	I0924 00:34:17.262433   44220 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0924 00:34:17.262443   44220 command_runner.go:130] > # uid_mappings = ""
	I0924 00:34:17.262458   44220 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0924 00:34:17.262471   44220 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0924 00:34:17.262481   44220 command_runner.go:130] > # separated by comma.
	I0924 00:34:17.262496   44220 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0924 00:34:17.262505   44220 command_runner.go:130] > # gid_mappings = ""
	I0924 00:34:17.262517   44220 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0924 00:34:17.262530   44220 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0924 00:34:17.262548   44220 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0924 00:34:17.262563   44220 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0924 00:34:17.262578   44220 command_runner.go:130] > # minimum_mappable_uid = -1
	I0924 00:34:17.262589   44220 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0924 00:34:17.262601   44220 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0924 00:34:17.262614   44220 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0924 00:34:17.262628   44220 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0924 00:34:17.262638   44220 command_runner.go:130] > # minimum_mappable_gid = -1
	I0924 00:34:17.262649   44220 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0924 00:34:17.262662   44220 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0924 00:34:17.262673   44220 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0924 00:34:17.262681   44220 command_runner.go:130] > # ctr_stop_timeout = 30
	I0924 00:34:17.262691   44220 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0924 00:34:17.262701   44220 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0924 00:34:17.262710   44220 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0924 00:34:17.262721   44220 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0924 00:34:17.262730   44220 command_runner.go:130] > drop_infra_ctr = false
	I0924 00:34:17.262742   44220 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0924 00:34:17.262753   44220 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0924 00:34:17.262767   44220 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0924 00:34:17.262777   44220 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0924 00:34:17.262791   44220 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0924 00:34:17.262803   44220 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0924 00:34:17.262815   44220 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0924 00:34:17.262826   44220 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0924 00:34:17.262834   44220 command_runner.go:130] > # shared_cpuset = ""
	I0924 00:34:17.262842   44220 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0924 00:34:17.262852   44220 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0924 00:34:17.262861   44220 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0924 00:34:17.262874   44220 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0924 00:34:17.262884   44220 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0924 00:34:17.262895   44220 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0924 00:34:17.262908   44220 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0924 00:34:17.262917   44220 command_runner.go:130] > # enable_criu_support = false
	I0924 00:34:17.262928   44220 command_runner.go:130] > # Enable/disable the generation of the container,
	I0924 00:34:17.262941   44220 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0924 00:34:17.262951   44220 command_runner.go:130] > # enable_pod_events = false
	I0924 00:34:17.262962   44220 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0924 00:34:17.262974   44220 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0924 00:34:17.262983   44220 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0924 00:34:17.262992   44220 command_runner.go:130] > # default_runtime = "runc"
	I0924 00:34:17.263002   44220 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0924 00:34:17.263015   44220 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0924 00:34:17.263031   44220 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0924 00:34:17.263043   44220 command_runner.go:130] > # creation as a file is not desired either.
	I0924 00:34:17.263059   44220 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0924 00:34:17.263069   44220 command_runner.go:130] > # the hostname is being managed dynamically.
	I0924 00:34:17.263078   44220 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0924 00:34:17.263085   44220 command_runner.go:130] > # ]
	I0924 00:34:17.263095   44220 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0924 00:34:17.263107   44220 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0924 00:34:17.263118   44220 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0924 00:34:17.263128   44220 command_runner.go:130] > # Each entry in the table should follow the format:
	I0924 00:34:17.263135   44220 command_runner.go:130] > #
	I0924 00:34:17.263142   44220 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0924 00:34:17.263152   44220 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0924 00:34:17.263189   44220 command_runner.go:130] > # runtime_type = "oci"
	I0924 00:34:17.263198   44220 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0924 00:34:17.263205   44220 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0924 00:34:17.263215   44220 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0924 00:34:17.263224   44220 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0924 00:34:17.263232   44220 command_runner.go:130] > # monitor_env = []
	I0924 00:34:17.263242   44220 command_runner.go:130] > # privileged_without_host_devices = false
	I0924 00:34:17.263252   44220 command_runner.go:130] > # allowed_annotations = []
	I0924 00:34:17.263260   44220 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0924 00:34:17.263269   44220 command_runner.go:130] > # Where:
	I0924 00:34:17.263276   44220 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0924 00:34:17.263288   44220 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0924 00:34:17.263297   44220 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0924 00:34:17.263306   44220 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0924 00:34:17.263311   44220 command_runner.go:130] > #   in $PATH.
	I0924 00:34:17.263320   44220 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0924 00:34:17.263327   44220 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0924 00:34:17.263342   44220 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0924 00:34:17.263347   44220 command_runner.go:130] > #   state.
	I0924 00:34:17.263356   44220 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0924 00:34:17.263364   44220 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0924 00:34:17.263375   44220 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0924 00:34:17.263385   44220 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0924 00:34:17.263395   44220 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0924 00:34:17.263404   44220 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0924 00:34:17.263410   44220 command_runner.go:130] > #   The currently recognized values are:
	I0924 00:34:17.263419   44220 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0924 00:34:17.263429   44220 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0924 00:34:17.263437   44220 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0924 00:34:17.263446   44220 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0924 00:34:17.263456   44220 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0924 00:34:17.263469   44220 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0924 00:34:17.263482   44220 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0924 00:34:17.263495   44220 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0924 00:34:17.263506   44220 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0924 00:34:17.263517   44220 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0924 00:34:17.263526   44220 command_runner.go:130] > #   deprecated option "conmon".
	I0924 00:34:17.263535   44220 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0924 00:34:17.263545   44220 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0924 00:34:17.263558   44220 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0924 00:34:17.263575   44220 command_runner.go:130] > #   should be moved to the container's cgroup
	I0924 00:34:17.263588   44220 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I0924 00:34:17.263598   44220 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0924 00:34:17.263609   44220 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0924 00:34:17.263619   44220 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0924 00:34:17.263627   44220 command_runner.go:130] > #
	I0924 00:34:17.263637   44220 command_runner.go:130] > # Using the seccomp notifier feature:
	I0924 00:34:17.263646   44220 command_runner.go:130] > #
	I0924 00:34:17.263659   44220 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0924 00:34:17.263672   44220 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0924 00:34:17.263680   44220 command_runner.go:130] > #
	I0924 00:34:17.263692   44220 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0924 00:34:17.263703   44220 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0924 00:34:17.263711   44220 command_runner.go:130] > #
	I0924 00:34:17.263720   44220 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0924 00:34:17.263729   44220 command_runner.go:130] > # feature.
	I0924 00:34:17.263737   44220 command_runner.go:130] > #
	I0924 00:34:17.263749   44220 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0924 00:34:17.263761   44220 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0924 00:34:17.263772   44220 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0924 00:34:17.263784   44220 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0924 00:34:17.263795   44220 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0924 00:34:17.263802   44220 command_runner.go:130] > #
	I0924 00:34:17.263812   44220 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0924 00:34:17.263823   44220 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0924 00:34:17.263829   44220 command_runner.go:130] > #
	I0924 00:34:17.263840   44220 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0924 00:34:17.263850   44220 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0924 00:34:17.263857   44220 command_runner.go:130] > #
	I0924 00:34:17.263866   44220 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0924 00:34:17.263877   44220 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0924 00:34:17.263885   44220 command_runner.go:130] > # limitation.
	I0924 00:34:17.263895   44220 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0924 00:34:17.263904   44220 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0924 00:34:17.263909   44220 command_runner.go:130] > runtime_type = "oci"
	I0924 00:34:17.263915   44220 command_runner.go:130] > runtime_root = "/run/runc"
	I0924 00:34:17.263921   44220 command_runner.go:130] > runtime_config_path = ""
	I0924 00:34:17.263928   44220 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0924 00:34:17.263938   44220 command_runner.go:130] > monitor_cgroup = "pod"
	I0924 00:34:17.263947   44220 command_runner.go:130] > monitor_exec_cgroup = ""
	I0924 00:34:17.263956   44220 command_runner.go:130] > monitor_env = [
	I0924 00:34:17.263966   44220 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0924 00:34:17.263973   44220 command_runner.go:130] > ]
	I0924 00:34:17.263979   44220 command_runner.go:130] > privileged_without_host_devices = false
	I0924 00:34:17.263987   44220 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0924 00:34:17.263998   44220 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0924 00:34:17.264009   44220 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0924 00:34:17.264023   44220 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0924 00:34:17.264038   44220 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0924 00:34:17.264048   44220 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0924 00:34:17.264067   44220 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0924 00:34:17.264082   44220 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0924 00:34:17.264093   44220 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0924 00:34:17.264107   44220 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0924 00:34:17.264116   44220 command_runner.go:130] > # Example:
	I0924 00:34:17.264124   44220 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0924 00:34:17.264134   44220 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0924 00:34:17.264141   44220 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0924 00:34:17.264152   44220 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0924 00:34:17.264160   44220 command_runner.go:130] > # cpuset = 0
	I0924 00:34:17.264170   44220 command_runner.go:130] > # cpushares = "0-1"
	I0924 00:34:17.264177   44220 command_runner.go:130] > # Where:
	I0924 00:34:17.264182   44220 command_runner.go:130] > # The workload name is workload-type.
	I0924 00:34:17.264190   44220 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0924 00:34:17.264198   44220 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0924 00:34:17.264206   44220 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0924 00:34:17.264214   44220 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0924 00:34:17.264221   44220 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0924 00:34:17.264230   44220 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0924 00:34:17.264239   44220 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0924 00:34:17.264246   44220 command_runner.go:130] > # Default value is set to true
	I0924 00:34:17.264251   44220 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0924 00:34:17.264259   44220 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0924 00:34:17.264265   44220 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0924 00:34:17.264270   44220 command_runner.go:130] > # Default value is set to 'false'
	I0924 00:34:17.264274   44220 command_runner.go:130] > # disable_hostport_mapping = false
	I0924 00:34:17.264280   44220 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0924 00:34:17.264283   44220 command_runner.go:130] > #
	I0924 00:34:17.264288   44220 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0924 00:34:17.264294   44220 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0924 00:34:17.264302   44220 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0924 00:34:17.264313   44220 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0924 00:34:17.264321   44220 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0924 00:34:17.264325   44220 command_runner.go:130] > [crio.image]
	I0924 00:34:17.264345   44220 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0924 00:34:17.264352   44220 command_runner.go:130] > # default_transport = "docker://"
	I0924 00:34:17.264365   44220 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0924 00:34:17.264374   44220 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0924 00:34:17.264381   44220 command_runner.go:130] > # global_auth_file = ""
	I0924 00:34:17.264389   44220 command_runner.go:130] > # The image used to instantiate infra containers.
	I0924 00:34:17.264398   44220 command_runner.go:130] > # This option supports live configuration reload.
	I0924 00:34:17.264405   44220 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0924 00:34:17.264411   44220 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0924 00:34:17.264416   44220 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0924 00:34:17.264420   44220 command_runner.go:130] > # This option supports live configuration reload.
	I0924 00:34:17.264424   44220 command_runner.go:130] > # pause_image_auth_file = ""
	I0924 00:34:17.264429   44220 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0924 00:34:17.264434   44220 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0924 00:34:17.264439   44220 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0924 00:34:17.264444   44220 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0924 00:34:17.264449   44220 command_runner.go:130] > # pause_command = "/pause"
	I0924 00:34:17.264455   44220 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0924 00:34:17.264460   44220 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0924 00:34:17.264465   44220 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0924 00:34:17.264470   44220 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0924 00:34:17.264476   44220 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0924 00:34:17.264481   44220 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0924 00:34:17.264484   44220 command_runner.go:130] > # pinned_images = [
	I0924 00:34:17.264487   44220 command_runner.go:130] > # ]
	I0924 00:34:17.264493   44220 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0924 00:34:17.264498   44220 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0924 00:34:17.264503   44220 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0924 00:34:17.264509   44220 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0924 00:34:17.264514   44220 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0924 00:34:17.264521   44220 command_runner.go:130] > # signature_policy = ""
	I0924 00:34:17.264526   44220 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0924 00:34:17.264532   44220 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0924 00:34:17.264540   44220 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0924 00:34:17.264549   44220 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I0924 00:34:17.264555   44220 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0924 00:34:17.264561   44220 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0924 00:34:17.264577   44220 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0924 00:34:17.264586   44220 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0924 00:34:17.264592   44220 command_runner.go:130] > # changing them here.
	I0924 00:34:17.264596   44220 command_runner.go:130] > # insecure_registries = [
	I0924 00:34:17.264601   44220 command_runner.go:130] > # ]
	I0924 00:34:17.264608   44220 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0924 00:34:17.264615   44220 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0924 00:34:17.264619   44220 command_runner.go:130] > # image_volumes = "mkdir"
	I0924 00:34:17.264624   44220 command_runner.go:130] > # Temporary directory to use for storing big files
	I0924 00:34:17.264629   44220 command_runner.go:130] > # big_files_temporary_dir = ""
	I0924 00:34:17.264637   44220 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0924 00:34:17.264643   44220 command_runner.go:130] > # CNI plugins.
	I0924 00:34:17.264647   44220 command_runner.go:130] > [crio.network]
	I0924 00:34:17.264655   44220 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0924 00:34:17.264662   44220 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0924 00:34:17.264667   44220 command_runner.go:130] > # cni_default_network = ""
	I0924 00:34:17.264674   44220 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0924 00:34:17.264681   44220 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0924 00:34:17.264687   44220 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0924 00:34:17.264693   44220 command_runner.go:130] > # plugin_dirs = [
	I0924 00:34:17.264697   44220 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0924 00:34:17.264702   44220 command_runner.go:130] > # ]
	I0924 00:34:17.264708   44220 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0924 00:34:17.264714   44220 command_runner.go:130] > [crio.metrics]
	I0924 00:34:17.264718   44220 command_runner.go:130] > # Globally enable or disable metrics support.
	I0924 00:34:17.264725   44220 command_runner.go:130] > enable_metrics = true
	I0924 00:34:17.264729   44220 command_runner.go:130] > # Specify enabled metrics collectors.
	I0924 00:34:17.264736   44220 command_runner.go:130] > # By default, all metrics are enabled.
	I0924 00:34:17.264742   44220 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0924 00:34:17.264749   44220 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0924 00:34:17.264755   44220 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0924 00:34:17.264761   44220 command_runner.go:130] > # metrics_collectors = [
	I0924 00:34:17.264765   44220 command_runner.go:130] > # 	"operations",
	I0924 00:34:17.264771   44220 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0924 00:34:17.264775   44220 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0924 00:34:17.264781   44220 command_runner.go:130] > # 	"operations_errors",
	I0924 00:34:17.264786   44220 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0924 00:34:17.264792   44220 command_runner.go:130] > # 	"image_pulls_by_name",
	I0924 00:34:17.264796   44220 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0924 00:34:17.264802   44220 command_runner.go:130] > # 	"image_pulls_failures",
	I0924 00:34:17.264806   44220 command_runner.go:130] > # 	"image_pulls_successes",
	I0924 00:34:17.264816   44220 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0924 00:34:17.264822   44220 command_runner.go:130] > # 	"image_layer_reuse",
	I0924 00:34:17.264827   44220 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0924 00:34:17.264835   44220 command_runner.go:130] > # 	"containers_oom_total",
	I0924 00:34:17.264841   44220 command_runner.go:130] > # 	"containers_oom",
	I0924 00:34:17.264845   44220 command_runner.go:130] > # 	"processes_defunct",
	I0924 00:34:17.264851   44220 command_runner.go:130] > # 	"operations_total",
	I0924 00:34:17.264855   44220 command_runner.go:130] > # 	"operations_latency_seconds",
	I0924 00:34:17.264861   44220 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0924 00:34:17.264866   44220 command_runner.go:130] > # 	"operations_errors_total",
	I0924 00:34:17.264872   44220 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0924 00:34:17.264876   44220 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0924 00:34:17.264880   44220 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0924 00:34:17.264885   44220 command_runner.go:130] > # 	"image_pulls_success_total",
	I0924 00:34:17.264890   44220 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0924 00:34:17.264896   44220 command_runner.go:130] > # 	"containers_oom_count_total",
	I0924 00:34:17.264901   44220 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0924 00:34:17.264907   44220 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0924 00:34:17.264911   44220 command_runner.go:130] > # ]
	I0924 00:34:17.264918   44220 command_runner.go:130] > # The port on which the metrics server will listen.
	I0924 00:34:17.264923   44220 command_runner.go:130] > # metrics_port = 9090
	I0924 00:34:17.264930   44220 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0924 00:34:17.264934   44220 command_runner.go:130] > # metrics_socket = ""
	I0924 00:34:17.264941   44220 command_runner.go:130] > # The certificate for the secure metrics server.
	I0924 00:34:17.264946   44220 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0924 00:34:17.264954   44220 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0924 00:34:17.264961   44220 command_runner.go:130] > # certificate on any modification event.
	I0924 00:34:17.264965   44220 command_runner.go:130] > # metrics_cert = ""
	I0924 00:34:17.264972   44220 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0924 00:34:17.264976   44220 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0924 00:34:17.264982   44220 command_runner.go:130] > # metrics_key = ""
	I0924 00:34:17.264988   44220 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0924 00:34:17.264994   44220 command_runner.go:130] > [crio.tracing]
	I0924 00:34:17.264999   44220 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0924 00:34:17.265005   44220 command_runner.go:130] > # enable_tracing = false
	I0924 00:34:17.265011   44220 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0924 00:34:17.265017   44220 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0924 00:34:17.265025   44220 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0924 00:34:17.265035   44220 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0924 00:34:17.265041   44220 command_runner.go:130] > # CRI-O NRI configuration.
	I0924 00:34:17.265049   44220 command_runner.go:130] > [crio.nri]
	I0924 00:34:17.265059   44220 command_runner.go:130] > # Globally enable or disable NRI.
	I0924 00:34:17.265067   44220 command_runner.go:130] > # enable_nri = false
	I0924 00:34:17.265076   44220 command_runner.go:130] > # NRI socket to listen on.
	I0924 00:34:17.265085   44220 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0924 00:34:17.265095   44220 command_runner.go:130] > # NRI plugin directory to use.
	I0924 00:34:17.265102   44220 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0924 00:34:17.265110   44220 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0924 00:34:17.265117   44220 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0924 00:34:17.265122   44220 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0924 00:34:17.265129   44220 command_runner.go:130] > # nri_disable_connections = false
	I0924 00:34:17.265136   44220 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0924 00:34:17.265142   44220 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0924 00:34:17.265147   44220 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0924 00:34:17.265154   44220 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0924 00:34:17.265159   44220 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0924 00:34:17.265165   44220 command_runner.go:130] > [crio.stats]
	I0924 00:34:17.265171   44220 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0924 00:34:17.265179   44220 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0924 00:34:17.265185   44220 command_runner.go:130] > # stats_collection_period = 0
	I0924 00:34:17.265219   44220 command_runner.go:130] ! time="2024-09-24 00:34:17.225563758Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0924 00:34:17.265232   44220 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
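
	The dump above is CRI-O reporting its effective configuration, including pause_image under [crio.image] and enable_metrics under [crio.metrics]. As a rough illustration only (the config path and the github.com/BurntSushi/toml dependency are assumptions for the sketch, not part of this test run), a Go sketch that reads those two keys back out of a crio.conf-style file:

	package main

	import (
		"fmt"
		"log"

		"github.com/BurntSushi/toml" // assumed third-party dependency for this sketch
	)

	// crioConfig models only the keys spot-checked here.
	type crioConfig struct {
		Crio struct {
			Image struct {
				PauseImage string `toml:"pause_image"`
			} `toml:"image"`
			Metrics struct {
				EnableMetrics bool `toml:"enable_metrics"`
			} `toml:"metrics"`
		} `toml:"crio"`
	}

	func main() {
		var cfg crioConfig
		// Hypothetical path; the dump above is simply what CRI-O reports as its effective config.
		if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
			log.Fatal(err)
		}
		fmt.Println("pause_image:", cfg.Crio.Image.PauseImage)
		fmt.Println("enable_metrics:", cfg.Crio.Metrics.EnableMetrics)
	}
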
	I0924 00:34:17.265305   44220 cni.go:84] Creating CNI manager for ""
	I0924 00:34:17.265315   44220 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0924 00:34:17.265328   44220 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 00:34:17.265353   44220 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.199 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-246036 NodeName:multinode-246036 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.199"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.199 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 00:34:17.265469   44220 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.199
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-246036"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.199
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.199"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
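
	The manifest above is a multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A minimal sketch, assuming the sigs.k8s.io/yaml package, of pulling the KubeletConfiguration document out of that file and spot-checking two of the fields shown above:

	package main

	import (
		"fmt"
		"log"
		"os"
		"strings"

		"sigs.k8s.io/yaml" // assumed dependency for this sketch
	)

	func main() {
		// Path taken from the scp step logged below; adjust as needed.
		raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			log.Fatal(err)
		}
		// The file holds several YAML documents separated by "---" lines;
		// a plain string split is good enough for this sketch.
		for _, doc := range strings.Split(string(raw), "\n---\n") {
			var obj map[string]interface{}
			if err := yaml.Unmarshal([]byte(doc), &obj); err != nil {
				log.Fatal(err)
			}
			if obj["kind"] == "KubeletConfiguration" {
				fmt.Println("cgroupDriver:", obj["cgroupDriver"])
				fmt.Println("containerRuntimeEndpoint:", obj["containerRuntimeEndpoint"])
			}
		}
	}
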
	
	I0924 00:34:17.265527   44220 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 00:34:17.275904   44220 command_runner.go:130] > kubeadm
	I0924 00:34:17.275927   44220 command_runner.go:130] > kubectl
	I0924 00:34:17.275931   44220 command_runner.go:130] > kubelet
	I0924 00:34:17.275951   44220 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 00:34:17.275996   44220 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 00:34:17.285302   44220 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0924 00:34:17.302037   44220 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 00:34:17.317710   44220 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0924 00:34:17.333734   44220 ssh_runner.go:195] Run: grep 192.168.39.199	control-plane.minikube.internal$ /etc/hosts
	I0924 00:34:17.337314   44220 command_runner.go:130] > 192.168.39.199	control-plane.minikube.internal
	I0924 00:34:17.337383   44220 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 00:34:17.475840   44220 ssh_runner.go:195] Run: sudo systemctl start kubelet
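
	The grep above only confirms that control-plane.minikube.internal already maps to the node IP in /etc/hosts. For illustration, the same check written directly in Go (IP and hostname copied from the log; this sketch does not add a missing entry):

	package main

	import (
		"bufio"
		"fmt"
		"log"
		"os"
		"strings"
	)

	func main() {
		f, err := os.Open("/etc/hosts")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		found := false
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			fields := strings.Fields(sc.Text())
			// Expect "192.168.39.199	control-plane.minikube.internal" on one line.
			if len(fields) >= 2 && fields[0] == "192.168.39.199" &&
				fields[1] == "control-plane.minikube.internal" {
				found = true
			}
		}
		if err := sc.Err(); err != nil {
			log.Fatal(err)
		}
		fmt.Println("control-plane mapping present:", found)
	}
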
	I0924 00:34:17.490395   44220 certs.go:68] Setting up /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/multinode-246036 for IP: 192.168.39.199
	I0924 00:34:17.490427   44220 certs.go:194] generating shared ca certs ...
	I0924 00:34:17.490447   44220 certs.go:226] acquiring lock for ca certs: {Name:mk91de42c259e96c17f08dd58c70c00821bd0735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:34:17.490631   44220 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key
	I0924 00:34:17.490688   44220 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key
	I0924 00:34:17.490707   44220 certs.go:256] generating profile certs ...
	I0924 00:34:17.490804   44220 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/multinode-246036/client.key
	I0924 00:34:17.490859   44220 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/multinode-246036/apiserver.key.a48aa622
	I0924 00:34:17.490892   44220 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/multinode-246036/proxy-client.key
	I0924 00:34:17.490905   44220 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0924 00:34:17.490929   44220 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0924 00:34:17.490941   44220 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0924 00:34:17.490953   44220 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0924 00:34:17.490965   44220 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/multinode-246036/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0924 00:34:17.490978   44220 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/multinode-246036/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0924 00:34:17.490991   44220 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/multinode-246036/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0924 00:34:17.491004   44220 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/multinode-246036/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0924 00:34:17.491065   44220 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem (1338 bytes)
	W0924 00:34:17.491106   44220 certs.go:480] ignoring /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793_empty.pem, impossibly tiny 0 bytes
	I0924 00:34:17.491120   44220 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem (1675 bytes)
	I0924 00:34:17.491152   44220 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem (1082 bytes)
	I0924 00:34:17.491175   44220 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem (1123 bytes)
	I0924 00:34:17.491198   44220 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem (1679 bytes)
	I0924 00:34:17.491239   44220 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem (1708 bytes)
	I0924 00:34:17.491265   44220 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0924 00:34:17.491279   44220 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem -> /usr/share/ca-certificates/14793.pem
	I0924 00:34:17.491292   44220 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> /usr/share/ca-certificates/147932.pem
	I0924 00:34:17.491870   44220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 00:34:17.514745   44220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 00:34:17.537642   44220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 00:34:17.561162   44220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0924 00:34:17.585079   44220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/multinode-246036/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0924 00:34:17.609009   44220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/multinode-246036/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0924 00:34:17.632030   44220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/multinode-246036/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 00:34:17.654517   44220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/multinode-246036/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0924 00:34:17.677264   44220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 00:34:17.699225   44220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem --> /usr/share/ca-certificates/14793.pem (1338 bytes)
	I0924 00:34:17.721521   44220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /usr/share/ca-certificates/147932.pem (1708 bytes)
	I0924 00:34:17.745402   44220 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 00:34:17.761081   44220 ssh_runner.go:195] Run: openssl version
	I0924 00:34:17.766972   44220 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0924 00:34:17.767138   44220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 00:34:17.777788   44220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 00:34:17.781957   44220 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 23 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0924 00:34:17.782005   44220 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0924 00:34:17.782049   44220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 00:34:17.786916   44220 command_runner.go:130] > b5213941
	I0924 00:34:17.787303   44220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 00:34:17.796321   44220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14793.pem && ln -fs /usr/share/ca-certificates/14793.pem /etc/ssl/certs/14793.pem"
	I0924 00:34:17.806842   44220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14793.pem
	I0924 00:34:17.810983   44220 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 23 23:55 /usr/share/ca-certificates/14793.pem
	I0924 00:34:17.811012   44220 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 23:55 /usr/share/ca-certificates/14793.pem
	I0924 00:34:17.811048   44220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14793.pem
	I0924 00:34:17.816876   44220 command_runner.go:130] > 51391683
	I0924 00:34:17.816941   44220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14793.pem /etc/ssl/certs/51391683.0"
	I0924 00:34:17.827775   44220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147932.pem && ln -fs /usr/share/ca-certificates/147932.pem /etc/ssl/certs/147932.pem"
	I0924 00:34:17.839636   44220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147932.pem
	I0924 00:34:17.843714   44220 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 23 23:55 /usr/share/ca-certificates/147932.pem
	I0924 00:34:17.843863   44220 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 23:55 /usr/share/ca-certificates/147932.pem
	I0924 00:34:17.843918   44220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147932.pem
	I0924 00:34:17.849262   44220 command_runner.go:130] > 3ec20f2e
	I0924 00:34:17.849332   44220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147932.pem /etc/ssl/certs/3ec20f2e.0"
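
	The three blocks above repeat one pattern: place a PEM under /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink /etc/ssl/certs/<hash>.0 to it so the system trust store picks it up. A rough sketch of that pattern, shelling out to the same openssl invocation seen in the log (the path in main is a placeholder, and the ln step needs root, as the sudo'd commands above do):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	// linkCACert mirrors the hash-and-symlink steps from the log above.
	func linkCACert(pemPath string) error {
		// Same invocation as in the log: openssl x509 -hash -noout -in <pem>
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		// ln -fs <pem> /etc/ssl/certs/<hash>.0
		return exec.Command("ln", "-fs", pemPath, link).Run()
	}

	func main() {
		if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			log.Fatal(err)
		}
	}
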
	I0924 00:34:17.858254   44220 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 00:34:17.862102   44220 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 00:34:17.862128   44220 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0924 00:34:17.862135   44220 command_runner.go:130] > Device: 253,1	Inode: 531240      Links: 1
	I0924 00:34:17.862141   44220 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0924 00:34:17.862147   44220 command_runner.go:130] > Access: 2024-09-24 00:27:37.876450848 +0000
	I0924 00:34:17.862153   44220 command_runner.go:130] > Modify: 2024-09-24 00:27:37.876450848 +0000
	I0924 00:34:17.862161   44220 command_runner.go:130] > Change: 2024-09-24 00:27:37.876450848 +0000
	I0924 00:34:17.862169   44220 command_runner.go:130] >  Birth: 2024-09-24 00:27:37.876450848 +0000
	I0924 00:34:17.862252   44220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 00:34:17.867237   44220 command_runner.go:130] > Certificate will not expire
	I0924 00:34:17.867370   44220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 00:34:17.872364   44220 command_runner.go:130] > Certificate will not expire
	I0924 00:34:17.872421   44220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 00:34:17.877954   44220 command_runner.go:130] > Certificate will not expire
	I0924 00:34:17.878058   44220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 00:34:17.883445   44220 command_runner.go:130] > Certificate will not expire
	I0924 00:34:17.883580   44220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 00:34:17.888656   44220 command_runner.go:130] > Certificate will not expire
	I0924 00:34:17.888710   44220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0924 00:34:17.893628   44220 command_runner.go:130] > Certificate will not expire
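
	Each "openssl x509 -noout -checkend 86400" call above asks whether a certificate will still be valid 24 hours from now. The same check can be done without shelling out; a minimal sketch using only the Go standard library, with the certificate path taken from one of the stat/openssl calls above:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in the PEM file
	// expires within the given duration (the -checkend 86400 equivalent).
	func expiresWithin(path string, d time.Duration) (bool, error) {
		raw, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("expires within 24h:", soon)
	}
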
	I0924 00:34:17.893798   44220 kubeadm.go:392] StartCluster: {Name:multinode-246036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:multinode-246036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.199 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.150 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.185 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:
false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 00:34:17.893904   44220 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 00:34:17.893961   44220 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 00:34:17.934425   44220 command_runner.go:130] > bc559fe548fce4e11effa6148efd01e8ecfcdaff0beb6a7c79ceae55c7c28cec
	I0924 00:34:17.934473   44220 command_runner.go:130] > 5058772d1973616e34a3182a09e02e261c4af4678059f218039b9f253ac2867a
	I0924 00:34:17.934481   44220 command_runner.go:130] > 5b8abe628fa9ebc296eda69551985040e5281c42345224c3b2e485657f3e6e1a
	I0924 00:34:17.934489   44220 command_runner.go:130] > 4a80eb915d724ea9baff23a6b7094b8ae35e34bc9e96fabe4a2a99df6aea6dd9
	I0924 00:34:17.934496   44220 command_runner.go:130] > a6003f3f1b6367bb96065a6243ff34bb6701840ce67df93e2feb005d548ceaeb
	I0924 00:34:17.934503   44220 command_runner.go:130] > f1dea2a49f50cd2690cd94ebed4ffb97ab813d4c6fb8ea59dbb02231936efba0
	I0924 00:34:17.934512   44220 command_runner.go:130] > b98807a030c3691ed3ff8a125e673207c63e9a99c5ea6cb8859026521ca5295a
	I0924 00:34:17.934521   44220 command_runner.go:130] > 33b18f596b4effba4cf1fa17ae441e1bd1ab9d6738cd7313f9ba3b137bfcb237
	I0924 00:34:17.934546   44220 cri.go:89] found id: "bc559fe548fce4e11effa6148efd01e8ecfcdaff0beb6a7c79ceae55c7c28cec"
	I0924 00:34:17.934559   44220 cri.go:89] found id: "5058772d1973616e34a3182a09e02e261c4af4678059f218039b9f253ac2867a"
	I0924 00:34:17.934566   44220 cri.go:89] found id: "5b8abe628fa9ebc296eda69551985040e5281c42345224c3b2e485657f3e6e1a"
	I0924 00:34:17.934572   44220 cri.go:89] found id: "4a80eb915d724ea9baff23a6b7094b8ae35e34bc9e96fabe4a2a99df6aea6dd9"
	I0924 00:34:17.934579   44220 cri.go:89] found id: "a6003f3f1b6367bb96065a6243ff34bb6701840ce67df93e2feb005d548ceaeb"
	I0924 00:34:17.934586   44220 cri.go:89] found id: "f1dea2a49f50cd2690cd94ebed4ffb97ab813d4c6fb8ea59dbb02231936efba0"
	I0924 00:34:17.934589   44220 cri.go:89] found id: "b98807a030c3691ed3ff8a125e673207c63e9a99c5ea6cb8859026521ca5295a"
	I0924 00:34:17.934595   44220 cri.go:89] found id: "33b18f596b4effba4cf1fa17ae441e1bd1ab9d6738cd7313f9ba3b137bfcb237"
	I0924 00:34:17.934598   44220 cri.go:89] found id: ""
	I0924 00:34:17.934640   44220 ssh_runner.go:195] Run: sudo runc list -f json
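
	The block above is minikube's container discovery: "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system" prints one container ID per line, and each non-empty line becomes a "found id" entry. A small sketch, reusing the same crictl flags from the log, of collecting those IDs in Go:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// Same flags as the sudo'd command in the log above.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			log.Fatal(err)
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		for _, id := range ids {
			fmt.Println("found id:", id)
		}
	}
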
	
	
	==> CRI-O <==
	Sep 24 00:36:04 multinode-246036 crio[2766]: time="2024-09-24 00:36:04.208488982Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727138164208462800,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ab349b93-4d72-42b3-9a5c-b3c8de447c78 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:36:04 multinode-246036 crio[2766]: time="2024-09-24 00:36:04.209126543Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=129bf5d9-08f7-4085-9d84-70ddea2731e8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:36:04 multinode-246036 crio[2766]: time="2024-09-24 00:36:04.209194969Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=129bf5d9-08f7-4085-9d84-70ddea2731e8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:36:04 multinode-246036 crio[2766]: time="2024-09-24 00:36:04.209549138Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9beea755724ad7eb373d54f59989e0f7420cd113bbf8e082fbc7e95c96d37075,PodSandboxId:12914b88b7fef15008a1ccdb4c89d948098ea859396fac42277df56f61610611,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727138098887145679,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-b5dpk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4780e514-d69a-42fe-8f9a-ee4c0fae351c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf020b0b565a293238a03d835cc1d1de694cd7752408142001e82820e77a6666,PodSandboxId:4caa1af881aaba20dc884ea8b5fd8509637fc7f2cd761953a9c346dcbd21457f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727138065365368160,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2jt2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2353cad3-6dc8-4fcd-9f70-755ebdbf3bbb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:514051851b1eb7e6d20d521224f6f47d16d2212f3f25adb982f2f0b76b5de33d,PodSandboxId:6f97cd1430faa49e07c1a96c09253f0f51414112ce649df5a89d4d1c3e58ca6b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727138065197638960,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4ncsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abbb8a3b-f9bc-4ab9-bad1-c72d2075ada4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2b93d287ad3d88e566244261abb290fa350083890cdbb7488f7d3291df3c7c8,PodSandboxId:0a3e21cfa3565f19d05c1e2280190686916aca79c0db8835263ebf43d1ef8324,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727138065227620166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-69257,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18b6a314-9fbf-4bf9-b020-fdba57cffea0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",
\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efe4ee672aebb4a392abf9d1964adcfc6e6d80c2ca31f65ae2c315fcf1cd262f,PodSandboxId:bc75ada1f9eb0a9122e67fdac1ebc7e0d5f20d69c8b9688b7891fad92ff655ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727138065135897380,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4939b00-1847-48a6-85b9-c1d920f5617d,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b45076d45479aab606fd896041c8a4ee90a35db4de143fcaac0c107c5e0635f0,PodSandboxId:d545fb3515575a48577ad755a143f37c7b26cbf0f55860ba27892de17335d3f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727138060409896888,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e2144f47b1e53721f48386515d5232a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:586488001f58fc62297b98b683cb2ccd93906878ca19ba6eb36d3923feb47161,PodSandboxId:ee56de5939931366d737f2a0f1e2d4ac348a468d94a6d444c17a9ad87ea67518,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727138060332276537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be95db3445969924f3fca9820f3018f9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01fd569a601fa6172655a36f03bfec07f73116e1f6606250b55b26a0520da940,PodSandboxId:551e7742dc9b64e9198f2fd16c28e0e3b4312dcdda153acdb55be13b8a6d14e7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727138060365949764,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2eb25a2b0e40642b2d0d09caef02131f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4a4ea183a26a012d11be7880e001424832dcfdcc2ddd5299a6fe25f32de7916,PodSandboxId:93394c6db54349656af40af84e110bf5c50f6bcf150b6cc10281fff859c5eb19,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727138060296516113,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab2acadb533b40fb8b098d0f4fa0603f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:598080e596aa026359cf326ae772de4d9c204504d5666c849fc68597cc8624ff,PodSandboxId:b3b915a71dc30f1c398e490cfcfcc2130ebc6dca2ad801c32e85977173dacbc0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727137738603452263,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-b5dpk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4780e514-d69a-42fe-8f9a-ee4c0fae351c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc559fe548fce4e11effa6148efd01e8ecfcdaff0beb6a7c79ceae55c7c28cec,PodSandboxId:64fa2c553f79a15881910b60c099a7b5ccf7558c46adb88da6ececf26441c080,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727137684453046976,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-69257,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18b6a314-9fbf-4bf9-b020-fdba57cffea0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5058772d1973616e34a3182a09e02e261c4af4678059f218039b9f253ac2867a,PodSandboxId:d9af3918ad28367a0fe5d7d927c8fbdd938be29e29736fcefdc171cdf35100e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727137684379444433,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: f4939b00-1847-48a6-85b9-c1d920f5617d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8abe628fa9ebc296eda69551985040e5281c42345224c3b2e485657f3e6e1a,PodSandboxId:49b89dbc9cb7e874596e3ec13b61a4bbfe160ec9a57b16f834dd206a4c230aa4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727137672441658469,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2jt2x,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 2353cad3-6dc8-4fcd-9f70-755ebdbf3bbb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a80eb915d724ea9baff23a6b7094b8ae35e34bc9e96fabe4a2a99df6aea6dd9,PodSandboxId:6054821997e6ea5c7904abd7a93043e6148c6061a10588e419ab9897b280dfa4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727137672224006592,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4ncsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abbb8a3b-f9bc-4ab9-bad1
-c72d2075ada4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6003f3f1b6367bb96065a6243ff34bb6701840ce67df93e2feb005d548ceaeb,PodSandboxId:5bafd4e2bf71192b2486f96d876f1d809d2f416be191657a451b6fd1191ba0eb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727137661275525959,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be95db3445969924f3fca9820f3018f9,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b98807a030c3691ed3ff8a125e673207c63e9a99c5ea6cb8859026521ca5295a,PodSandboxId:bf6d66ecdda0fefa0099422937d2d8c96f1551db3e26b3cbf78a9bfbeb1a2038,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727137661225219455,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e2144f47b1e53721f48386515d5232a,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1dea2a49f50cd2690cd94ebed4ffb97ab813d4c6fb8ea59dbb02231936efba0,PodSandboxId:cd3ebc37a81fcd1ff272adc74ecfba43b94e16f3c4b7f72d425d74d390cd5ec5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727137661228831367,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab2acadb533b40fb8b098d0f4fa0603f,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33b18f596b4effba4cf1fa17ae441e1bd1ab9d6738cd7313f9ba3b137bfcb237,PodSandboxId:8b9d12c4629274fb95f288e05dabb7354c99502123c3c67cfdc1813bfdafcadc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727137661178372193,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2eb25a2b0e40642b2d0d09caef02131f,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=129bf5d9-08f7-4085-9d84-70ddea2731e8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:36:04 multinode-246036 crio[2766]: time="2024-09-24 00:36:04.251732795Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9ce880c1-b2cd-4272-8470-3b532d903c43 name=/runtime.v1.RuntimeService/Version
	Sep 24 00:36:04 multinode-246036 crio[2766]: time="2024-09-24 00:36:04.251822668Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9ce880c1-b2cd-4272-8470-3b532d903c43 name=/runtime.v1.RuntimeService/Version
	Sep 24 00:36:04 multinode-246036 crio[2766]: time="2024-09-24 00:36:04.252947902Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=224d0a51-f936-4270-a0e2-e74eb7866675 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:36:04 multinode-246036 crio[2766]: time="2024-09-24 00:36:04.253383951Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727138164253359243,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=224d0a51-f936-4270-a0e2-e74eb7866675 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:36:04 multinode-246036 crio[2766]: time="2024-09-24 00:36:04.253895571Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9083972d-f8c6-4315-a0d1-2ae515f60cc2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:36:04 multinode-246036 crio[2766]: time="2024-09-24 00:36:04.253950978Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9083972d-f8c6-4315-a0d1-2ae515f60cc2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:36:04 multinode-246036 crio[2766]: time="2024-09-24 00:36:04.254310985Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9beea755724ad7eb373d54f59989e0f7420cd113bbf8e082fbc7e95c96d37075,PodSandboxId:12914b88b7fef15008a1ccdb4c89d948098ea859396fac42277df56f61610611,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727138098887145679,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-b5dpk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4780e514-d69a-42fe-8f9a-ee4c0fae351c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf020b0b565a293238a03d835cc1d1de694cd7752408142001e82820e77a6666,PodSandboxId:4caa1af881aaba20dc884ea8b5fd8509637fc7f2cd761953a9c346dcbd21457f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727138065365368160,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2jt2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2353cad3-6dc8-4fcd-9f70-755ebdbf3bbb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:514051851b1eb7e6d20d521224f6f47d16d2212f3f25adb982f2f0b76b5de33d,PodSandboxId:6f97cd1430faa49e07c1a96c09253f0f51414112ce649df5a89d4d1c3e58ca6b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727138065197638960,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4ncsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abbb8a3b-f9bc-4ab9-bad1-c72d2075ada4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2b93d287ad3d88e566244261abb290fa350083890cdbb7488f7d3291df3c7c8,PodSandboxId:0a3e21cfa3565f19d05c1e2280190686916aca79c0db8835263ebf43d1ef8324,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727138065227620166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-69257,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18b6a314-9fbf-4bf9-b020-fdba57cffea0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",
\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efe4ee672aebb4a392abf9d1964adcfc6e6d80c2ca31f65ae2c315fcf1cd262f,PodSandboxId:bc75ada1f9eb0a9122e67fdac1ebc7e0d5f20d69c8b9688b7891fad92ff655ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727138065135897380,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4939b00-1847-48a6-85b9-c1d920f5617d,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b45076d45479aab606fd896041c8a4ee90a35db4de143fcaac0c107c5e0635f0,PodSandboxId:d545fb3515575a48577ad755a143f37c7b26cbf0f55860ba27892de17335d3f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727138060409896888,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e2144f47b1e53721f48386515d5232a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:586488001f58fc62297b98b683cb2ccd93906878ca19ba6eb36d3923feb47161,PodSandboxId:ee56de5939931366d737f2a0f1e2d4ac348a468d94a6d444c17a9ad87ea67518,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727138060332276537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be95db3445969924f3fca9820f3018f9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01fd569a601fa6172655a36f03bfec07f73116e1f6606250b55b26a0520da940,PodSandboxId:551e7742dc9b64e9198f2fd16c28e0e3b4312dcdda153acdb55be13b8a6d14e7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727138060365949764,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2eb25a2b0e40642b2d0d09caef02131f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4a4ea183a26a012d11be7880e001424832dcfdcc2ddd5299a6fe25f32de7916,PodSandboxId:93394c6db54349656af40af84e110bf5c50f6bcf150b6cc10281fff859c5eb19,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727138060296516113,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab2acadb533b40fb8b098d0f4fa0603f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:598080e596aa026359cf326ae772de4d9c204504d5666c849fc68597cc8624ff,PodSandboxId:b3b915a71dc30f1c398e490cfcfcc2130ebc6dca2ad801c32e85977173dacbc0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727137738603452263,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-b5dpk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4780e514-d69a-42fe-8f9a-ee4c0fae351c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc559fe548fce4e11effa6148efd01e8ecfcdaff0beb6a7c79ceae55c7c28cec,PodSandboxId:64fa2c553f79a15881910b60c099a7b5ccf7558c46adb88da6ececf26441c080,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727137684453046976,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-69257,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18b6a314-9fbf-4bf9-b020-fdba57cffea0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5058772d1973616e34a3182a09e02e261c4af4678059f218039b9f253ac2867a,PodSandboxId:d9af3918ad28367a0fe5d7d927c8fbdd938be29e29736fcefdc171cdf35100e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727137684379444433,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: f4939b00-1847-48a6-85b9-c1d920f5617d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8abe628fa9ebc296eda69551985040e5281c42345224c3b2e485657f3e6e1a,PodSandboxId:49b89dbc9cb7e874596e3ec13b61a4bbfe160ec9a57b16f834dd206a4c230aa4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727137672441658469,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2jt2x,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 2353cad3-6dc8-4fcd-9f70-755ebdbf3bbb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a80eb915d724ea9baff23a6b7094b8ae35e34bc9e96fabe4a2a99df6aea6dd9,PodSandboxId:6054821997e6ea5c7904abd7a93043e6148c6061a10588e419ab9897b280dfa4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727137672224006592,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4ncsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abbb8a3b-f9bc-4ab9-bad1
-c72d2075ada4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6003f3f1b6367bb96065a6243ff34bb6701840ce67df93e2feb005d548ceaeb,PodSandboxId:5bafd4e2bf71192b2486f96d876f1d809d2f416be191657a451b6fd1191ba0eb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727137661275525959,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be95db3445969924f3fca9820f3018f9,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b98807a030c3691ed3ff8a125e673207c63e9a99c5ea6cb8859026521ca5295a,PodSandboxId:bf6d66ecdda0fefa0099422937d2d8c96f1551db3e26b3cbf78a9bfbeb1a2038,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727137661225219455,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e2144f47b1e53721f48386515d5232a,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1dea2a49f50cd2690cd94ebed4ffb97ab813d4c6fb8ea59dbb02231936efba0,PodSandboxId:cd3ebc37a81fcd1ff272adc74ecfba43b94e16f3c4b7f72d425d74d390cd5ec5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727137661228831367,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab2acadb533b40fb8b098d0f4fa0603f,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33b18f596b4effba4cf1fa17ae441e1bd1ab9d6738cd7313f9ba3b137bfcb237,PodSandboxId:8b9d12c4629274fb95f288e05dabb7354c99502123c3c67cfdc1813bfdafcadc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727137661178372193,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2eb25a2b0e40642b2d0d09caef02131f,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9083972d-f8c6-4315-a0d1-2ae515f60cc2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:36:04 multinode-246036 crio[2766]: time="2024-09-24 00:36:04.294221774Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8fc554f9-a2e5-48d7-877e-c019cd009e6d name=/runtime.v1.RuntimeService/Version
	Sep 24 00:36:04 multinode-246036 crio[2766]: time="2024-09-24 00:36:04.294296388Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8fc554f9-a2e5-48d7-877e-c019cd009e6d name=/runtime.v1.RuntimeService/Version
	Sep 24 00:36:04 multinode-246036 crio[2766]: time="2024-09-24 00:36:04.301025939Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c25d1323-73ce-4542-8b16-ad46d6ca507f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:36:04 multinode-246036 crio[2766]: time="2024-09-24 00:36:04.301432033Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727138164301406379,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c25d1323-73ce-4542-8b16-ad46d6ca507f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:36:04 multinode-246036 crio[2766]: time="2024-09-24 00:36:04.302085966Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=019a166d-aec0-4129-90f3-358366dc778e name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:36:04 multinode-246036 crio[2766]: time="2024-09-24 00:36:04.302157683Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=019a166d-aec0-4129-90f3-358366dc778e name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:36:04 multinode-246036 crio[2766]: time="2024-09-24 00:36:04.302532453Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9beea755724ad7eb373d54f59989e0f7420cd113bbf8e082fbc7e95c96d37075,PodSandboxId:12914b88b7fef15008a1ccdb4c89d948098ea859396fac42277df56f61610611,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727138098887145679,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-b5dpk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4780e514-d69a-42fe-8f9a-ee4c0fae351c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf020b0b565a293238a03d835cc1d1de694cd7752408142001e82820e77a6666,PodSandboxId:4caa1af881aaba20dc884ea8b5fd8509637fc7f2cd761953a9c346dcbd21457f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727138065365368160,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2jt2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2353cad3-6dc8-4fcd-9f70-755ebdbf3bbb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:514051851b1eb7e6d20d521224f6f47d16d2212f3f25adb982f2f0b76b5de33d,PodSandboxId:6f97cd1430faa49e07c1a96c09253f0f51414112ce649df5a89d4d1c3e58ca6b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727138065197638960,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4ncsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abbb8a3b-f9bc-4ab9-bad1-c72d2075ada4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2b93d287ad3d88e566244261abb290fa350083890cdbb7488f7d3291df3c7c8,PodSandboxId:0a3e21cfa3565f19d05c1e2280190686916aca79c0db8835263ebf43d1ef8324,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727138065227620166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-69257,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18b6a314-9fbf-4bf9-b020-fdba57cffea0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",
\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efe4ee672aebb4a392abf9d1964adcfc6e6d80c2ca31f65ae2c315fcf1cd262f,PodSandboxId:bc75ada1f9eb0a9122e67fdac1ebc7e0d5f20d69c8b9688b7891fad92ff655ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727138065135897380,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4939b00-1847-48a6-85b9-c1d920f5617d,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b45076d45479aab606fd896041c8a4ee90a35db4de143fcaac0c107c5e0635f0,PodSandboxId:d545fb3515575a48577ad755a143f37c7b26cbf0f55860ba27892de17335d3f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727138060409896888,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e2144f47b1e53721f48386515d5232a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:586488001f58fc62297b98b683cb2ccd93906878ca19ba6eb36d3923feb47161,PodSandboxId:ee56de5939931366d737f2a0f1e2d4ac348a468d94a6d444c17a9ad87ea67518,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727138060332276537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be95db3445969924f3fca9820f3018f9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01fd569a601fa6172655a36f03bfec07f73116e1f6606250b55b26a0520da940,PodSandboxId:551e7742dc9b64e9198f2fd16c28e0e3b4312dcdda153acdb55be13b8a6d14e7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727138060365949764,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2eb25a2b0e40642b2d0d09caef02131f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4a4ea183a26a012d11be7880e001424832dcfdcc2ddd5299a6fe25f32de7916,PodSandboxId:93394c6db54349656af40af84e110bf5c50f6bcf150b6cc10281fff859c5eb19,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727138060296516113,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab2acadb533b40fb8b098d0f4fa0603f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:598080e596aa026359cf326ae772de4d9c204504d5666c849fc68597cc8624ff,PodSandboxId:b3b915a71dc30f1c398e490cfcfcc2130ebc6dca2ad801c32e85977173dacbc0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727137738603452263,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-b5dpk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4780e514-d69a-42fe-8f9a-ee4c0fae351c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc559fe548fce4e11effa6148efd01e8ecfcdaff0beb6a7c79ceae55c7c28cec,PodSandboxId:64fa2c553f79a15881910b60c099a7b5ccf7558c46adb88da6ececf26441c080,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727137684453046976,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-69257,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18b6a314-9fbf-4bf9-b020-fdba57cffea0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5058772d1973616e34a3182a09e02e261c4af4678059f218039b9f253ac2867a,PodSandboxId:d9af3918ad28367a0fe5d7d927c8fbdd938be29e29736fcefdc171cdf35100e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727137684379444433,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: f4939b00-1847-48a6-85b9-c1d920f5617d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8abe628fa9ebc296eda69551985040e5281c42345224c3b2e485657f3e6e1a,PodSandboxId:49b89dbc9cb7e874596e3ec13b61a4bbfe160ec9a57b16f834dd206a4c230aa4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727137672441658469,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2jt2x,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 2353cad3-6dc8-4fcd-9f70-755ebdbf3bbb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a80eb915d724ea9baff23a6b7094b8ae35e34bc9e96fabe4a2a99df6aea6dd9,PodSandboxId:6054821997e6ea5c7904abd7a93043e6148c6061a10588e419ab9897b280dfa4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727137672224006592,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4ncsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abbb8a3b-f9bc-4ab9-bad1
-c72d2075ada4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6003f3f1b6367bb96065a6243ff34bb6701840ce67df93e2feb005d548ceaeb,PodSandboxId:5bafd4e2bf71192b2486f96d876f1d809d2f416be191657a451b6fd1191ba0eb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727137661275525959,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be95db3445969924f3fca9820f3018f9,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b98807a030c3691ed3ff8a125e673207c63e9a99c5ea6cb8859026521ca5295a,PodSandboxId:bf6d66ecdda0fefa0099422937d2d8c96f1551db3e26b3cbf78a9bfbeb1a2038,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727137661225219455,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e2144f47b1e53721f48386515d5232a,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1dea2a49f50cd2690cd94ebed4ffb97ab813d4c6fb8ea59dbb02231936efba0,PodSandboxId:cd3ebc37a81fcd1ff272adc74ecfba43b94e16f3c4b7f72d425d74d390cd5ec5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727137661228831367,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab2acadb533b40fb8b098d0f4fa0603f,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33b18f596b4effba4cf1fa17ae441e1bd1ab9d6738cd7313f9ba3b137bfcb237,PodSandboxId:8b9d12c4629274fb95f288e05dabb7354c99502123c3c67cfdc1813bfdafcadc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727137661178372193,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2eb25a2b0e40642b2d0d09caef02131f,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=019a166d-aec0-4129-90f3-358366dc778e name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:36:04 multinode-246036 crio[2766]: time="2024-09-24 00:36:04.341961991Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c48faca4-a648-4bd6-8d0b-8c28ab5b4876 name=/runtime.v1.RuntimeService/Version
	Sep 24 00:36:04 multinode-246036 crio[2766]: time="2024-09-24 00:36:04.342052060Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c48faca4-a648-4bd6-8d0b-8c28ab5b4876 name=/runtime.v1.RuntimeService/Version
	Sep 24 00:36:04 multinode-246036 crio[2766]: time="2024-09-24 00:36:04.343321851Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6a3933a0-be01-40d3-8c39-5975e4537840 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:36:04 multinode-246036 crio[2766]: time="2024-09-24 00:36:04.343856110Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727138164343827929,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6a3933a0-be01-40d3-8c39-5975e4537840 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:36:04 multinode-246036 crio[2766]: time="2024-09-24 00:36:04.344403879Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7cbb0ee1-2e53-4f0c-9fbc-2f935fb20dc9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:36:04 multinode-246036 crio[2766]: time="2024-09-24 00:36:04.344470964Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7cbb0ee1-2e53-4f0c-9fbc-2f935fb20dc9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:36:04 multinode-246036 crio[2766]: time="2024-09-24 00:36:04.344900498Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9beea755724ad7eb373d54f59989e0f7420cd113bbf8e082fbc7e95c96d37075,PodSandboxId:12914b88b7fef15008a1ccdb4c89d948098ea859396fac42277df56f61610611,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727138098887145679,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-b5dpk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4780e514-d69a-42fe-8f9a-ee4c0fae351c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf020b0b565a293238a03d835cc1d1de694cd7752408142001e82820e77a6666,PodSandboxId:4caa1af881aaba20dc884ea8b5fd8509637fc7f2cd761953a9c346dcbd21457f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727138065365368160,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2jt2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2353cad3-6dc8-4fcd-9f70-755ebdbf3bbb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:514051851b1eb7e6d20d521224f6f47d16d2212f3f25adb982f2f0b76b5de33d,PodSandboxId:6f97cd1430faa49e07c1a96c09253f0f51414112ce649df5a89d4d1c3e58ca6b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727138065197638960,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4ncsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abbb8a3b-f9bc-4ab9-bad1-c72d2075ada4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2b93d287ad3d88e566244261abb290fa350083890cdbb7488f7d3291df3c7c8,PodSandboxId:0a3e21cfa3565f19d05c1e2280190686916aca79c0db8835263ebf43d1ef8324,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727138065227620166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-69257,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18b6a314-9fbf-4bf9-b020-fdba57cffea0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",
\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efe4ee672aebb4a392abf9d1964adcfc6e6d80c2ca31f65ae2c315fcf1cd262f,PodSandboxId:bc75ada1f9eb0a9122e67fdac1ebc7e0d5f20d69c8b9688b7891fad92ff655ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727138065135897380,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4939b00-1847-48a6-85b9-c1d920f5617d,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b45076d45479aab606fd896041c8a4ee90a35db4de143fcaac0c107c5e0635f0,PodSandboxId:d545fb3515575a48577ad755a143f37c7b26cbf0f55860ba27892de17335d3f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727138060409896888,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e2144f47b1e53721f48386515d5232a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:586488001f58fc62297b98b683cb2ccd93906878ca19ba6eb36d3923feb47161,PodSandboxId:ee56de5939931366d737f2a0f1e2d4ac348a468d94a6d444c17a9ad87ea67518,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727138060332276537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be95db3445969924f3fca9820f3018f9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01fd569a601fa6172655a36f03bfec07f73116e1f6606250b55b26a0520da940,PodSandboxId:551e7742dc9b64e9198f2fd16c28e0e3b4312dcdda153acdb55be13b8a6d14e7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727138060365949764,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2eb25a2b0e40642b2d0d09caef02131f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4a4ea183a26a012d11be7880e001424832dcfdcc2ddd5299a6fe25f32de7916,PodSandboxId:93394c6db54349656af40af84e110bf5c50f6bcf150b6cc10281fff859c5eb19,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727138060296516113,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab2acadb533b40fb8b098d0f4fa0603f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:598080e596aa026359cf326ae772de4d9c204504d5666c849fc68597cc8624ff,PodSandboxId:b3b915a71dc30f1c398e490cfcfcc2130ebc6dca2ad801c32e85977173dacbc0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727137738603452263,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-b5dpk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4780e514-d69a-42fe-8f9a-ee4c0fae351c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc559fe548fce4e11effa6148efd01e8ecfcdaff0beb6a7c79ceae55c7c28cec,PodSandboxId:64fa2c553f79a15881910b60c099a7b5ccf7558c46adb88da6ececf26441c080,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727137684453046976,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-69257,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18b6a314-9fbf-4bf9-b020-fdba57cffea0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5058772d1973616e34a3182a09e02e261c4af4678059f218039b9f253ac2867a,PodSandboxId:d9af3918ad28367a0fe5d7d927c8fbdd938be29e29736fcefdc171cdf35100e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727137684379444433,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: f4939b00-1847-48a6-85b9-c1d920f5617d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8abe628fa9ebc296eda69551985040e5281c42345224c3b2e485657f3e6e1a,PodSandboxId:49b89dbc9cb7e874596e3ec13b61a4bbfe160ec9a57b16f834dd206a4c230aa4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727137672441658469,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2jt2x,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 2353cad3-6dc8-4fcd-9f70-755ebdbf3bbb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a80eb915d724ea9baff23a6b7094b8ae35e34bc9e96fabe4a2a99df6aea6dd9,PodSandboxId:6054821997e6ea5c7904abd7a93043e6148c6061a10588e419ab9897b280dfa4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727137672224006592,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4ncsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abbb8a3b-f9bc-4ab9-bad1
-c72d2075ada4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6003f3f1b6367bb96065a6243ff34bb6701840ce67df93e2feb005d548ceaeb,PodSandboxId:5bafd4e2bf71192b2486f96d876f1d809d2f416be191657a451b6fd1191ba0eb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727137661275525959,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be95db3445969924f3fca9820f3018f9,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b98807a030c3691ed3ff8a125e673207c63e9a99c5ea6cb8859026521ca5295a,PodSandboxId:bf6d66ecdda0fefa0099422937d2d8c96f1551db3e26b3cbf78a9bfbeb1a2038,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727137661225219455,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e2144f47b1e53721f48386515d5232a,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1dea2a49f50cd2690cd94ebed4ffb97ab813d4c6fb8ea59dbb02231936efba0,PodSandboxId:cd3ebc37a81fcd1ff272adc74ecfba43b94e16f3c4b7f72d425d74d390cd5ec5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727137661228831367,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab2acadb533b40fb8b098d0f4fa0603f,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33b18f596b4effba4cf1fa17ae441e1bd1ab9d6738cd7313f9ba3b137bfcb237,PodSandboxId:8b9d12c4629274fb95f288e05dabb7354c99502123c3c67cfdc1813bfdafcadc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727137661178372193,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2eb25a2b0e40642b2d0d09caef02131f,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7cbb0ee1-2e53-4f0c-9fbc-2f935fb20dc9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	9beea755724ad       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   12914b88b7fef       busybox-7dff88458-b5dpk
	bf020b0b565a2       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      About a minute ago   Running             kindnet-cni               1                   4caa1af881aab       kindnet-2jt2x
	c2b93d287ad3d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      About a minute ago   Running             coredns                   1                   0a3e21cfa3565       coredns-7c65d6cfc9-69257
	514051851b1eb       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      About a minute ago   Running             kube-proxy                1                   6f97cd1430faa       kube-proxy-4ncsm
	efe4ee672aebb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   bc75ada1f9eb0       storage-provisioner
	b45076d45479a       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      About a minute ago   Running             etcd                      1                   d545fb3515575       etcd-multinode-246036
	01fd569a601fa       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      About a minute ago   Running             kube-apiserver            1                   551e7742dc9b6       kube-apiserver-multinode-246036
	586488001f58f       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      About a minute ago   Running             kube-scheduler            1                   ee56de5939931       kube-scheduler-multinode-246036
	b4a4ea183a26a       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      About a minute ago   Running             kube-controller-manager   1                   93394c6db5434       kube-controller-manager-multinode-246036
	598080e596aa0       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   b3b915a71dc30       busybox-7dff88458-b5dpk
	bc559fe548fce       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      7 minutes ago        Exited              coredns                   0                   64fa2c553f79a       coredns-7c65d6cfc9-69257
	5058772d19736       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago        Exited              storage-provisioner       0                   d9af3918ad283       storage-provisioner
	5b8abe628fa9e       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      8 minutes ago        Exited              kindnet-cni               0                   49b89dbc9cb7e       kindnet-2jt2x
	4a80eb915d724       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      8 minutes ago        Exited              kube-proxy                0                   6054821997e6e       kube-proxy-4ncsm
	a6003f3f1b636       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      8 minutes ago        Exited              kube-scheduler            0                   5bafd4e2bf711       kube-scheduler-multinode-246036
	f1dea2a49f50c       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      8 minutes ago        Exited              kube-controller-manager   0                   cd3ebc37a81fc       kube-controller-manager-multinode-246036
	b98807a030c36       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      8 minutes ago        Exited              etcd                      0                   bf6d66ecdda0f       etcd-multinode-246036
	33b18f596b4ef       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      8 minutes ago        Exited              kube-apiserver            0                   8b9d12c462927       kube-apiserver-multinode-246036
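
The listing above is the CRI view on the primary node after the restart: every control-plane container now has a Running attempt 1 alongside its Exited attempt 0. A roughly equivalent listing can be pulled by hand; the sketch below assumes crictl is present in the guest image (the profile name is taken from this log):

  minikube -p multinode-246036 ssh "sudo crictl ps -a"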
	
	
	==> coredns [bc559fe548fce4e11effa6148efd01e8ecfcdaff0beb6a7c79ceae55c7c28cec] <==
	[INFO] 10.244.0.3:41552 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002235218s
	[INFO] 10.244.0.3:40459 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00009399s
	[INFO] 10.244.0.3:55802 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000081863s
	[INFO] 10.244.0.3:47094 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001450217s
	[INFO] 10.244.0.3:59499 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000074715s
	[INFO] 10.244.0.3:51474 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068892s
	[INFO] 10.244.0.3:58549 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000196573s
	[INFO] 10.244.1.2:41654 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169624s
	[INFO] 10.244.1.2:46021 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104158s
	[INFO] 10.244.1.2:33984 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097059s
	[INFO] 10.244.1.2:53601 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000145137s
	[INFO] 10.244.0.3:56408 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000442975s
	[INFO] 10.244.0.3:51206 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105408s
	[INFO] 10.244.0.3:40493 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068583s
	[INFO] 10.244.0.3:38595 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063891s
	[INFO] 10.244.1.2:50852 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000156383s
	[INFO] 10.244.1.2:44648 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00028132s
	[INFO] 10.244.1.2:42989 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00017178s
	[INFO] 10.244.1.2:48496 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000120554s
	[INFO] 10.244.0.3:39858 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000204882s
	[INFO] 10.244.0.3:49340 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000090419s
	[INFO] 10.244.0.3:34926 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000078461s
	[INFO] 10.244.0.3:39068 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000065707s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c2b93d287ad3d88e566244261abb290fa350083890cdbb7488f7d3291df3c7c8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:41605 - 29420 "HINFO IN 1477722195987132737.4247350318425163224. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.007398848s
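
The exited coredns instance shut down cleanly on SIGTERM and its replacement answered the startup HINFO self-check, so in-cluster DNS recovered after the restart. A hedged way to re-verify resolution from inside the cluster, reusing the busybox image already present in this run (the pod name is arbitrary):

  kubectl --context multinode-246036 run dns-check --rm -it --restart=Never --image=gcr.io/k8s-minikube/busybox -- nslookup kubernetes.default.svc.cluster.local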
	
	
	==> describe nodes <==
	Name:               multinode-246036
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-246036
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c
	                    minikube.k8s.io/name=multinode-246036
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_24T00_27_47_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 00:27:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-246036
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 00:35:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 00:34:23 +0000   Tue, 24 Sep 2024 00:27:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 00:34:23 +0000   Tue, 24 Sep 2024 00:27:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 00:34:23 +0000   Tue, 24 Sep 2024 00:27:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 00:34:23 +0000   Tue, 24 Sep 2024 00:28:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.199
	  Hostname:    multinode-246036
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4f008ce8a34347c893fac80674868796
	  System UUID:                4f008ce8-a343-47c8-93fa-c80674868796
	  Boot ID:                    5fb8e198-b346-48f3-91a6-24e72e61aa1d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-b5dpk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m9s
	  kube-system                 coredns-7c65d6cfc9-69257                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m13s
	  kube-system                 etcd-multinode-246036                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m18s
	  kube-system                 kindnet-2jt2x                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m13s
	  kube-system                 kube-apiserver-multinode-246036             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m18s
	  kube-system                 kube-controller-manager-multinode-246036    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m19s
	  kube-system                 kube-proxy-4ncsm                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m13s
	  kube-system                 kube-scheduler-multinode-246036             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m19s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 8m12s                kube-proxy       
	  Normal  Starting                 98s                  kube-proxy       
	  Normal  NodeHasSufficientPID     8m18s                kubelet          Node multinode-246036 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m18s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m18s                kubelet          Node multinode-246036 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m18s                kubelet          Node multinode-246036 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 8m18s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m14s                node-controller  Node multinode-246036 event: Registered Node multinode-246036 in Controller
	  Normal  NodeReady                8m1s                 kubelet          Node multinode-246036 status is now: NodeReady
	  Normal  Starting                 105s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  105s (x8 over 105s)  kubelet          Node multinode-246036 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    105s (x8 over 105s)  kubelet          Node multinode-246036 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     105s (x7 over 105s)  kubelet          Node multinode-246036 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  105s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           97s                  node-controller  Node multinode-246036 event: Registered Node multinode-246036 in Controller
	
	
	Name:               multinode-246036-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-246036-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c
	                    minikube.k8s.io/name=multinode-246036
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_24T00_35_03_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 00:35:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-246036-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 00:36:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 00:35:33 +0000   Tue, 24 Sep 2024 00:35:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 00:35:33 +0000   Tue, 24 Sep 2024 00:35:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 00:35:33 +0000   Tue, 24 Sep 2024 00:35:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 00:35:33 +0000   Tue, 24 Sep 2024 00:35:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.150
	  Hostname:    multinode-246036-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b8e4e9b6aac945b0a0a1e5898db0422f
	  System UUID:                b8e4e9b6-aac9-45b0-a0a1-e5898db0422f
	  Boot ID:                    cda88509-cb60-4e5f-aa71-84ac16bff177
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-c9kq6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kindnet-j9klb              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m32s
	  kube-system                 kube-proxy-lwpzt           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 57s                    kube-proxy       
	  Normal  Starting                 7m27s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  7m32s (x2 over 7m33s)  kubelet          Node multinode-246036-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m32s (x2 over 7m33s)  kubelet          Node multinode-246036-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m32s (x2 over 7m33s)  kubelet          Node multinode-246036-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m32s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m11s                  kubelet          Node multinode-246036-m02 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  62s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  61s (x2 over 62s)      kubelet          Node multinode-246036-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x2 over 62s)      kubelet          Node multinode-246036-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x2 over 62s)      kubelet          Node multinode-246036-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           57s                    node-controller  Node multinode-246036-m02 event: Registered Node multinode-246036-m02 in Controller
	  Normal  NodeReady                42s                    kubelet          Node multinode-246036-m02 status is now: NodeReady
	
	
	Name:               multinode-246036-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-246036-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c
	                    minikube.k8s.io/name=multinode-246036
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_24T00_35_42_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 00:35:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-246036-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 00:36:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 00:36:01 +0000   Tue, 24 Sep 2024 00:35:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 00:36:01 +0000   Tue, 24 Sep 2024 00:35:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 00:36:01 +0000   Tue, 24 Sep 2024 00:35:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 00:36:01 +0000   Tue, 24 Sep 2024 00:36:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.185
	  Hostname:    multinode-246036-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e280f52d76ea4fe19201889a565482b1
	  System UUID:                e280f52d-76ea-4fe1-9201-889a565482b1
	  Boot ID:                    9de9280d-5080-430c-9dca-0dfb04296ca8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-ws9d8       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m38s
	  kube-system                 kube-proxy-59frq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From           Message
	  ----    ------                   ----                   ----           -------
	  Normal  Starting                 5m44s                  kube-proxy     
	  Normal  Starting                 6m33s                  kube-proxy     
	  Normal  Starting                 18s                    kube-proxy     
	  Normal  NodeHasSufficientMemory  6m38s (x2 over 6m38s)  kubelet        Node multinode-246036-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m38s (x2 over 6m38s)  kubelet        Node multinode-246036-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m38s (x2 over 6m38s)  kubelet        Node multinode-246036-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m38s                  kubelet        Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m18s                  kubelet        Node multinode-246036-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m49s (x2 over 5m49s)  kubelet        Node multinode-246036-m03 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    5m49s (x2 over 5m49s)  kubelet        Node multinode-246036-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  5m49s                  kubelet        Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m49s (x2 over 5m49s)  kubelet        Node multinode-246036-m03 status is now: NodeHasSufficientMemory
	  Normal  Starting                 5m49s                  kubelet        Starting kubelet.
	  Normal  NodeReady                5m29s                  kubelet        Node multinode-246036-m03 status is now: NodeReady
	  Normal  CIDRAssignmentFailed     23s                    cidrAllocator  Node multinode-246036-m03 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  23s (x2 over 23s)      kubelet        Node multinode-246036-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x2 over 23s)      kubelet        Node multinode-246036-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x2 over 23s)      kubelet        Node multinode-246036-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                    kubelet        Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                     kubelet        Node multinode-246036-m03 status is now: NodeReady
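
Note the CIDRAssignmentFailed event on multinode-246036-m03: after the cluster restart the node re-registered and now carries PodCIDR 10.244.2.0/24, whereas the pre-restart kindnet log further below still routes it as 10.244.3.0/24. The current assignments can be confirmed with plain kubectl (context name taken from the profile):

  kubectl --context multinode-246036 get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR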
	
	
	==> dmesg <==
	[  +0.053107] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.163046] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.145219] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.269851] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +3.822410] systemd-fstab-generator[746]: Ignoring "noauto" option for root device
	[  +3.480813] systemd-fstab-generator[877]: Ignoring "noauto" option for root device
	[  +0.061509] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.498670] systemd-fstab-generator[1212]: Ignoring "noauto" option for root device
	[  +0.083407] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.548525] systemd-fstab-generator[1310]: Ignoring "noauto" option for root device
	[  +1.000132] kauditd_printk_skb: 46 callbacks suppressed
	[Sep24 00:28] kauditd_printk_skb: 41 callbacks suppressed
	[ +51.195758] kauditd_printk_skb: 12 callbacks suppressed
	[Sep24 00:34] systemd-fstab-generator[2638]: Ignoring "noauto" option for root device
	[  +0.162179] systemd-fstab-generator[2650]: Ignoring "noauto" option for root device
	[  +0.163958] systemd-fstab-generator[2664]: Ignoring "noauto" option for root device
	[  +0.139283] systemd-fstab-generator[2676]: Ignoring "noauto" option for root device
	[  +0.269430] systemd-fstab-generator[2705]: Ignoring "noauto" option for root device
	[  +4.392540] systemd-fstab-generator[2851]: Ignoring "noauto" option for root device
	[  +0.078342] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.968583] systemd-fstab-generator[2972]: Ignoring "noauto" option for root device
	[  +5.652820] kauditd_printk_skb: 74 callbacks suppressed
	[ +13.833181] systemd-fstab-generator[3794]: Ignoring "noauto" option for root device
	[  +0.096573] kauditd_printk_skb: 34 callbacks suppressed
	[ +19.838501] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [b45076d45479aab606fd896041c8a4ee90a35db4de143fcaac0c107c5e0635f0] <==
	{"level":"info","ts":"2024-09-24T00:34:21.028532Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.199:2380"}
	{"level":"info","ts":"2024-09-24T00:34:21.032749Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.199:2380"}
	{"level":"info","ts":"2024-09-24T00:34:21.034930Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"adf16ee9d395f7b5","initial-advertise-peer-urls":["https://192.168.39.199:2380"],"listen-peer-urls":["https://192.168.39.199:2380"],"advertise-client-urls":["https://192.168.39.199:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.199:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-24T00:34:21.035009Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-24T00:34:22.410610Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"adf16ee9d395f7b5 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-24T00:34:22.410774Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"adf16ee9d395f7b5 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-24T00:34:22.410840Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"adf16ee9d395f7b5 received MsgPreVoteResp from adf16ee9d395f7b5 at term 2"}
	{"level":"info","ts":"2024-09-24T00:34:22.410887Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"adf16ee9d395f7b5 became candidate at term 3"}
	{"level":"info","ts":"2024-09-24T00:34:22.410924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"adf16ee9d395f7b5 received MsgVoteResp from adf16ee9d395f7b5 at term 3"}
	{"level":"info","ts":"2024-09-24T00:34:22.410951Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"adf16ee9d395f7b5 became leader at term 3"}
	{"level":"info","ts":"2024-09-24T00:34:22.410976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: adf16ee9d395f7b5 elected leader adf16ee9d395f7b5 at term 3"}
	{"level":"info","ts":"2024-09-24T00:34:22.416038Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"adf16ee9d395f7b5","local-member-attributes":"{Name:multinode-246036 ClientURLs:[https://192.168.39.199:2379]}","request-path":"/0/members/adf16ee9d395f7b5/attributes","cluster-id":"beb078c6af941210","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-24T00:34:22.416065Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-24T00:34:22.416298Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-24T00:34:22.416334Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-24T00:34:22.416083Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-24T00:34:22.417341Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-24T00:34:22.417376Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-24T00:34:22.418243Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.199:2379"}
	{"level":"info","ts":"2024-09-24T00:34:22.418990Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-24T00:35:07.355140Z","caller":"traceutil/trace.go:171","msg":"trace[1395870307] transaction","detail":"{read_only:false; response_revision:1025; number_of_response:1; }","duration":"194.280524ms","start":"2024-09-24T00:35:07.160484Z","end":"2024-09-24T00:35:07.354764Z","steps":["trace[1395870307] 'process raft request'  (duration: 194.106094ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-24T00:35:11.041014Z","caller":"traceutil/trace.go:171","msg":"trace[117307701] linearizableReadLoop","detail":"{readStateIndex:1132; appliedIndex:1131; }","duration":"108.556971ms","start":"2024-09-24T00:35:10.932443Z","end":"2024-09-24T00:35:11.041000Z","steps":["trace[117307701] 'read index received'  (duration: 108.387797ms)","trace[117307701] 'applied index is now lower than readState.Index'  (duration: 168.689µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-24T00:35:11.041229Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.724982ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-246036-m02\" ","response":"range_response_count:1 size:3119"}
	{"level":"info","ts":"2024-09-24T00:35:11.041298Z","caller":"traceutil/trace.go:171","msg":"trace[555372187] range","detail":"{range_begin:/registry/minions/multinode-246036-m02; range_end:; response_count:1; response_revision:1033; }","duration":"108.86359ms","start":"2024-09-24T00:35:10.932426Z","end":"2024-09-24T00:35:11.041290Z","steps":["trace[555372187] 'agreement among raft nodes before linearized reading'  (duration: 108.654954ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-24T00:35:11.041444Z","caller":"traceutil/trace.go:171","msg":"trace[1060207262] transaction","detail":"{read_only:false; response_revision:1033; number_of_response:1; }","duration":"132.086697ms","start":"2024-09-24T00:35:10.909345Z","end":"2024-09-24T00:35:11.041432Z","steps":["trace[1060207262] 'process raft request'  (duration: 131.526473ms)"],"step_count":1}
	
	
	==> etcd [b98807a030c3691ed3ff8a125e673207c63e9a99c5ea6cb8859026521ca5295a] <==
	{"level":"info","ts":"2024-09-24T00:28:41.340936Z","caller":"traceutil/trace.go:171","msg":"trace[643533056] linearizableReadLoop","detail":"{readStateIndex:495; appliedIndex:494; }","duration":"144.879857ms","start":"2024-09-24T00:28:41.196046Z","end":"2024-09-24T00:28:41.340926Z","steps":["trace[643533056] 'read index received'  (duration: 8.883044ms)","trace[643533056] 'applied index is now lower than readState.Index'  (duration: 135.995852ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-24T00:28:41.341015Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.959394ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-24T00:28:41.341048Z","caller":"traceutil/trace.go:171","msg":"trace[170210422] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:476; }","duration":"145.000705ms","start":"2024-09-24T00:28:41.196042Z","end":"2024-09-24T00:28:41.341043Z","steps":["trace[170210422] 'agreement among raft nodes before linearized reading'  (duration: 144.939257ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-24T00:29:26.457500Z","caller":"traceutil/trace.go:171","msg":"trace[947301684] linearizableReadLoop","detail":"{readStateIndex:597; appliedIndex:596; }","duration":"190.367914ms","start":"2024-09-24T00:29:26.267067Z","end":"2024-09-24T00:29:26.457435Z","steps":["trace[947301684] 'read index received'  (duration: 106.192873ms)","trace[947301684] 'applied index is now lower than readState.Index'  (duration: 84.173833ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-24T00:29:26.457655Z","caller":"traceutil/trace.go:171","msg":"trace[1669373303] transaction","detail":"{read_only:false; response_revision:568; number_of_response:1; }","duration":"207.152138ms","start":"2024-09-24T00:29:26.250496Z","end":"2024-09-24T00:29:26.457648Z","steps":["trace[1669373303] 'process raft request'  (duration: 122.801076ms)","trace[1669373303] 'compare'  (duration: 83.969097ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-24T00:29:26.457947Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"190.835448ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-246036-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-24T00:29:26.458005Z","caller":"traceutil/trace.go:171","msg":"trace[1951949096] range","detail":"{range_begin:/registry/minions/multinode-246036-m03; range_end:; response_count:0; response_revision:568; }","duration":"190.947215ms","start":"2024-09-24T00:29:26.267051Z","end":"2024-09-24T00:29:26.457998Z","steps":["trace[1951949096] 'agreement among raft nodes before linearized reading'  (duration: 190.765411ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-24T00:29:26.459161Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"188.845962ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-09-24T00:29:26.459644Z","caller":"traceutil/trace.go:171","msg":"trace[513377949] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:0; response_revision:569; }","duration":"189.343952ms","start":"2024-09-24T00:29:26.270291Z","end":"2024-09-24T00:29:26.459635Z","steps":["trace[513377949] 'agreement among raft nodes before linearized reading'  (duration: 188.793829ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-24T00:29:33.480014Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"197.709315ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-246036-m03\" ","response":"range_response_count:1 size:2894"}
	{"level":"info","ts":"2024-09-24T00:29:33.480274Z","caller":"traceutil/trace.go:171","msg":"trace[418634358] range","detail":"{range_begin:/registry/minions/multinode-246036-m03; range_end:; response_count:1; response_revision:607; }","duration":"197.94857ms","start":"2024-09-24T00:29:33.282281Z","end":"2024-09-24T00:29:33.480229Z","steps":["trace[418634358] 'range keys from in-memory index tree'  (duration: 197.574146ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-24T00:30:20.216772Z","caller":"traceutil/trace.go:171","msg":"trace[1525947365] linearizableReadLoop","detail":"{readStateIndex:736; appliedIndex:735; }","duration":"115.610017ms","start":"2024-09-24T00:30:20.101148Z","end":"2024-09-24T00:30:20.216758Z","steps":["trace[1525947365] 'read index received'  (duration: 115.43229ms)","trace[1525947365] 'applied index is now lower than readState.Index'  (duration: 177.024µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-24T00:30:20.216949Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.77156ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-246036-m03\" ","response":"range_response_count:1 size:3119"}
	{"level":"info","ts":"2024-09-24T00:30:20.216973Z","caller":"traceutil/trace.go:171","msg":"trace[974474277] range","detail":"{range_begin:/registry/minions/multinode-246036-m03; range_end:; response_count:1; response_revision:693; }","duration":"115.839059ms","start":"2024-09-24T00:30:20.101128Z","end":"2024-09-24T00:30:20.216967Z","steps":["trace[974474277] 'agreement among raft nodes before linearized reading'  (duration: 115.713147ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-24T00:30:20.217141Z","caller":"traceutil/trace.go:171","msg":"trace[1236333319] transaction","detail":"{read_only:false; response_revision:693; number_of_response:1; }","duration":"135.277746ms","start":"2024-09-24T00:30:20.081852Z","end":"2024-09-24T00:30:20.217130Z","steps":["trace[1236333319] 'process raft request'  (duration: 134.764376ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-24T00:32:41.014660Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-24T00:32:41.014997Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-246036","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.199:2380"],"advertise-client-urls":["https://192.168.39.199:2379"]}
	{"level":"warn","ts":"2024-09-24T00:32:41.015138Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-24T00:32:41.015238Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-24T00:32:41.075384Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.199:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-24T00:32:41.075437Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.199:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-24T00:32:41.075500Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"adf16ee9d395f7b5","current-leader-member-id":"adf16ee9d395f7b5"}
	{"level":"info","ts":"2024-09-24T00:32:41.080107Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.199:2380"}
	{"level":"info","ts":"2024-09-24T00:32:41.080202Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.199:2380"}
	{"level":"info","ts":"2024-09-24T00:32:41.080225Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-246036","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.199:2380"],"advertise-client-urls":["https://192.168.39.199:2379"]}
	
	
	==> kernel <==
	 00:36:04 up 8 min,  0 users,  load average: 0.51, 0.28, 0.14
	Linux multinode-246036 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [5b8abe628fa9ebc296eda69551985040e5281c42345224c3b2e485657f3e6e1a] <==
	I0924 00:31:53.403635       1 main.go:322] Node multinode-246036-m03 has CIDR [10.244.3.0/24] 
	I0924 00:32:03.403928       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0924 00:32:03.404012       1 main.go:322] Node multinode-246036-m02 has CIDR [10.244.1.0/24] 
	I0924 00:32:03.404191       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0924 00:32:03.404216       1 main.go:322] Node multinode-246036-m03 has CIDR [10.244.3.0/24] 
	I0924 00:32:03.404330       1 main.go:295] Handling node with IPs: map[192.168.39.199:{}]
	I0924 00:32:03.404350       1 main.go:299] handling current node
	I0924 00:32:13.412003       1 main.go:295] Handling node with IPs: map[192.168.39.199:{}]
	I0924 00:32:13.412218       1 main.go:299] handling current node
	I0924 00:32:13.412264       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0924 00:32:13.412284       1 main.go:322] Node multinode-246036-m02 has CIDR [10.244.1.0/24] 
	I0924 00:32:13.412467       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0924 00:32:13.412498       1 main.go:322] Node multinode-246036-m03 has CIDR [10.244.3.0/24] 
	I0924 00:32:23.409596       1 main.go:295] Handling node with IPs: map[192.168.39.199:{}]
	I0924 00:32:23.409663       1 main.go:299] handling current node
	I0924 00:32:23.409741       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0924 00:32:23.409751       1 main.go:322] Node multinode-246036-m02 has CIDR [10.244.1.0/24] 
	I0924 00:32:23.409921       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0924 00:32:23.409939       1 main.go:322] Node multinode-246036-m03 has CIDR [10.244.3.0/24] 
	I0924 00:32:33.406107       1 main.go:295] Handling node with IPs: map[192.168.39.199:{}]
	I0924 00:32:33.406248       1 main.go:299] handling current node
	I0924 00:32:33.406273       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0924 00:32:33.406279       1 main.go:322] Node multinode-246036-m02 has CIDR [10.244.1.0/24] 
	I0924 00:32:33.406579       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0924 00:32:33.406600       1 main.go:322] Node multinode-246036-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [bf020b0b565a293238a03d835cc1d1de694cd7752408142001e82820e77a6666] <==
	I0924 00:35:16.312578       1 main.go:322] Node multinode-246036-m03 has CIDR [10.244.3.0/24] 
	I0924 00:35:26.310868       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0924 00:35:26.311035       1 main.go:322] Node multinode-246036-m02 has CIDR [10.244.1.0/24] 
	I0924 00:35:26.311232       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0924 00:35:26.311257       1 main.go:322] Node multinode-246036-m03 has CIDR [10.244.3.0/24] 
	I0924 00:35:26.311317       1 main.go:295] Handling node with IPs: map[192.168.39.199:{}]
	I0924 00:35:26.311337       1 main.go:299] handling current node
	I0924 00:35:36.311489       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0924 00:35:36.311632       1 main.go:322] Node multinode-246036-m02 has CIDR [10.244.1.0/24] 
	I0924 00:35:36.311911       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0924 00:35:36.311983       1 main.go:322] Node multinode-246036-m03 has CIDR [10.244.3.0/24] 
	I0924 00:35:36.312122       1 main.go:295] Handling node with IPs: map[192.168.39.199:{}]
	I0924 00:35:36.312175       1 main.go:299] handling current node
	I0924 00:35:46.311249       1 main.go:295] Handling node with IPs: map[192.168.39.199:{}]
	I0924 00:35:46.311301       1 main.go:299] handling current node
	I0924 00:35:46.311322       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0924 00:35:46.311331       1 main.go:322] Node multinode-246036-m02 has CIDR [10.244.1.0/24] 
	I0924 00:35:46.311497       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0924 00:35:46.311519       1 main.go:322] Node multinode-246036-m03 has CIDR [10.244.2.0/24] 
	I0924 00:35:56.310896       1 main.go:295] Handling node with IPs: map[192.168.39.199:{}]
	I0924 00:35:56.310936       1 main.go:299] handling current node
	I0924 00:35:56.310950       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0924 00:35:56.310956       1 main.go:322] Node multinode-246036-m02 has CIDR [10.244.1.0/24] 
	I0924 00:35:56.311077       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0924 00:35:56.311094       1 main.go:322] Node multinode-246036-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [01fd569a601fa6172655a36f03bfec07f73116e1f6606250b55b26a0520da940] <==
	I0924 00:34:23.717420       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0924 00:34:23.738459       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0924 00:34:23.746105       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0924 00:34:23.747047       1 aggregator.go:171] initial CRD sync complete...
	I0924 00:34:23.747076       1 autoregister_controller.go:144] Starting autoregister controller
	I0924 00:34:23.747084       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0924 00:34:23.747090       1 cache.go:39] Caches are synced for autoregister controller
	I0924 00:34:23.766387       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0924 00:34:23.766418       1 policy_source.go:224] refreshing policies
	I0924 00:34:23.813503       1 shared_informer.go:320] Caches are synced for configmaps
	I0924 00:34:23.814358       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0924 00:34:23.814758       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0924 00:34:23.814878       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0924 00:34:23.816842       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0924 00:34:23.819709       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0924 00:34:23.825377       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0924 00:34:23.855598       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0924 00:34:24.621322       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0924 00:34:25.923949       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0924 00:34:26.101906       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0924 00:34:26.130397       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0924 00:34:26.238705       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0924 00:34:26.257032       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0924 00:34:27.359565       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0924 00:34:27.409209       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [33b18f596b4effba4cf1fa17ae441e1bd1ab9d6738cd7313f9ba3b137bfcb237] <==
	E0924 00:29:00.850479       1 conn.go:339] Error on socket receive: read tcp 192.168.39.199:8443->192.168.39.1:56668: use of closed network connection
	E0924 00:29:01.013956       1 conn.go:339] Error on socket receive: read tcp 192.168.39.199:8443->192.168.39.1:56690: use of closed network connection
	E0924 00:29:01.182169       1 conn.go:339] Error on socket receive: read tcp 192.168.39.199:8443->192.168.39.1:56710: use of closed network connection
	E0924 00:29:01.343809       1 conn.go:339] Error on socket receive: read tcp 192.168.39.199:8443->192.168.39.1:56724: use of closed network connection
	I0924 00:32:41.016518       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0924 00:32:41.028279       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 00:32:41.028366       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 00:32:41.028403       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 00:32:41.028436       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 00:32:41.028504       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 00:32:41.028540       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 00:32:41.028572       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 00:32:41.028606       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 00:32:41.037639       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 00:32:41.041373       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 00:32:41.041468       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 00:32:41.041527       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 00:32:41.041588       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 00:32:41.041649       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 00:32:41.041894       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 00:32:41.041971       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 00:32:41.042043       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 00:32:41.042104       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 00:32:41.042169       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 00:32:41.043085       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [b4a4ea183a26a012d11be7880e001424832dcfdcc2ddd5299a6fe25f32de7916] <==
	I0924 00:35:26.419154       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="6.82271ms"
	I0924 00:35:26.419246       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="38.238µs"
	I0924 00:35:27.100419       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m02"
	I0924 00:35:33.701941       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m02"
	I0924 00:35:40.341467       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m03"
	I0924 00:35:40.359118       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m03"
	I0924 00:35:40.593510       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-246036-m02"
	I0924 00:35:40.593947       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m03"
	I0924 00:35:41.815323       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-246036-m02"
	I0924 00:35:41.816031       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-246036-m03\" does not exist"
	I0924 00:35:41.833897       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-246036-m03" podCIDRs=["10.244.2.0/24"]
	I0924 00:35:41.833936       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m03"
	E0924 00:35:41.850496       1 range_allocator.go:427] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"multinode-246036-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-246036-m03" podCIDRs=["10.244.3.0/24"]
	E0924 00:35:41.850577       1 range_allocator.go:433] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"multinode-246036-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-246036-m03"
	E0924 00:35:41.850645       1 range_allocator.go:246] "Unhandled Error" err="error syncing 'multinode-246036-m03': failed to patch node CIDR: Node \"multinode-246036-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0924 00:35:41.850707       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m03"
	I0924 00:35:41.855904       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m03"
	I0924 00:35:42.159119       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m03"
	I0924 00:35:42.192172       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m03"
	I0924 00:35:42.504846       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m03"
	I0924 00:35:51.961821       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m03"
	I0924 00:36:01.476837       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-246036-m02"
	I0924 00:36:01.477668       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m03"
	I0924 00:36:01.490319       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m03"
	I0924 00:36:02.117301       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m03"
	
	
	==> kube-controller-manager [f1dea2a49f50cd2690cd94ebed4ffb97ab813d4c6fb8ea59dbb02231936efba0] <==
	I0924 00:30:14.350396       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m03"
	I0924 00:30:14.587132       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m03"
	I0924 00:30:14.587358       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-246036-m02"
	I0924 00:30:15.881365       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-246036-m02"
	I0924 00:30:15.883183       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-246036-m03\" does not exist"
	I0924 00:30:15.906520       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-246036-m03" podCIDRs=["10.244.3.0/24"]
	I0924 00:30:15.906953       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m03"
	I0924 00:30:15.907091       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m03"
	I0924 00:30:15.922803       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m03"
	I0924 00:30:16.123454       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m03"
	I0924 00:30:16.444774       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m03"
	I0924 00:30:26.254390       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m03"
	I0924 00:30:35.459949       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m03"
	I0924 00:30:35.460285       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-246036-m02"
	I0924 00:30:35.474518       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m03"
	I0924 00:30:35.902505       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m03"
	I0924 00:31:15.919098       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m02"
	I0924 00:31:15.919576       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-246036-m03"
	I0924 00:31:15.934405       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m02"
	I0924 00:31:15.975339       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="10.390271ms"
	I0924 00:31:15.976026       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="84.987µs"
	I0924 00:31:20.978228       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m03"
	I0924 00:31:20.999608       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m03"
	I0924 00:31:21.051269       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m02"
	I0924 00:31:31.129615       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m03"
	
	
	==> kube-proxy [4a80eb915d724ea9baff23a6b7094b8ae35e34bc9e96fabe4a2a99df6aea6dd9] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0924 00:27:52.391392       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0924 00:27:52.402184       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.199"]
	E0924 00:27:52.402346       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0924 00:27:52.464414       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0924 00:27:52.464531       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0924 00:27:52.464568       1 server_linux.go:169] "Using iptables Proxier"
	I0924 00:27:52.467441       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0924 00:27:52.467825       1 server.go:483] "Version info" version="v1.31.1"
	I0924 00:27:52.468010       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 00:27:52.469326       1 config.go:199] "Starting service config controller"
	I0924 00:27:52.469516       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0924 00:27:52.469587       1 config.go:105] "Starting endpoint slice config controller"
	I0924 00:27:52.469605       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0924 00:27:52.470176       1 config.go:328] "Starting node config controller"
	I0924 00:27:52.471284       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0924 00:27:52.569665       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0924 00:27:52.569787       1 shared_informer.go:320] Caches are synced for service config
	I0924 00:27:52.571819       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [514051851b1eb7e6d20d521224f6f47d16d2212f3f25adb982f2f0b76b5de33d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0924 00:34:25.589300       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0924 00:34:25.600227       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.199"]
	E0924 00:34:25.600546       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0924 00:34:25.666259       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0924 00:34:25.666303       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0924 00:34:25.666332       1 server_linux.go:169] "Using iptables Proxier"
	I0924 00:34:25.668805       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0924 00:34:25.669255       1 server.go:483] "Version info" version="v1.31.1"
	I0924 00:34:25.669308       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 00:34:25.671545       1 config.go:199] "Starting service config controller"
	I0924 00:34:25.671572       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0924 00:34:25.671591       1 config.go:105] "Starting endpoint slice config controller"
	I0924 00:34:25.671594       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0924 00:34:25.672142       1 config.go:328] "Starting node config controller"
	I0924 00:34:25.672168       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0924 00:34:25.772155       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0924 00:34:25.772230       1 shared_informer.go:320] Caches are synced for service config
	I0924 00:34:25.773143       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [586488001f58fc62297b98b683cb2ccd93906878ca19ba6eb36d3923feb47161] <==
	I0924 00:34:21.496905       1 serving.go:386] Generated self-signed cert in-memory
	W0924 00:34:23.670133       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0924 00:34:23.670208       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0924 00:34:23.670218       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0924 00:34:23.670228       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0924 00:34:23.762483       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0924 00:34:23.762583       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 00:34:23.764783       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0924 00:34:23.764843       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0924 00:34:23.765070       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0924 00:34:23.765175       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0924 00:34:23.868239       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [a6003f3f1b6367bb96065a6243ff34bb6701840ce67df93e2feb005d548ceaeb] <==
	W0924 00:27:45.111890       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0924 00:27:45.111943       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 00:27:45.133194       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0924 00:27:45.133305       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 00:27:45.142396       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0924 00:27:45.142486       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0924 00:27:45.189928       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0924 00:27:45.190085       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0924 00:27:45.200366       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0924 00:27:45.200446       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 00:27:45.242172       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0924 00:27:45.242314       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 00:27:45.250421       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0924 00:27:45.250602       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0924 00:27:45.268574       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0924 00:27:45.268719       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 00:27:45.284142       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0924 00:27:45.284232       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0924 00:27:45.368591       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0924 00:27:45.368639       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0924 00:27:47.454737       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0924 00:32:41.010512       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0924 00:32:41.010770       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0924 00:32:41.011066       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0924 00:32:41.019560       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 24 00:34:29 multinode-246036 kubelet[2979]: E0924 00:34:29.722955    2979 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727138069722321649,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:34:29 multinode-246036 kubelet[2979]: E0924 00:34:29.722979    2979 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727138069722321649,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:34:39 multinode-246036 kubelet[2979]: E0924 00:34:39.724363    2979 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727138079724040200,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:34:39 multinode-246036 kubelet[2979]: E0924 00:34:39.724391    2979 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727138079724040200,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:34:49 multinode-246036 kubelet[2979]: E0924 00:34:49.725997    2979 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727138089725581143,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:34:49 multinode-246036 kubelet[2979]: E0924 00:34:49.726419    2979 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727138089725581143,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:34:59 multinode-246036 kubelet[2979]: E0924 00:34:59.734232    2979 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727138099733850471,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:34:59 multinode-246036 kubelet[2979]: E0924 00:34:59.734269    2979 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727138099733850471,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:35:09 multinode-246036 kubelet[2979]: E0924 00:35:09.735801    2979 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727138109735407616,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:35:09 multinode-246036 kubelet[2979]: E0924 00:35:09.735859    2979 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727138109735407616,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:35:19 multinode-246036 kubelet[2979]: E0924 00:35:19.727883    2979 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 24 00:35:19 multinode-246036 kubelet[2979]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 24 00:35:19 multinode-246036 kubelet[2979]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 24 00:35:19 multinode-246036 kubelet[2979]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 24 00:35:19 multinode-246036 kubelet[2979]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 24 00:35:19 multinode-246036 kubelet[2979]: E0924 00:35:19.737387    2979 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727138119737035593,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:35:19 multinode-246036 kubelet[2979]: E0924 00:35:19.737421    2979 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727138119737035593,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:35:29 multinode-246036 kubelet[2979]: E0924 00:35:29.739391    2979 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727138129738729548,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:35:29 multinode-246036 kubelet[2979]: E0924 00:35:29.739933    2979 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727138129738729548,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:35:39 multinode-246036 kubelet[2979]: E0924 00:35:39.743262    2979 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727138139742814777,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:35:39 multinode-246036 kubelet[2979]: E0924 00:35:39.743765    2979 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727138139742814777,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:35:49 multinode-246036 kubelet[2979]: E0924 00:35:49.746768    2979 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727138149746324617,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:35:49 multinode-246036 kubelet[2979]: E0924 00:35:49.747221    2979 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727138149746324617,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:35:59 multinode-246036 kubelet[2979]: E0924 00:35:59.751987    2979 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727138159751229621,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:35:59 multinode-246036 kubelet[2979]: E0924 00:35:59.752157    2979 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727138159751229621,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

-- /stdout --
** stderr ** 
	E0924 00:36:03.931709   45375 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19696-7623/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-246036 -n multinode-246036
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-246036 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (327.76s)

x
+
TestMultiNode/serial/StopMultiNode (144.6s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-246036 stop
E0924 00:36:41.432560   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/functional-666615/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-246036 stop: exit status 82 (2m0.486306483s)

-- stdout --
	* Stopping node "multinode-246036-m02"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-246036 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-246036 status
multinode_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p multinode-246036 status: (18.672181512s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-246036 status --alsologtostderr
multinode_test.go:358: (dbg) Done: out/minikube-linux-amd64 -p multinode-246036 status --alsologtostderr: (3.359777336s)
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-linux-amd64 -p multinode-246036 status --alsologtostderr": 
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-linux-amd64 -p multinode-246036 status --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-246036 -n multinode-246036
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-246036 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-246036 logs -n 25: (1.411691775s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-246036 ssh -n                                                                 | multinode-246036 | jenkins | v1.34.0 | 24 Sep 24 00:29 UTC | 24 Sep 24 00:29 UTC |
	|         | multinode-246036-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-246036 cp multinode-246036-m02:/home/docker/cp-test.txt                       | multinode-246036 | jenkins | v1.34.0 | 24 Sep 24 00:29 UTC | 24 Sep 24 00:29 UTC |
	|         | multinode-246036:/home/docker/cp-test_multinode-246036-m02_multinode-246036.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-246036 ssh -n                                                                 | multinode-246036 | jenkins | v1.34.0 | 24 Sep 24 00:29 UTC | 24 Sep 24 00:29 UTC |
	|         | multinode-246036-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-246036 ssh -n multinode-246036 sudo cat                                       | multinode-246036 | jenkins | v1.34.0 | 24 Sep 24 00:29 UTC | 24 Sep 24 00:29 UTC |
	|         | /home/docker/cp-test_multinode-246036-m02_multinode-246036.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-246036 cp multinode-246036-m02:/home/docker/cp-test.txt                       | multinode-246036 | jenkins | v1.34.0 | 24 Sep 24 00:29 UTC | 24 Sep 24 00:29 UTC |
	|         | multinode-246036-m03:/home/docker/cp-test_multinode-246036-m02_multinode-246036-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-246036 ssh -n                                                                 | multinode-246036 | jenkins | v1.34.0 | 24 Sep 24 00:29 UTC | 24 Sep 24 00:29 UTC |
	|         | multinode-246036-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-246036 ssh -n multinode-246036-m03 sudo cat                                   | multinode-246036 | jenkins | v1.34.0 | 24 Sep 24 00:29 UTC | 24 Sep 24 00:29 UTC |
	|         | /home/docker/cp-test_multinode-246036-m02_multinode-246036-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-246036 cp testdata/cp-test.txt                                                | multinode-246036 | jenkins | v1.34.0 | 24 Sep 24 00:29 UTC | 24 Sep 24 00:29 UTC |
	|         | multinode-246036-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-246036 ssh -n                                                                 | multinode-246036 | jenkins | v1.34.0 | 24 Sep 24 00:29 UTC | 24 Sep 24 00:29 UTC |
	|         | multinode-246036-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-246036 cp multinode-246036-m03:/home/docker/cp-test.txt                       | multinode-246036 | jenkins | v1.34.0 | 24 Sep 24 00:29 UTC | 24 Sep 24 00:29 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile589421806/001/cp-test_multinode-246036-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-246036 ssh -n                                                                 | multinode-246036 | jenkins | v1.34.0 | 24 Sep 24 00:29 UTC | 24 Sep 24 00:29 UTC |
	|         | multinode-246036-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-246036 cp multinode-246036-m03:/home/docker/cp-test.txt                       | multinode-246036 | jenkins | v1.34.0 | 24 Sep 24 00:29 UTC | 24 Sep 24 00:29 UTC |
	|         | multinode-246036:/home/docker/cp-test_multinode-246036-m03_multinode-246036.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-246036 ssh -n                                                                 | multinode-246036 | jenkins | v1.34.0 | 24 Sep 24 00:29 UTC | 24 Sep 24 00:29 UTC |
	|         | multinode-246036-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-246036 ssh -n multinode-246036 sudo cat                                       | multinode-246036 | jenkins | v1.34.0 | 24 Sep 24 00:29 UTC | 24 Sep 24 00:29 UTC |
	|         | /home/docker/cp-test_multinode-246036-m03_multinode-246036.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-246036 cp multinode-246036-m03:/home/docker/cp-test.txt                       | multinode-246036 | jenkins | v1.34.0 | 24 Sep 24 00:29 UTC | 24 Sep 24 00:29 UTC |
	|         | multinode-246036-m02:/home/docker/cp-test_multinode-246036-m03_multinode-246036-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-246036 ssh -n                                                                 | multinode-246036 | jenkins | v1.34.0 | 24 Sep 24 00:29 UTC | 24 Sep 24 00:29 UTC |
	|         | multinode-246036-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-246036 ssh -n multinode-246036-m02 sudo cat                                   | multinode-246036 | jenkins | v1.34.0 | 24 Sep 24 00:29 UTC | 24 Sep 24 00:29 UTC |
	|         | /home/docker/cp-test_multinode-246036-m03_multinode-246036-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-246036 node stop m03                                                          | multinode-246036 | jenkins | v1.34.0 | 24 Sep 24 00:29 UTC | 24 Sep 24 00:29 UTC |
	| node    | multinode-246036 node start                                                             | multinode-246036 | jenkins | v1.34.0 | 24 Sep 24 00:29 UTC | 24 Sep 24 00:30 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-246036                                                                | multinode-246036 | jenkins | v1.34.0 | 24 Sep 24 00:30 UTC |                     |
	| stop    | -p multinode-246036                                                                     | multinode-246036 | jenkins | v1.34.0 | 24 Sep 24 00:30 UTC |                     |
	| start   | -p multinode-246036                                                                     | multinode-246036 | jenkins | v1.34.0 | 24 Sep 24 00:32 UTC | 24 Sep 24 00:36 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-246036                                                                | multinode-246036 | jenkins | v1.34.0 | 24 Sep 24 00:36 UTC |                     |
	| node    | multinode-246036 node delete                                                            | multinode-246036 | jenkins | v1.34.0 | 24 Sep 24 00:36 UTC | 24 Sep 24 00:36 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-246036 stop                                                                   | multinode-246036 | jenkins | v1.34.0 | 24 Sep 24 00:36 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 00:32:39
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 00:32:39.934227   44220 out.go:345] Setting OutFile to fd 1 ...
	I0924 00:32:39.934369   44220 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:32:39.934379   44220 out.go:358] Setting ErrFile to fd 2...
	I0924 00:32:39.934384   44220 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:32:39.934577   44220 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7623/.minikube/bin
	I0924 00:32:39.935164   44220 out.go:352] Setting JSON to false
	I0924 00:32:39.936060   44220 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4504,"bootTime":1727133456,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0924 00:32:39.936126   44220 start.go:139] virtualization: kvm guest
	I0924 00:32:39.938508   44220 out.go:177] * [multinode-246036] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0924 00:32:39.939925   44220 notify.go:220] Checking for updates...
	I0924 00:32:39.939953   44220 out.go:177]   - MINIKUBE_LOCATION=19696
	I0924 00:32:39.941274   44220 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 00:32:39.942626   44220 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 00:32:39.943910   44220 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7623/.minikube
	I0924 00:32:39.945348   44220 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0924 00:32:39.946838   44220 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 00:32:39.948798   44220 config.go:182] Loaded profile config "multinode-246036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 00:32:39.948937   44220 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 00:32:39.949437   44220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:32:39.949508   44220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:32:39.965254   44220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44911
	I0924 00:32:39.965839   44220 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:32:39.966463   44220 main.go:141] libmachine: Using API Version  1
	I0924 00:32:39.966492   44220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:32:39.966799   44220 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:32:39.966977   44220 main.go:141] libmachine: (multinode-246036) Calling .DriverName
	I0924 00:32:40.003475   44220 out.go:177] * Using the kvm2 driver based on existing profile
	I0924 00:32:40.004844   44220 start.go:297] selected driver: kvm2
	I0924 00:32:40.004862   44220 start.go:901] validating driver "kvm2" against &{Name:multinode-246036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-246036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.199 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.150 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.185 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 00:32:40.005023   44220 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 00:32:40.005431   44220 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 00:32:40.005520   44220 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19696-7623/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0924 00:32:40.020825   44220 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0924 00:32:40.021581   44220 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 00:32:40.021615   44220 cni.go:84] Creating CNI manager for ""
	I0924 00:32:40.021670   44220 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0924 00:32:40.021739   44220 start.go:340] cluster config:
	{Name:multinode-246036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-246036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.199 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.150 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.185 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 00:32:40.021904   44220 iso.go:125] acquiring lock: {Name:mk8a983e49e920eac32e0f79c34143c3d478115e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 00:32:40.023812   44220 out.go:177] * Starting "multinode-246036" primary control-plane node in "multinode-246036" cluster
	I0924 00:32:40.025116   44220 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 00:32:40.025187   44220 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0924 00:32:40.025202   44220 cache.go:56] Caching tarball of preloaded images
	I0924 00:32:40.025294   44220 preload.go:172] Found /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0924 00:32:40.025309   44220 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0924 00:32:40.025454   44220 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/multinode-246036/config.json ...
	I0924 00:32:40.025726   44220 start.go:360] acquireMachinesLock for multinode-246036: {Name:mkdd0eb053efc5a47fd01c8410d7f603dccb8d0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 00:32:40.025791   44220 start.go:364] duration metric: took 42.687µs to acquireMachinesLock for "multinode-246036"
	I0924 00:32:40.025812   44220 start.go:96] Skipping create...Using existing machine configuration
	I0924 00:32:40.025824   44220 fix.go:54] fixHost starting: 
	I0924 00:32:40.026117   44220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:32:40.026161   44220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:32:40.040677   44220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43741
	I0924 00:32:40.041099   44220 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:32:40.041586   44220 main.go:141] libmachine: Using API Version  1
	I0924 00:32:40.041606   44220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:32:40.041912   44220 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:32:40.042076   44220 main.go:141] libmachine: (multinode-246036) Calling .DriverName
	I0924 00:32:40.042213   44220 main.go:141] libmachine: (multinode-246036) Calling .GetState
	I0924 00:32:40.043793   44220 fix.go:112] recreateIfNeeded on multinode-246036: state=Running err=<nil>
	W0924 00:32:40.043814   44220 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 00:32:40.045897   44220 out.go:177] * Updating the running kvm2 "multinode-246036" VM ...
	I0924 00:32:40.047284   44220 machine.go:93] provisionDockerMachine start ...
	I0924 00:32:40.047326   44220 main.go:141] libmachine: (multinode-246036) Calling .DriverName
	I0924 00:32:40.047560   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHHostname
	I0924 00:32:40.050211   44220 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:32:40.050635   44220 main.go:141] libmachine: (multinode-246036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:54:2a", ip: ""} in network mk-multinode-246036: {Iface:virbr1 ExpiryTime:2024-09-24 01:27:24 +0000 UTC Type:0 Mac:52:54:00:a5:54:2a Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:multinode-246036 Clientid:01:52:54:00:a5:54:2a}
	I0924 00:32:40.050656   44220 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined IP address 192.168.39.199 and MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:32:40.050806   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHPort
	I0924 00:32:40.051004   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHKeyPath
	I0924 00:32:40.051163   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHKeyPath
	I0924 00:32:40.051294   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHUsername
	I0924 00:32:40.051484   44220 main.go:141] libmachine: Using SSH client type: native
	I0924 00:32:40.051743   44220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.199 22 <nil> <nil>}
	I0924 00:32:40.051756   44220 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 00:32:40.173150   44220 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-246036
	
	I0924 00:32:40.173187   44220 main.go:141] libmachine: (multinode-246036) Calling .GetMachineName
	I0924 00:32:40.173455   44220 buildroot.go:166] provisioning hostname "multinode-246036"
	I0924 00:32:40.173484   44220 main.go:141] libmachine: (multinode-246036) Calling .GetMachineName
	I0924 00:32:40.173693   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHHostname
	I0924 00:32:40.176892   44220 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:32:40.177279   44220 main.go:141] libmachine: (multinode-246036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:54:2a", ip: ""} in network mk-multinode-246036: {Iface:virbr1 ExpiryTime:2024-09-24 01:27:24 +0000 UTC Type:0 Mac:52:54:00:a5:54:2a Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:multinode-246036 Clientid:01:52:54:00:a5:54:2a}
	I0924 00:32:40.177326   44220 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined IP address 192.168.39.199 and MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:32:40.177471   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHPort
	I0924 00:32:40.177642   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHKeyPath
	I0924 00:32:40.177781   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHKeyPath
	I0924 00:32:40.177891   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHUsername
	I0924 00:32:40.178119   44220 main.go:141] libmachine: Using SSH client type: native
	I0924 00:32:40.178349   44220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.199 22 <nil> <nil>}
	I0924 00:32:40.178367   44220 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-246036 && echo "multinode-246036" | sudo tee /etc/hostname
	I0924 00:32:40.308737   44220 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-246036
	
	I0924 00:32:40.308766   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHHostname
	I0924 00:32:40.312014   44220 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:32:40.312372   44220 main.go:141] libmachine: (multinode-246036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:54:2a", ip: ""} in network mk-multinode-246036: {Iface:virbr1 ExpiryTime:2024-09-24 01:27:24 +0000 UTC Type:0 Mac:52:54:00:a5:54:2a Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:multinode-246036 Clientid:01:52:54:00:a5:54:2a}
	I0924 00:32:40.312396   44220 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined IP address 192.168.39.199 and MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:32:40.312605   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHPort
	I0924 00:32:40.312803   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHKeyPath
	I0924 00:32:40.312939   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHKeyPath
	I0924 00:32:40.313065   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHUsername
	I0924 00:32:40.313201   44220 main.go:141] libmachine: Using SSH client type: native
	I0924 00:32:40.313409   44220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.199 22 <nil> <nil>}
	I0924 00:32:40.313432   44220 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-246036' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-246036/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-246036' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 00:32:40.429366   44220 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 00:32:40.429412   44220 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19696-7623/.minikube CaCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19696-7623/.minikube}
	I0924 00:32:40.429443   44220 buildroot.go:174] setting up certificates
	I0924 00:32:40.429456   44220 provision.go:84] configureAuth start
	I0924 00:32:40.429470   44220 main.go:141] libmachine: (multinode-246036) Calling .GetMachineName
	I0924 00:32:40.429787   44220 main.go:141] libmachine: (multinode-246036) Calling .GetIP
	I0924 00:32:40.432741   44220 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:32:40.433176   44220 main.go:141] libmachine: (multinode-246036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:54:2a", ip: ""} in network mk-multinode-246036: {Iface:virbr1 ExpiryTime:2024-09-24 01:27:24 +0000 UTC Type:0 Mac:52:54:00:a5:54:2a Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:multinode-246036 Clientid:01:52:54:00:a5:54:2a}
	I0924 00:32:40.433202   44220 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined IP address 192.168.39.199 and MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:32:40.433431   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHHostname
	I0924 00:32:40.436028   44220 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:32:40.436439   44220 main.go:141] libmachine: (multinode-246036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:54:2a", ip: ""} in network mk-multinode-246036: {Iface:virbr1 ExpiryTime:2024-09-24 01:27:24 +0000 UTC Type:0 Mac:52:54:00:a5:54:2a Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:multinode-246036 Clientid:01:52:54:00:a5:54:2a}
	I0924 00:32:40.436470   44220 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined IP address 192.168.39.199 and MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:32:40.436648   44220 provision.go:143] copyHostCerts
	I0924 00:32:40.436673   44220 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0924 00:32:40.436702   44220 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem, removing ...
	I0924 00:32:40.436718   44220 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0924 00:32:40.436786   44220 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem (1082 bytes)
	I0924 00:32:40.436875   44220 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0924 00:32:40.436893   44220 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem, removing ...
	I0924 00:32:40.436899   44220 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0924 00:32:40.436923   44220 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem (1123 bytes)
	I0924 00:32:40.436974   44220 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0924 00:32:40.436991   44220 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem, removing ...
	I0924 00:32:40.436997   44220 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0924 00:32:40.437022   44220 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem (1679 bytes)
	I0924 00:32:40.437079   44220 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem org=jenkins.multinode-246036 san=[127.0.0.1 192.168.39.199 localhost minikube multinode-246036]
	I0924 00:32:40.702282   44220 provision.go:177] copyRemoteCerts
	I0924 00:32:40.702344   44220 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 00:32:40.702368   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHHostname
	I0924 00:32:40.705236   44220 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:32:40.705624   44220 main.go:141] libmachine: (multinode-246036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:54:2a", ip: ""} in network mk-multinode-246036: {Iface:virbr1 ExpiryTime:2024-09-24 01:27:24 +0000 UTC Type:0 Mac:52:54:00:a5:54:2a Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:multinode-246036 Clientid:01:52:54:00:a5:54:2a}
	I0924 00:32:40.705660   44220 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined IP address 192.168.39.199 and MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:32:40.705886   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHPort
	I0924 00:32:40.706097   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHKeyPath
	I0924 00:32:40.706269   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHUsername
	I0924 00:32:40.706424   44220 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/multinode-246036/id_rsa Username:docker}
	I0924 00:32:40.795328   44220 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0924 00:32:40.795480   44220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0924 00:32:40.819951   44220 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0924 00:32:40.820023   44220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0924 00:32:40.850250   44220 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0924 00:32:40.850316   44220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0924 00:32:40.876371   44220 provision.go:87] duration metric: took 446.90231ms to configureAuth
	I0924 00:32:40.876397   44220 buildroot.go:189] setting minikube options for container-runtime
	I0924 00:32:40.876645   44220 config.go:182] Loaded profile config "multinode-246036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 00:32:40.876735   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHHostname
	I0924 00:32:40.879515   44220 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:32:40.879901   44220 main.go:141] libmachine: (multinode-246036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:54:2a", ip: ""} in network mk-multinode-246036: {Iface:virbr1 ExpiryTime:2024-09-24 01:27:24 +0000 UTC Type:0 Mac:52:54:00:a5:54:2a Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:multinode-246036 Clientid:01:52:54:00:a5:54:2a}
	I0924 00:32:40.879930   44220 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined IP address 192.168.39.199 and MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:32:40.880059   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHPort
	I0924 00:32:40.880259   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHKeyPath
	I0924 00:32:40.880457   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHKeyPath
	I0924 00:32:40.880618   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHUsername
	I0924 00:32:40.880784   44220 main.go:141] libmachine: Using SSH client type: native
	I0924 00:32:40.880981   44220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.199 22 <nil> <nil>}
	I0924 00:32:40.881003   44220 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 00:34:11.597746   44220 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 00:34:11.597777   44220 machine.go:96] duration metric: took 1m31.550470894s to provisionDockerMachine
	I0924 00:34:11.597790   44220 start.go:293] postStartSetup for "multinode-246036" (driver="kvm2")
	I0924 00:34:11.597800   44220 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 00:34:11.597817   44220 main.go:141] libmachine: (multinode-246036) Calling .DriverName
	I0924 00:34:11.598163   44220 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 00:34:11.598198   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHHostname
	I0924 00:34:11.601395   44220 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:34:11.601884   44220 main.go:141] libmachine: (multinode-246036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:54:2a", ip: ""} in network mk-multinode-246036: {Iface:virbr1 ExpiryTime:2024-09-24 01:27:24 +0000 UTC Type:0 Mac:52:54:00:a5:54:2a Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:multinode-246036 Clientid:01:52:54:00:a5:54:2a}
	I0924 00:34:11.601913   44220 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined IP address 192.168.39.199 and MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:34:11.602142   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHPort
	I0924 00:34:11.602358   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHKeyPath
	I0924 00:34:11.602511   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHUsername
	I0924 00:34:11.602645   44220 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/multinode-246036/id_rsa Username:docker}
	I0924 00:34:11.691639   44220 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 00:34:11.695641   44220 command_runner.go:130] > NAME=Buildroot
	I0924 00:34:11.695670   44220 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0924 00:34:11.695676   44220 command_runner.go:130] > ID=buildroot
	I0924 00:34:11.695682   44220 command_runner.go:130] > VERSION_ID=2023.02.9
	I0924 00:34:11.695690   44220 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0924 00:34:11.695738   44220 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 00:34:11.695754   44220 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/addons for local assets ...
	I0924 00:34:11.695823   44220 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/files for local assets ...
	I0924 00:34:11.696038   44220 filesync.go:149] local asset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> 147932.pem in /etc/ssl/certs
	I0924 00:34:11.696060   44220 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> /etc/ssl/certs/147932.pem
	I0924 00:34:11.696231   44220 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 00:34:11.705072   44220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /etc/ssl/certs/147932.pem (1708 bytes)
	I0924 00:34:11.727530   44220 start.go:296] duration metric: took 129.72707ms for postStartSetup
	I0924 00:34:11.727573   44220 fix.go:56] duration metric: took 1m31.701750109s for fixHost
	I0924 00:34:11.727603   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHHostname
	I0924 00:34:11.730147   44220 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:34:11.730774   44220 main.go:141] libmachine: (multinode-246036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:54:2a", ip: ""} in network mk-multinode-246036: {Iface:virbr1 ExpiryTime:2024-09-24 01:27:24 +0000 UTC Type:0 Mac:52:54:00:a5:54:2a Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:multinode-246036 Clientid:01:52:54:00:a5:54:2a}
	I0924 00:34:11.730808   44220 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined IP address 192.168.39.199 and MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:34:11.731028   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHPort
	I0924 00:34:11.731183   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHKeyPath
	I0924 00:34:11.731328   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHKeyPath
	I0924 00:34:11.731465   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHUsername
	I0924 00:34:11.731623   44220 main.go:141] libmachine: Using SSH client type: native
	I0924 00:34:11.731834   44220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.199 22 <nil> <nil>}
	I0924 00:34:11.731849   44220 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 00:34:11.844908   44220 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727138051.820284302
	
	I0924 00:34:11.844936   44220 fix.go:216] guest clock: 1727138051.820284302
	I0924 00:34:11.844945   44220 fix.go:229] Guest: 2024-09-24 00:34:11.820284302 +0000 UTC Remote: 2024-09-24 00:34:11.72757903 +0000 UTC m=+91.827986289 (delta=92.705272ms)
	I0924 00:34:11.844973   44220 fix.go:200] guest clock delta is within tolerance: 92.705272ms
	I0924 00:34:11.844979   44220 start.go:83] releasing machines lock for "multinode-246036", held for 1m31.819175531s
	I0924 00:34:11.845001   44220 main.go:141] libmachine: (multinode-246036) Calling .DriverName
	I0924 00:34:11.845332   44220 main.go:141] libmachine: (multinode-246036) Calling .GetIP
	I0924 00:34:11.848206   44220 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:34:11.848578   44220 main.go:141] libmachine: (multinode-246036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:54:2a", ip: ""} in network mk-multinode-246036: {Iface:virbr1 ExpiryTime:2024-09-24 01:27:24 +0000 UTC Type:0 Mac:52:54:00:a5:54:2a Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:multinode-246036 Clientid:01:52:54:00:a5:54:2a}
	I0924 00:34:11.848605   44220 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined IP address 192.168.39.199 and MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:34:11.848760   44220 main.go:141] libmachine: (multinode-246036) Calling .DriverName
	I0924 00:34:11.849206   44220 main.go:141] libmachine: (multinode-246036) Calling .DriverName
	I0924 00:34:11.849360   44220 main.go:141] libmachine: (multinode-246036) Calling .DriverName
	I0924 00:34:11.849456   44220 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 00:34:11.849505   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHHostname
	I0924 00:34:11.849565   44220 ssh_runner.go:195] Run: cat /version.json
	I0924 00:34:11.849589   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHHostname
	I0924 00:34:11.852056   44220 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:34:11.852237   44220 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:34:11.852485   44220 main.go:141] libmachine: (multinode-246036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:54:2a", ip: ""} in network mk-multinode-246036: {Iface:virbr1 ExpiryTime:2024-09-24 01:27:24 +0000 UTC Type:0 Mac:52:54:00:a5:54:2a Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:multinode-246036 Clientid:01:52:54:00:a5:54:2a}
	I0924 00:34:11.852518   44220 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined IP address 192.168.39.199 and MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:34:11.852668   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHPort
	I0924 00:34:11.852818   44220 main.go:141] libmachine: (multinode-246036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:54:2a", ip: ""} in network mk-multinode-246036: {Iface:virbr1 ExpiryTime:2024-09-24 01:27:24 +0000 UTC Type:0 Mac:52:54:00:a5:54:2a Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:multinode-246036 Clientid:01:52:54:00:a5:54:2a}
	I0924 00:34:11.852843   44220 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined IP address 192.168.39.199 and MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:34:11.852822   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHKeyPath
	I0924 00:34:11.853018   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHUsername
	I0924 00:34:11.853019   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHPort
	I0924 00:34:11.853199   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHKeyPath
	I0924 00:34:11.853202   44220 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/multinode-246036/id_rsa Username:docker}
	I0924 00:34:11.853376   44220 main.go:141] libmachine: (multinode-246036) Calling .GetSSHUsername
	I0924 00:34:11.853529   44220 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/multinode-246036/id_rsa Username:docker}
	I0924 00:34:11.933386   44220 command_runner.go:130] > {"iso_version": "v1.34.0-1727108440-19696", "kicbase_version": "v0.0.45-1726784731-19672", "minikube_version": "v1.34.0", "commit": "09d18ff16db81cf1cb24cd6e95f197b54c5f843c"}
	I0924 00:34:11.933563   44220 ssh_runner.go:195] Run: systemctl --version
	I0924 00:34:11.974989   44220 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0924 00:34:11.975682   44220 command_runner.go:130] > systemd 252 (252)
	I0924 00:34:11.975726   44220 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0924 00:34:11.975796   44220 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 00:34:12.138827   44220 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0924 00:34:12.145448   44220 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0924 00:34:12.145882   44220 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 00:34:12.145948   44220 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 00:34:12.154615   44220 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0924 00:34:12.154643   44220 start.go:495] detecting cgroup driver to use...
	I0924 00:34:12.154720   44220 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 00:34:12.171123   44220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 00:34:12.184526   44220 docker.go:217] disabling cri-docker service (if available) ...
	I0924 00:34:12.184588   44220 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 00:34:12.197873   44220 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 00:34:12.211011   44220 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 00:34:12.365119   44220 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 00:34:12.511923   44220 docker.go:233] disabling docker service ...
	I0924 00:34:12.512004   44220 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 00:34:12.527671   44220 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 00:34:12.540663   44220 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 00:34:12.676686   44220 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 00:34:12.814203   44220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 00:34:12.827957   44220 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 00:34:12.845948   44220 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0924 00:34:12.846434   44220 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 00:34:12.846503   44220 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:34:12.856583   44220 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 00:34:12.856640   44220 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:34:12.866577   44220 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:34:12.876324   44220 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:34:12.885996   44220 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 00:34:12.896677   44220 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:34:12.906102   44220 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:34:12.916474   44220 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:34:12.926423   44220 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 00:34:12.935892   44220 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0924 00:34:12.936009   44220 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 00:34:12.945361   44220 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 00:34:13.101417   44220 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 00:34:17.018630   44220 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.917174045s)
	I0924 00:34:17.018669   44220 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 00:34:17.018727   44220 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 00:34:17.023611   44220 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0924 00:34:17.023638   44220 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0924 00:34:17.023648   44220 command_runner.go:130] > Device: 0,22	Inode: 1385        Links: 1
	I0924 00:34:17.023658   44220 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0924 00:34:17.023666   44220 command_runner.go:130] > Access: 2024-09-24 00:34:16.919541474 +0000
	I0924 00:34:17.023674   44220 command_runner.go:130] > Modify: 2024-09-24 00:34:16.882539011 +0000
	I0924 00:34:17.023682   44220 command_runner.go:130] > Change: 2024-09-24 00:34:16.882539011 +0000
	I0924 00:34:17.023693   44220 command_runner.go:130] >  Birth: -
	I0924 00:34:17.023715   44220 start.go:563] Will wait 60s for crictl version
	I0924 00:34:17.023760   44220 ssh_runner.go:195] Run: which crictl
	I0924 00:34:17.027738   44220 command_runner.go:130] > /usr/bin/crictl
	I0924 00:34:17.027810   44220 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 00:34:17.068130   44220 command_runner.go:130] > Version:  0.1.0
	I0924 00:34:17.068158   44220 command_runner.go:130] > RuntimeName:  cri-o
	I0924 00:34:17.068164   44220 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0924 00:34:17.068171   44220 command_runner.go:130] > RuntimeApiVersion:  v1
	I0924 00:34:17.069157   44220 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 00:34:17.069225   44220 ssh_runner.go:195] Run: crio --version
	I0924 00:34:17.099071   44220 command_runner.go:130] > crio version 1.29.1
	I0924 00:34:17.099100   44220 command_runner.go:130] > Version:        1.29.1
	I0924 00:34:17.099109   44220 command_runner.go:130] > GitCommit:      unknown
	I0924 00:34:17.099120   44220 command_runner.go:130] > GitCommitDate:  unknown
	I0924 00:34:17.099126   44220 command_runner.go:130] > GitTreeState:   clean
	I0924 00:34:17.099134   44220 command_runner.go:130] > BuildDate:      2024-09-23T21:42:27Z
	I0924 00:34:17.099140   44220 command_runner.go:130] > GoVersion:      go1.21.6
	I0924 00:34:17.099145   44220 command_runner.go:130] > Compiler:       gc
	I0924 00:34:17.099151   44220 command_runner.go:130] > Platform:       linux/amd64
	I0924 00:34:17.099157   44220 command_runner.go:130] > Linkmode:       dynamic
	I0924 00:34:17.099180   44220 command_runner.go:130] > BuildTags:      
	I0924 00:34:17.099192   44220 command_runner.go:130] >   containers_image_ostree_stub
	I0924 00:34:17.099199   44220 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0924 00:34:17.099204   44220 command_runner.go:130] >   btrfs_noversion
	I0924 00:34:17.099212   44220 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0924 00:34:17.099223   44220 command_runner.go:130] >   libdm_no_deferred_remove
	I0924 00:34:17.099229   44220 command_runner.go:130] >   seccomp
	I0924 00:34:17.099240   44220 command_runner.go:130] > LDFlags:          unknown
	I0924 00:34:17.099248   44220 command_runner.go:130] > SeccompEnabled:   true
	I0924 00:34:17.099253   44220 command_runner.go:130] > AppArmorEnabled:  false
	I0924 00:34:17.099320   44220 ssh_runner.go:195] Run: crio --version
	I0924 00:34:17.134179   44220 command_runner.go:130] > crio version 1.29.1
	I0924 00:34:17.134210   44220 command_runner.go:130] > Version:        1.29.1
	I0924 00:34:17.134220   44220 command_runner.go:130] > GitCommit:      unknown
	I0924 00:34:17.134228   44220 command_runner.go:130] > GitCommitDate:  unknown
	I0924 00:34:17.134236   44220 command_runner.go:130] > GitTreeState:   clean
	I0924 00:34:17.134245   44220 command_runner.go:130] > BuildDate:      2024-09-23T21:42:27Z
	I0924 00:34:17.134259   44220 command_runner.go:130] > GoVersion:      go1.21.6
	I0924 00:34:17.134271   44220 command_runner.go:130] > Compiler:       gc
	I0924 00:34:17.134279   44220 command_runner.go:130] > Platform:       linux/amd64
	I0924 00:34:17.134287   44220 command_runner.go:130] > Linkmode:       dynamic
	I0924 00:34:17.134299   44220 command_runner.go:130] > BuildTags:      
	I0924 00:34:17.134309   44220 command_runner.go:130] >   containers_image_ostree_stub
	I0924 00:34:17.134315   44220 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0924 00:34:17.134320   44220 command_runner.go:130] >   btrfs_noversion
	I0924 00:34:17.134328   44220 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0924 00:34:17.134342   44220 command_runner.go:130] >   libdm_no_deferred_remove
	I0924 00:34:17.134352   44220 command_runner.go:130] >   seccomp
	I0924 00:34:17.134360   44220 command_runner.go:130] > LDFlags:          unknown
	I0924 00:34:17.134368   44220 command_runner.go:130] > SeccompEnabled:   true
	I0924 00:34:17.134379   44220 command_runner.go:130] > AppArmorEnabled:  false
	I0924 00:34:17.136353   44220 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 00:34:17.137483   44220 main.go:141] libmachine: (multinode-246036) Calling .GetIP
	I0924 00:34:17.140139   44220 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:34:17.140497   44220 main.go:141] libmachine: (multinode-246036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:54:2a", ip: ""} in network mk-multinode-246036: {Iface:virbr1 ExpiryTime:2024-09-24 01:27:24 +0000 UTC Type:0 Mac:52:54:00:a5:54:2a Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:multinode-246036 Clientid:01:52:54:00:a5:54:2a}
	I0924 00:34:17.140526   44220 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined IP address 192.168.39.199 and MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:34:17.140861   44220 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0924 00:34:17.144846   44220 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0924 00:34:17.144984   44220 kubeadm.go:883] updating cluster {Name:multinode-246036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-246036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.199 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.150 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.185 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
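	(Editor's note) For readers skimming the cluster dump above, each entry in Nodes boils down to a handful of fields. A minimal Go sketch of that shape, with values copied from the log line; the struct name and layout are illustrative assumptions, not minikube's actual config types:

	package main

	import "fmt"

	// Node mirrors the per-node fields visible in the "updating cluster" log line above.
	// Illustrative only; not minikube's real config package.
	type Node struct {
		Name              string
		IP                string
		Port              int
		KubernetesVersion string
		ContainerRuntime  string
		ControlPlane      bool
		Worker            bool
	}

	func main() {
		nodes := []Node{
			{Name: "", IP: "192.168.39.199", Port: 8443, KubernetesVersion: "v1.31.1", ContainerRuntime: "crio", ControlPlane: true, Worker: true},
			{Name: "m02", IP: "192.168.39.150", Port: 8443, KubernetesVersion: "v1.31.1", ContainerRuntime: "crio", ControlPlane: false, Worker: true},
			{Name: "m03", IP: "192.168.39.185", Port: 0, KubernetesVersion: "v1.31.1", ContainerRuntime: "crio", ControlPlane: false, Worker: true},
		}
		for _, n := range nodes {
			fmt.Printf("%-4s %-15s port=%d control-plane=%v\n", n.Name, n.IP, n.Port, n.ControlPlane)
		}
	}

	The entry with ControlPlane:true is the primary control-plane node (empty Name); m02 and m03 are workers.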
	I0924 00:34:17.145132   44220 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 00:34:17.145181   44220 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 00:34:17.184105   44220 command_runner.go:130] > {
	I0924 00:34:17.184134   44220 command_runner.go:130] >   "images": [
	I0924 00:34:17.184141   44220 command_runner.go:130] >     {
	I0924 00:34:17.184153   44220 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0924 00:34:17.184158   44220 command_runner.go:130] >       "repoTags": [
	I0924 00:34:17.184164   44220 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0924 00:34:17.184168   44220 command_runner.go:130] >       ],
	I0924 00:34:17.184173   44220 command_runner.go:130] >       "repoDigests": [
	I0924 00:34:17.184180   44220 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0924 00:34:17.184188   44220 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0924 00:34:17.184192   44220 command_runner.go:130] >       ],
	I0924 00:34:17.184197   44220 command_runner.go:130] >       "size": "87190579",
	I0924 00:34:17.184201   44220 command_runner.go:130] >       "uid": null,
	I0924 00:34:17.184208   44220 command_runner.go:130] >       "username": "",
	I0924 00:34:17.184222   44220 command_runner.go:130] >       "spec": null,
	I0924 00:34:17.184233   44220 command_runner.go:130] >       "pinned": false
	I0924 00:34:17.184242   44220 command_runner.go:130] >     },
	I0924 00:34:17.184247   44220 command_runner.go:130] >     {
	I0924 00:34:17.184257   44220 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0924 00:34:17.184264   44220 command_runner.go:130] >       "repoTags": [
	I0924 00:34:17.184276   44220 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0924 00:34:17.184282   44220 command_runner.go:130] >       ],
	I0924 00:34:17.184288   44220 command_runner.go:130] >       "repoDigests": [
	I0924 00:34:17.184299   44220 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0924 00:34:17.184314   44220 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0924 00:34:17.184325   44220 command_runner.go:130] >       ],
	I0924 00:34:17.184350   44220 command_runner.go:130] >       "size": "1363676",
	I0924 00:34:17.184360   44220 command_runner.go:130] >       "uid": null,
	I0924 00:34:17.184372   44220 command_runner.go:130] >       "username": "",
	I0924 00:34:17.184381   44220 command_runner.go:130] >       "spec": null,
	I0924 00:34:17.184389   44220 command_runner.go:130] >       "pinned": false
	I0924 00:34:17.184393   44220 command_runner.go:130] >     },
	I0924 00:34:17.184397   44220 command_runner.go:130] >     {
	I0924 00:34:17.184403   44220 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0924 00:34:17.184412   44220 command_runner.go:130] >       "repoTags": [
	I0924 00:34:17.184423   44220 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0924 00:34:17.184432   44220 command_runner.go:130] >       ],
	I0924 00:34:17.184441   44220 command_runner.go:130] >       "repoDigests": [
	I0924 00:34:17.184456   44220 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0924 00:34:17.184471   44220 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0924 00:34:17.184480   44220 command_runner.go:130] >       ],
	I0924 00:34:17.184488   44220 command_runner.go:130] >       "size": "31470524",
	I0924 00:34:17.184492   44220 command_runner.go:130] >       "uid": null,
	I0924 00:34:17.184501   44220 command_runner.go:130] >       "username": "",
	I0924 00:34:17.184510   44220 command_runner.go:130] >       "spec": null,
	I0924 00:34:17.184520   44220 command_runner.go:130] >       "pinned": false
	I0924 00:34:17.184529   44220 command_runner.go:130] >     },
	I0924 00:34:17.184537   44220 command_runner.go:130] >     {
	I0924 00:34:17.184549   44220 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0924 00:34:17.184559   44220 command_runner.go:130] >       "repoTags": [
	I0924 00:34:17.184569   44220 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0924 00:34:17.184575   44220 command_runner.go:130] >       ],
	I0924 00:34:17.184579   44220 command_runner.go:130] >       "repoDigests": [
	I0924 00:34:17.184591   44220 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0924 00:34:17.184610   44220 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0924 00:34:17.184622   44220 command_runner.go:130] >       ],
	I0924 00:34:17.184631   44220 command_runner.go:130] >       "size": "63273227",
	I0924 00:34:17.184641   44220 command_runner.go:130] >       "uid": null,
	I0924 00:34:17.184650   44220 command_runner.go:130] >       "username": "nonroot",
	I0924 00:34:17.184659   44220 command_runner.go:130] >       "spec": null,
	I0924 00:34:17.184666   44220 command_runner.go:130] >       "pinned": false
	I0924 00:34:17.184672   44220 command_runner.go:130] >     },
	I0924 00:34:17.184681   44220 command_runner.go:130] >     {
	I0924 00:34:17.184691   44220 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0924 00:34:17.184701   44220 command_runner.go:130] >       "repoTags": [
	I0924 00:34:17.184712   44220 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0924 00:34:17.184720   44220 command_runner.go:130] >       ],
	I0924 00:34:17.184730   44220 command_runner.go:130] >       "repoDigests": [
	I0924 00:34:17.184742   44220 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0924 00:34:17.184753   44220 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0924 00:34:17.184761   44220 command_runner.go:130] >       ],
	I0924 00:34:17.184771   44220 command_runner.go:130] >       "size": "149009664",
	I0924 00:34:17.184780   44220 command_runner.go:130] >       "uid": {
	I0924 00:34:17.184787   44220 command_runner.go:130] >         "value": "0"
	I0924 00:34:17.184795   44220 command_runner.go:130] >       },
	I0924 00:34:17.184804   44220 command_runner.go:130] >       "username": "",
	I0924 00:34:17.184812   44220 command_runner.go:130] >       "spec": null,
	I0924 00:34:17.184821   44220 command_runner.go:130] >       "pinned": false
	I0924 00:34:17.184828   44220 command_runner.go:130] >     },
	I0924 00:34:17.184830   44220 command_runner.go:130] >     {
	I0924 00:34:17.184839   44220 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0924 00:34:17.184848   44220 command_runner.go:130] >       "repoTags": [
	I0924 00:34:17.184859   44220 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0924 00:34:17.184868   44220 command_runner.go:130] >       ],
	I0924 00:34:17.184877   44220 command_runner.go:130] >       "repoDigests": [
	I0924 00:34:17.184889   44220 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0924 00:34:17.184903   44220 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0924 00:34:17.184911   44220 command_runner.go:130] >       ],
	I0924 00:34:17.184919   44220 command_runner.go:130] >       "size": "95237600",
	I0924 00:34:17.184925   44220 command_runner.go:130] >       "uid": {
	I0924 00:34:17.184934   44220 command_runner.go:130] >         "value": "0"
	I0924 00:34:17.184942   44220 command_runner.go:130] >       },
	I0924 00:34:17.184952   44220 command_runner.go:130] >       "username": "",
	I0924 00:34:17.184961   44220 command_runner.go:130] >       "spec": null,
	I0924 00:34:17.184970   44220 command_runner.go:130] >       "pinned": false
	I0924 00:34:17.184975   44220 command_runner.go:130] >     },
	I0924 00:34:17.184984   44220 command_runner.go:130] >     {
	I0924 00:34:17.184996   44220 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0924 00:34:17.185002   44220 command_runner.go:130] >       "repoTags": [
	I0924 00:34:17.185010   44220 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0924 00:34:17.185018   44220 command_runner.go:130] >       ],
	I0924 00:34:17.185034   44220 command_runner.go:130] >       "repoDigests": [
	I0924 00:34:17.185048   44220 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0924 00:34:17.185066   44220 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0924 00:34:17.185075   44220 command_runner.go:130] >       ],
	I0924 00:34:17.185081   44220 command_runner.go:130] >       "size": "89437508",
	I0924 00:34:17.185088   44220 command_runner.go:130] >       "uid": {
	I0924 00:34:17.185094   44220 command_runner.go:130] >         "value": "0"
	I0924 00:34:17.185102   44220 command_runner.go:130] >       },
	I0924 00:34:17.185111   44220 command_runner.go:130] >       "username": "",
	I0924 00:34:17.185121   44220 command_runner.go:130] >       "spec": null,
	I0924 00:34:17.185130   44220 command_runner.go:130] >       "pinned": false
	I0924 00:34:17.185138   44220 command_runner.go:130] >     },
	I0924 00:34:17.185146   44220 command_runner.go:130] >     {
	I0924 00:34:17.185157   44220 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0924 00:34:17.185165   44220 command_runner.go:130] >       "repoTags": [
	I0924 00:34:17.185173   44220 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0924 00:34:17.185177   44220 command_runner.go:130] >       ],
	I0924 00:34:17.185185   44220 command_runner.go:130] >       "repoDigests": [
	I0924 00:34:17.185207   44220 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0924 00:34:17.185221   44220 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0924 00:34:17.185231   44220 command_runner.go:130] >       ],
	I0924 00:34:17.185241   44220 command_runner.go:130] >       "size": "92733849",
	I0924 00:34:17.185250   44220 command_runner.go:130] >       "uid": null,
	I0924 00:34:17.185257   44220 command_runner.go:130] >       "username": "",
	I0924 00:34:17.185261   44220 command_runner.go:130] >       "spec": null,
	I0924 00:34:17.185266   44220 command_runner.go:130] >       "pinned": false
	I0924 00:34:17.185289   44220 command_runner.go:130] >     },
	I0924 00:34:17.185295   44220 command_runner.go:130] >     {
	I0924 00:34:17.185305   44220 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0924 00:34:17.185312   44220 command_runner.go:130] >       "repoTags": [
	I0924 00:34:17.185319   44220 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0924 00:34:17.185327   44220 command_runner.go:130] >       ],
	I0924 00:34:17.185337   44220 command_runner.go:130] >       "repoDigests": [
	I0924 00:34:17.185360   44220 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0924 00:34:17.185376   44220 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0924 00:34:17.185384   44220 command_runner.go:130] >       ],
	I0924 00:34:17.185391   44220 command_runner.go:130] >       "size": "68420934",
	I0924 00:34:17.185397   44220 command_runner.go:130] >       "uid": {
	I0924 00:34:17.185404   44220 command_runner.go:130] >         "value": "0"
	I0924 00:34:17.185412   44220 command_runner.go:130] >       },
	I0924 00:34:17.185419   44220 command_runner.go:130] >       "username": "",
	I0924 00:34:17.185426   44220 command_runner.go:130] >       "spec": null,
	I0924 00:34:17.185430   44220 command_runner.go:130] >       "pinned": false
	I0924 00:34:17.185433   44220 command_runner.go:130] >     },
	I0924 00:34:17.185439   44220 command_runner.go:130] >     {
	I0924 00:34:17.185452   44220 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0924 00:34:17.185463   44220 command_runner.go:130] >       "repoTags": [
	I0924 00:34:17.185470   44220 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0924 00:34:17.185474   44220 command_runner.go:130] >       ],
	I0924 00:34:17.185480   44220 command_runner.go:130] >       "repoDigests": [
	I0924 00:34:17.185490   44220 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0924 00:34:17.185502   44220 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0924 00:34:17.185511   44220 command_runner.go:130] >       ],
	I0924 00:34:17.185521   44220 command_runner.go:130] >       "size": "742080",
	I0924 00:34:17.185529   44220 command_runner.go:130] >       "uid": {
	I0924 00:34:17.185534   44220 command_runner.go:130] >         "value": "65535"
	I0924 00:34:17.185540   44220 command_runner.go:130] >       },
	I0924 00:34:17.185548   44220 command_runner.go:130] >       "username": "",
	I0924 00:34:17.185554   44220 command_runner.go:130] >       "spec": null,
	I0924 00:34:17.185561   44220 command_runner.go:130] >       "pinned": true
	I0924 00:34:17.185567   44220 command_runner.go:130] >     }
	I0924 00:34:17.185573   44220 command_runner.go:130] >   ]
	I0924 00:34:17.185578   44220 command_runner.go:130] > }
	I0924 00:34:17.185821   44220 crio.go:514] all images are preloaded for cri-o runtime.
	I0924 00:34:17.185836   44220 crio.go:433] Images already preloaded, skipping extraction
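	(Editor's note) The `sudo crictl images --output json` runs above are how the preload check concludes that "all images are preloaded" and extraction can be skipped. A minimal, self-contained Go sketch of the same idea, assuming the JSON shape shown in the log; the struct and the `want` list are illustrative, not minikube's crio.go/cache_images.go code, and it assumes crictl is installed on the machine it runs on:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
		"strings"
	)

	// imageList matches the shape of `crictl images --output json` seen in the log above.
	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		// Run crictl locally; minikube itself runs this over SSH on the node.
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		// Check that the Kubernetes v1.31.1 control-plane images are present.
		want := []string{"kube-apiserver:v1.31.1", "kube-controller-manager:v1.31.1", "kube-scheduler:v1.31.1", "kube-proxy:v1.31.1"}
		for _, w := range want {
			found := false
			for _, img := range list.Images {
				for _, tag := range img.RepoTags {
					if strings.HasSuffix(tag, w) {
						found = true
					}
				}
			}
			fmt.Printf("%-40s preloaded=%v\n", w, found)
		}
	}

	Run against the image list dumped above, this should print preloaded=true for each of the v1.31.1 images.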
	I0924 00:34:17.185906   44220 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 00:34:17.216110   44220 command_runner.go:130] > {
	I0924 00:34:17.216132   44220 command_runner.go:130] >   "images": [
	I0924 00:34:17.216139   44220 command_runner.go:130] >     {
	I0924 00:34:17.216150   44220 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0924 00:34:17.216157   44220 command_runner.go:130] >       "repoTags": [
	I0924 00:34:17.216164   44220 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0924 00:34:17.216168   44220 command_runner.go:130] >       ],
	I0924 00:34:17.216175   44220 command_runner.go:130] >       "repoDigests": [
	I0924 00:34:17.216185   44220 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0924 00:34:17.216196   44220 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0924 00:34:17.216205   44220 command_runner.go:130] >       ],
	I0924 00:34:17.216213   44220 command_runner.go:130] >       "size": "87190579",
	I0924 00:34:17.216220   44220 command_runner.go:130] >       "uid": null,
	I0924 00:34:17.216226   44220 command_runner.go:130] >       "username": "",
	I0924 00:34:17.216250   44220 command_runner.go:130] >       "spec": null,
	I0924 00:34:17.216290   44220 command_runner.go:130] >       "pinned": false
	I0924 00:34:17.216299   44220 command_runner.go:130] >     },
	I0924 00:34:17.216306   44220 command_runner.go:130] >     {
	I0924 00:34:17.216316   44220 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0924 00:34:17.216335   44220 command_runner.go:130] >       "repoTags": [
	I0924 00:34:17.216346   44220 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0924 00:34:17.216354   44220 command_runner.go:130] >       ],
	I0924 00:34:17.216362   44220 command_runner.go:130] >       "repoDigests": [
	I0924 00:34:17.216374   44220 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0924 00:34:17.216387   44220 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0924 00:34:17.216394   44220 command_runner.go:130] >       ],
	I0924 00:34:17.216405   44220 command_runner.go:130] >       "size": "1363676",
	I0924 00:34:17.216413   44220 command_runner.go:130] >       "uid": null,
	I0924 00:34:17.216424   44220 command_runner.go:130] >       "username": "",
	I0924 00:34:17.216433   44220 command_runner.go:130] >       "spec": null,
	I0924 00:34:17.216445   44220 command_runner.go:130] >       "pinned": false
	I0924 00:34:17.216456   44220 command_runner.go:130] >     },
	I0924 00:34:17.216463   44220 command_runner.go:130] >     {
	I0924 00:34:17.216476   44220 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0924 00:34:17.216486   44220 command_runner.go:130] >       "repoTags": [
	I0924 00:34:17.216500   44220 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0924 00:34:17.216512   44220 command_runner.go:130] >       ],
	I0924 00:34:17.216518   44220 command_runner.go:130] >       "repoDigests": [
	I0924 00:34:17.216531   44220 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0924 00:34:17.216547   44220 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0924 00:34:17.216556   44220 command_runner.go:130] >       ],
	I0924 00:34:17.216564   44220 command_runner.go:130] >       "size": "31470524",
	I0924 00:34:17.216573   44220 command_runner.go:130] >       "uid": null,
	I0924 00:34:17.216581   44220 command_runner.go:130] >       "username": "",
	I0924 00:34:17.216590   44220 command_runner.go:130] >       "spec": null,
	I0924 00:34:17.216597   44220 command_runner.go:130] >       "pinned": false
	I0924 00:34:17.216605   44220 command_runner.go:130] >     },
	I0924 00:34:17.216610   44220 command_runner.go:130] >     {
	I0924 00:34:17.216621   44220 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0924 00:34:17.216630   44220 command_runner.go:130] >       "repoTags": [
	I0924 00:34:17.216639   44220 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0924 00:34:17.216648   44220 command_runner.go:130] >       ],
	I0924 00:34:17.216655   44220 command_runner.go:130] >       "repoDigests": [
	I0924 00:34:17.216671   44220 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0924 00:34:17.216690   44220 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0924 00:34:17.216698   44220 command_runner.go:130] >       ],
	I0924 00:34:17.216706   44220 command_runner.go:130] >       "size": "63273227",
	I0924 00:34:17.216715   44220 command_runner.go:130] >       "uid": null,
	I0924 00:34:17.216724   44220 command_runner.go:130] >       "username": "nonroot",
	I0924 00:34:17.216738   44220 command_runner.go:130] >       "spec": null,
	I0924 00:34:17.216749   44220 command_runner.go:130] >       "pinned": false
	I0924 00:34:17.216756   44220 command_runner.go:130] >     },
	I0924 00:34:17.216765   44220 command_runner.go:130] >     {
	I0924 00:34:17.216776   44220 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0924 00:34:17.216785   44220 command_runner.go:130] >       "repoTags": [
	I0924 00:34:17.216796   44220 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0924 00:34:17.216805   44220 command_runner.go:130] >       ],
	I0924 00:34:17.216811   44220 command_runner.go:130] >       "repoDigests": [
	I0924 00:34:17.216826   44220 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0924 00:34:17.216841   44220 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0924 00:34:17.216849   44220 command_runner.go:130] >       ],
	I0924 00:34:17.216857   44220 command_runner.go:130] >       "size": "149009664",
	I0924 00:34:17.216866   44220 command_runner.go:130] >       "uid": {
	I0924 00:34:17.216873   44220 command_runner.go:130] >         "value": "0"
	I0924 00:34:17.216881   44220 command_runner.go:130] >       },
	I0924 00:34:17.216889   44220 command_runner.go:130] >       "username": "",
	I0924 00:34:17.216899   44220 command_runner.go:130] >       "spec": null,
	I0924 00:34:17.216908   44220 command_runner.go:130] >       "pinned": false
	I0924 00:34:17.216914   44220 command_runner.go:130] >     },
	I0924 00:34:17.216923   44220 command_runner.go:130] >     {
	I0924 00:34:17.216935   44220 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0924 00:34:17.216945   44220 command_runner.go:130] >       "repoTags": [
	I0924 00:34:17.216955   44220 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0924 00:34:17.216963   44220 command_runner.go:130] >       ],
	I0924 00:34:17.216971   44220 command_runner.go:130] >       "repoDigests": [
	I0924 00:34:17.216986   44220 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0924 00:34:17.217000   44220 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0924 00:34:17.217009   44220 command_runner.go:130] >       ],
	I0924 00:34:17.217016   44220 command_runner.go:130] >       "size": "95237600",
	I0924 00:34:17.217025   44220 command_runner.go:130] >       "uid": {
	I0924 00:34:17.217032   44220 command_runner.go:130] >         "value": "0"
	I0924 00:34:17.217041   44220 command_runner.go:130] >       },
	I0924 00:34:17.217049   44220 command_runner.go:130] >       "username": "",
	I0924 00:34:17.217058   44220 command_runner.go:130] >       "spec": null,
	I0924 00:34:17.217066   44220 command_runner.go:130] >       "pinned": false
	I0924 00:34:17.217074   44220 command_runner.go:130] >     },
	I0924 00:34:17.217081   44220 command_runner.go:130] >     {
	I0924 00:34:17.217093   44220 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0924 00:34:17.217101   44220 command_runner.go:130] >       "repoTags": [
	I0924 00:34:17.217111   44220 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0924 00:34:17.217120   44220 command_runner.go:130] >       ],
	I0924 00:34:17.217128   44220 command_runner.go:130] >       "repoDigests": [
	I0924 00:34:17.217144   44220 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0924 00:34:17.217159   44220 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0924 00:34:17.217170   44220 command_runner.go:130] >       ],
	I0924 00:34:17.217180   44220 command_runner.go:130] >       "size": "89437508",
	I0924 00:34:17.217188   44220 command_runner.go:130] >       "uid": {
	I0924 00:34:17.217198   44220 command_runner.go:130] >         "value": "0"
	I0924 00:34:17.217207   44220 command_runner.go:130] >       },
	I0924 00:34:17.217214   44220 command_runner.go:130] >       "username": "",
	I0924 00:34:17.217225   44220 command_runner.go:130] >       "spec": null,
	I0924 00:34:17.217236   44220 command_runner.go:130] >       "pinned": false
	I0924 00:34:17.217245   44220 command_runner.go:130] >     },
	I0924 00:34:17.217251   44220 command_runner.go:130] >     {
	I0924 00:34:17.217262   44220 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0924 00:34:17.217276   44220 command_runner.go:130] >       "repoTags": [
	I0924 00:34:17.217286   44220 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0924 00:34:17.217294   44220 command_runner.go:130] >       ],
	I0924 00:34:17.217301   44220 command_runner.go:130] >       "repoDigests": [
	I0924 00:34:17.217349   44220 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0924 00:34:17.217365   44220 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0924 00:34:17.217371   44220 command_runner.go:130] >       ],
	I0924 00:34:17.217377   44220 command_runner.go:130] >       "size": "92733849",
	I0924 00:34:17.217385   44220 command_runner.go:130] >       "uid": null,
	I0924 00:34:17.217394   44220 command_runner.go:130] >       "username": "",
	I0924 00:34:17.217403   44220 command_runner.go:130] >       "spec": null,
	I0924 00:34:17.217412   44220 command_runner.go:130] >       "pinned": false
	I0924 00:34:17.217426   44220 command_runner.go:130] >     },
	I0924 00:34:17.217435   44220 command_runner.go:130] >     {
	I0924 00:34:17.217446   44220 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0924 00:34:17.217455   44220 command_runner.go:130] >       "repoTags": [
	I0924 00:34:17.217464   44220 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0924 00:34:17.217472   44220 command_runner.go:130] >       ],
	I0924 00:34:17.217479   44220 command_runner.go:130] >       "repoDigests": [
	I0924 00:34:17.217494   44220 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0924 00:34:17.217510   44220 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0924 00:34:17.217518   44220 command_runner.go:130] >       ],
	I0924 00:34:17.217526   44220 command_runner.go:130] >       "size": "68420934",
	I0924 00:34:17.217534   44220 command_runner.go:130] >       "uid": {
	I0924 00:34:17.217542   44220 command_runner.go:130] >         "value": "0"
	I0924 00:34:17.217551   44220 command_runner.go:130] >       },
	I0924 00:34:17.217560   44220 command_runner.go:130] >       "username": "",
	I0924 00:34:17.217569   44220 command_runner.go:130] >       "spec": null,
	I0924 00:34:17.217579   44220 command_runner.go:130] >       "pinned": false
	I0924 00:34:17.217587   44220 command_runner.go:130] >     },
	I0924 00:34:17.217593   44220 command_runner.go:130] >     {
	I0924 00:34:17.217606   44220 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0924 00:34:17.217615   44220 command_runner.go:130] >       "repoTags": [
	I0924 00:34:17.217624   44220 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0924 00:34:17.217632   44220 command_runner.go:130] >       ],
	I0924 00:34:17.217640   44220 command_runner.go:130] >       "repoDigests": [
	I0924 00:34:17.217654   44220 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0924 00:34:17.217673   44220 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0924 00:34:17.217681   44220 command_runner.go:130] >       ],
	I0924 00:34:17.217688   44220 command_runner.go:130] >       "size": "742080",
	I0924 00:34:17.217697   44220 command_runner.go:130] >       "uid": {
	I0924 00:34:17.217705   44220 command_runner.go:130] >         "value": "65535"
	I0924 00:34:17.217714   44220 command_runner.go:130] >       },
	I0924 00:34:17.217723   44220 command_runner.go:130] >       "username": "",
	I0924 00:34:17.217732   44220 command_runner.go:130] >       "spec": null,
	I0924 00:34:17.217739   44220 command_runner.go:130] >       "pinned": true
	I0924 00:34:17.217748   44220 command_runner.go:130] >     }
	I0924 00:34:17.217754   44220 command_runner.go:130] >   ]
	I0924 00:34:17.217762   44220 command_runner.go:130] > }
	I0924 00:34:17.218227   44220 crio.go:514] all images are preloaded for cri-o runtime.
	I0924 00:34:17.218242   44220 cache_images.go:84] Images are preloaded, skipping loading
	I0924 00:34:17.218250   44220 kubeadm.go:934] updating node { 192.168.39.199 8443 v1.31.1 crio true true} ...
	I0924 00:34:17.218386   44220 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-246036 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.199
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-246036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
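	(Editor's note) The kubelet drop-in printed by kubeadm.go:946 above is just the node's name, IP and Kubernetes version substituted into a fixed [Service] template. A small Go sketch that reproduces the same output with text/template; the template string and field names are illustrative assumptions, not minikube's actual template:

	package main

	import (
		"os"
		"text/template"
	)

	// kubeletUnit reproduces the drop-in shape shown in the log above (illustrative).
	const kubeletUnit = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(kubeletUnit))
		_ = t.Execute(os.Stdout, map[string]string{
			"KubernetesVersion": "v1.31.1",
			"Hostname":          "multinode-246036",
			"NodeIP":            "192.168.39.199",
		})
	}

	With the values from the log (multinode-246036, 192.168.39.199, v1.31.1) this renders the same ExecStart line as shown above.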
	I0924 00:34:17.218459   44220 ssh_runner.go:195] Run: crio config
	I0924 00:34:17.259799   44220 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0924 00:34:17.259829   44220 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0924 00:34:17.259841   44220 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0924 00:34:17.259846   44220 command_runner.go:130] > #
	I0924 00:34:17.259856   44220 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0924 00:34:17.259865   44220 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0924 00:34:17.259874   44220 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0924 00:34:17.259886   44220 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0924 00:34:17.259893   44220 command_runner.go:130] > # reload'.
	I0924 00:34:17.259907   44220 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0924 00:34:17.259917   44220 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0924 00:34:17.259930   44220 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0924 00:34:17.259939   44220 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0924 00:34:17.259948   44220 command_runner.go:130] > [crio]
	I0924 00:34:17.259956   44220 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0924 00:34:17.259966   44220 command_runner.go:130] > # containers images, in this directory.
	I0924 00:34:17.259974   44220 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0924 00:34:17.259995   44220 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0924 00:34:17.260008   44220 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0924 00:34:17.260019   44220 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0924 00:34:17.260027   44220 command_runner.go:130] > # imagestore = ""
	I0924 00:34:17.260039   44220 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0924 00:34:17.260052   44220 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0924 00:34:17.260062   44220 command_runner.go:130] > storage_driver = "overlay"
	I0924 00:34:17.260071   44220 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0924 00:34:17.260084   44220 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0924 00:34:17.260094   44220 command_runner.go:130] > storage_option = [
	I0924 00:34:17.260101   44220 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0924 00:34:17.260109   44220 command_runner.go:130] > ]
	I0924 00:34:17.260121   44220 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0924 00:34:17.260136   44220 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0924 00:34:17.260143   44220 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0924 00:34:17.260152   44220 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0924 00:34:17.260166   44220 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0924 00:34:17.260174   44220 command_runner.go:130] > # always happen on a node reboot
	I0924 00:34:17.260183   44220 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0924 00:34:17.260203   44220 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0924 00:34:17.260215   44220 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0924 00:34:17.260228   44220 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0924 00:34:17.260236   44220 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0924 00:34:17.260250   44220 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0924 00:34:17.260265   44220 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0924 00:34:17.260272   44220 command_runner.go:130] > # internal_wipe = true
	I0924 00:34:17.260283   44220 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0924 00:34:17.260293   44220 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0924 00:34:17.260307   44220 command_runner.go:130] > # internal_repair = false
	I0924 00:34:17.260319   44220 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0924 00:34:17.260347   44220 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0924 00:34:17.260358   44220 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0924 00:34:17.260370   44220 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0924 00:34:17.260382   44220 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0924 00:34:17.260387   44220 command_runner.go:130] > [crio.api]
	I0924 00:34:17.260400   44220 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0924 00:34:17.260408   44220 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0924 00:34:17.260421   44220 command_runner.go:130] > # IP address on which the stream server will listen.
	I0924 00:34:17.260429   44220 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0924 00:34:17.260440   44220 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0924 00:34:17.260450   44220 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0924 00:34:17.260456   44220 command_runner.go:130] > # stream_port = "0"
	I0924 00:34:17.260466   44220 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0924 00:34:17.260473   44220 command_runner.go:130] > # stream_enable_tls = false
	I0924 00:34:17.260483   44220 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0924 00:34:17.260492   44220 command_runner.go:130] > # stream_idle_timeout = ""
	I0924 00:34:17.260505   44220 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0924 00:34:17.260515   44220 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0924 00:34:17.260525   44220 command_runner.go:130] > # minutes.
	I0924 00:34:17.260531   44220 command_runner.go:130] > # stream_tls_cert = ""
	I0924 00:34:17.260543   44220 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0924 00:34:17.260554   44220 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0924 00:34:17.260564   44220 command_runner.go:130] > # stream_tls_key = ""
	I0924 00:34:17.260593   44220 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0924 00:34:17.260608   44220 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0924 00:34:17.260628   44220 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0924 00:34:17.260638   44220 command_runner.go:130] > # stream_tls_ca = ""
	I0924 00:34:17.260649   44220 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0924 00:34:17.260659   44220 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0924 00:34:17.260670   44220 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0924 00:34:17.260680   44220 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0924 00:34:17.260690   44220 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0924 00:34:17.260702   44220 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0924 00:34:17.260714   44220 command_runner.go:130] > [crio.runtime]
	I0924 00:34:17.260724   44220 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0924 00:34:17.260735   44220 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0924 00:34:17.260743   44220 command_runner.go:130] > # "nofile=1024:2048"
	I0924 00:34:17.260755   44220 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0924 00:34:17.260765   44220 command_runner.go:130] > # default_ulimits = [
	I0924 00:34:17.260770   44220 command_runner.go:130] > # ]
	I0924 00:34:17.260783   44220 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0924 00:34:17.260791   44220 command_runner.go:130] > # no_pivot = false
	I0924 00:34:17.260804   44220 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0924 00:34:17.260816   44220 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0924 00:34:17.260823   44220 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0924 00:34:17.260837   44220 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0924 00:34:17.260849   44220 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0924 00:34:17.260861   44220 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0924 00:34:17.260878   44220 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0924 00:34:17.260888   44220 command_runner.go:130] > # Cgroup setting for conmon
	I0924 00:34:17.260898   44220 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0924 00:34:17.260909   44220 command_runner.go:130] > conmon_cgroup = "pod"
	I0924 00:34:17.260918   44220 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0924 00:34:17.260929   44220 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0924 00:34:17.260939   44220 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0924 00:34:17.260948   44220 command_runner.go:130] > conmon_env = [
	I0924 00:34:17.260957   44220 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0924 00:34:17.260965   44220 command_runner.go:130] > ]
	I0924 00:34:17.260974   44220 command_runner.go:130] > # Additional environment variables to set for all the
	I0924 00:34:17.260984   44220 command_runner.go:130] > # containers. These are overridden if set in the
	I0924 00:34:17.260998   44220 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0924 00:34:17.261008   44220 command_runner.go:130] > # default_env = [
	I0924 00:34:17.261014   44220 command_runner.go:130] > # ]
	I0924 00:34:17.261023   44220 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0924 00:34:17.261034   44220 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0924 00:34:17.261044   44220 command_runner.go:130] > # selinux = false
	I0924 00:34:17.261053   44220 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0924 00:34:17.261065   44220 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0924 00:34:17.261077   44220 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0924 00:34:17.261088   44220 command_runner.go:130] > # seccomp_profile = ""
	I0924 00:34:17.261099   44220 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0924 00:34:17.261108   44220 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0924 00:34:17.261120   44220 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0924 00:34:17.261130   44220 command_runner.go:130] > # which might increase security.
	I0924 00:34:17.261137   44220 command_runner.go:130] > # This option is currently deprecated,
	I0924 00:34:17.261148   44220 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0924 00:34:17.261162   44220 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0924 00:34:17.261176   44220 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0924 00:34:17.261186   44220 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0924 00:34:17.261199   44220 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0924 00:34:17.261212   44220 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0924 00:34:17.261240   44220 command_runner.go:130] > # This option supports live configuration reload.
	I0924 00:34:17.261255   44220 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0924 00:34:17.261267   44220 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0924 00:34:17.261275   44220 command_runner.go:130] > # the cgroup blockio controller.
	I0924 00:34:17.261282   44220 command_runner.go:130] > # blockio_config_file = ""
	I0924 00:34:17.261296   44220 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0924 00:34:17.261306   44220 command_runner.go:130] > # blockio parameters.
	I0924 00:34:17.261313   44220 command_runner.go:130] > # blockio_reload = false
	I0924 00:34:17.261324   44220 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0924 00:34:17.261332   44220 command_runner.go:130] > # irqbalance daemon.
	I0924 00:34:17.261340   44220 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0924 00:34:17.261353   44220 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0924 00:34:17.261365   44220 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0924 00:34:17.261378   44220 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0924 00:34:17.261387   44220 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0924 00:34:17.261400   44220 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0924 00:34:17.261408   44220 command_runner.go:130] > # This option supports live configuration reload.
	I0924 00:34:17.261417   44220 command_runner.go:130] > # rdt_config_file = ""
	I0924 00:34:17.261426   44220 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0924 00:34:17.261435   44220 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0924 00:34:17.261507   44220 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0924 00:34:17.261525   44220 command_runner.go:130] > # separate_pull_cgroup = ""
	I0924 00:34:17.261535   44220 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0924 00:34:17.261547   44220 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0924 00:34:17.261556   44220 command_runner.go:130] > # will be added.
	I0924 00:34:17.261564   44220 command_runner.go:130] > # default_capabilities = [
	I0924 00:34:17.261578   44220 command_runner.go:130] > # 	"CHOWN",
	I0924 00:34:17.261585   44220 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0924 00:34:17.261595   44220 command_runner.go:130] > # 	"FSETID",
	I0924 00:34:17.261601   44220 command_runner.go:130] > # 	"FOWNER",
	I0924 00:34:17.261610   44220 command_runner.go:130] > # 	"SETGID",
	I0924 00:34:17.261615   44220 command_runner.go:130] > # 	"SETUID",
	I0924 00:34:17.261621   44220 command_runner.go:130] > # 	"SETPCAP",
	I0924 00:34:17.261629   44220 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0924 00:34:17.261638   44220 command_runner.go:130] > # 	"KILL",
	I0924 00:34:17.261643   44220 command_runner.go:130] > # ]
	I0924 00:34:17.261655   44220 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0924 00:34:17.261668   44220 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0924 00:34:17.261678   44220 command_runner.go:130] > # add_inheritable_capabilities = false
	I0924 00:34:17.261688   44220 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0924 00:34:17.261699   44220 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0924 00:34:17.261708   44220 command_runner.go:130] > default_sysctls = [
	I0924 00:34:17.261719   44220 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0924 00:34:17.261727   44220 command_runner.go:130] > ]
	I0924 00:34:17.261734   44220 command_runner.go:130] > # List of devices on the host that a
	I0924 00:34:17.261746   44220 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0924 00:34:17.261753   44220 command_runner.go:130] > # allowed_devices = [
	I0924 00:34:17.261762   44220 command_runner.go:130] > # 	"/dev/fuse",
	I0924 00:34:17.261767   44220 command_runner.go:130] > # ]
	I0924 00:34:17.261776   44220 command_runner.go:130] > # List of additional devices. specified as
	I0924 00:34:17.261787   44220 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0924 00:34:17.261799   44220 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0924 00:34:17.261810   44220 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0924 00:34:17.261820   44220 command_runner.go:130] > # additional_devices = [
	I0924 00:34:17.261825   44220 command_runner.go:130] > # ]
	I0924 00:34:17.261838   44220 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0924 00:34:17.261846   44220 command_runner.go:130] > # cdi_spec_dirs = [
	I0924 00:34:17.261853   44220 command_runner.go:130] > # 	"/etc/cdi",
	I0924 00:34:17.261859   44220 command_runner.go:130] > # 	"/var/run/cdi",
	I0924 00:34:17.261868   44220 command_runner.go:130] > # ]
	I0924 00:34:17.261877   44220 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0924 00:34:17.261890   44220 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0924 00:34:17.261899   44220 command_runner.go:130] > # Defaults to false.
	I0924 00:34:17.261907   44220 command_runner.go:130] > # device_ownership_from_security_context = false
	I0924 00:34:17.261920   44220 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0924 00:34:17.261932   44220 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0924 00:34:17.261942   44220 command_runner.go:130] > # hooks_dir = [
	I0924 00:34:17.261950   44220 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0924 00:34:17.261958   44220 command_runner.go:130] > # ]
	I0924 00:34:17.261969   44220 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0924 00:34:17.261982   44220 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0924 00:34:17.261993   44220 command_runner.go:130] > # its default mounts from the following two files:
	I0924 00:34:17.261999   44220 command_runner.go:130] > #
	I0924 00:34:17.262011   44220 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0924 00:34:17.262023   44220 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0924 00:34:17.262035   44220 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0924 00:34:17.262042   44220 command_runner.go:130] > #
	I0924 00:34:17.262051   44220 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0924 00:34:17.262062   44220 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0924 00:34:17.262074   44220 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0924 00:34:17.262085   44220 command_runner.go:130] > #      only add mounts it finds in this file.
	I0924 00:34:17.262093   44220 command_runner.go:130] > #
	I0924 00:34:17.262099   44220 command_runner.go:130] > # default_mounts_file = ""
	I0924 00:34:17.262109   44220 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0924 00:34:17.262129   44220 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0924 00:34:17.262137   44220 command_runner.go:130] > pids_limit = 1024
	I0924 00:34:17.262145   44220 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0924 00:34:17.262158   44220 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0924 00:34:17.262171   44220 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0924 00:34:17.262187   44220 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0924 00:34:17.262197   44220 command_runner.go:130] > # log_size_max = -1
	I0924 00:34:17.262210   44220 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0924 00:34:17.262220   44220 command_runner.go:130] > # log_to_journald = false
	I0924 00:34:17.262232   44220 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0924 00:34:17.262244   44220 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0924 00:34:17.262255   44220 command_runner.go:130] > # Path to directory for container attach sockets.
	I0924 00:34:17.262266   44220 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0924 00:34:17.262274   44220 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0924 00:34:17.262284   44220 command_runner.go:130] > # bind_mount_prefix = ""
	I0924 00:34:17.262297   44220 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0924 00:34:17.262306   44220 command_runner.go:130] > # read_only = false
	I0924 00:34:17.262318   44220 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0924 00:34:17.262330   44220 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0924 00:34:17.262340   44220 command_runner.go:130] > # live configuration reload.
	I0924 00:34:17.262351   44220 command_runner.go:130] > # log_level = "info"
	I0924 00:34:17.262363   44220 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0924 00:34:17.262374   44220 command_runner.go:130] > # This option supports live configuration reload.
	I0924 00:34:17.262384   44220 command_runner.go:130] > # log_filter = ""
	I0924 00:34:17.262396   44220 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0924 00:34:17.262409   44220 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0924 00:34:17.262419   44220 command_runner.go:130] > # separated by comma.
	I0924 00:34:17.262433   44220 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0924 00:34:17.262443   44220 command_runner.go:130] > # uid_mappings = ""
	I0924 00:34:17.262458   44220 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0924 00:34:17.262471   44220 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0924 00:34:17.262481   44220 command_runner.go:130] > # separated by comma.
	I0924 00:34:17.262496   44220 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0924 00:34:17.262505   44220 command_runner.go:130] > # gid_mappings = ""
	I0924 00:34:17.262517   44220 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0924 00:34:17.262530   44220 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0924 00:34:17.262548   44220 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0924 00:34:17.262563   44220 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0924 00:34:17.262578   44220 command_runner.go:130] > # minimum_mappable_uid = -1
	I0924 00:34:17.262589   44220 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0924 00:34:17.262601   44220 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0924 00:34:17.262614   44220 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0924 00:34:17.262628   44220 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0924 00:34:17.262638   44220 command_runner.go:130] > # minimum_mappable_gid = -1
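For illustration, the mapping format described above ("containerUID:HostUID:Size") could be filled in as follows; the ranges are placeholders, not values from this run, and the options remain deprecated in favor of KEP-127:

	# Hypothetical example: map container IDs 0-65535 to host IDs starting at 100000
	# uid_mappings = "0:100000:65536"
	# gid_mappings = "0:100000:65536"
	# Reject any mapping that starts below host ID 100000
	# minimum_mappable_uid = 100000
	# minimum_mappable_gid = 100000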
	I0924 00:34:17.262649   44220 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0924 00:34:17.262662   44220 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0924 00:34:17.262673   44220 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0924 00:34:17.262681   44220 command_runner.go:130] > # ctr_stop_timeout = 30
	I0924 00:34:17.262691   44220 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0924 00:34:17.262701   44220 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0924 00:34:17.262710   44220 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0924 00:34:17.262721   44220 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0924 00:34:17.262730   44220 command_runner.go:130] > drop_infra_ctr = false
	I0924 00:34:17.262742   44220 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0924 00:34:17.262753   44220 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0924 00:34:17.262767   44220 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0924 00:34:17.262777   44220 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0924 00:34:17.262791   44220 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0924 00:34:17.262803   44220 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0924 00:34:17.262815   44220 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0924 00:34:17.262826   44220 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0924 00:34:17.262834   44220 command_runner.go:130] > # shared_cpuset = ""
	I0924 00:34:17.262842   44220 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0924 00:34:17.262852   44220 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0924 00:34:17.262861   44220 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0924 00:34:17.262874   44220 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0924 00:34:17.262884   44220 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0924 00:34:17.262895   44220 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0924 00:34:17.262908   44220 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0924 00:34:17.262917   44220 command_runner.go:130] > # enable_criu_support = false
	I0924 00:34:17.262928   44220 command_runner.go:130] > # Enable/disable the generation of the container and
	I0924 00:34:17.262941   44220 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0924 00:34:17.262951   44220 command_runner.go:130] > # enable_pod_events = false
	I0924 00:34:17.262962   44220 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0924 00:34:17.262983   44220 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0924 00:34:17.262992   44220 command_runner.go:130] > # default_runtime = "runc"
	I0924 00:34:17.263002   44220 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0924 00:34:17.263015   44220 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0924 00:34:17.263031   44220 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0924 00:34:17.263043   44220 command_runner.go:130] > # creation as a file is not desired either.
	I0924 00:34:17.263059   44220 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0924 00:34:17.263069   44220 command_runner.go:130] > # the hostname is being managed dynamically.
	I0924 00:34:17.263078   44220 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0924 00:34:17.263085   44220 command_runner.go:130] > # ]
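Following the /etc/hostname example given above, an uncommented version of this list might look like this (a sketch, not part of this run's configuration):

	# Fail container creation instead of silently creating /etc/hostname as a directory
	absent_mount_sources_to_reject = [
		"/etc/hostname",
	]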
	I0924 00:34:17.263095   44220 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0924 00:34:17.263107   44220 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0924 00:34:17.263118   44220 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0924 00:34:17.263128   44220 command_runner.go:130] > # Each entry in the table should follow the format:
	I0924 00:34:17.263135   44220 command_runner.go:130] > #
	I0924 00:34:17.263142   44220 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0924 00:34:17.263152   44220 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0924 00:34:17.263189   44220 command_runner.go:130] > # runtime_type = "oci"
	I0924 00:34:17.263198   44220 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0924 00:34:17.263205   44220 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0924 00:34:17.263215   44220 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0924 00:34:17.263224   44220 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0924 00:34:17.263232   44220 command_runner.go:130] > # monitor_env = []
	I0924 00:34:17.263242   44220 command_runner.go:130] > # privileged_without_host_devices = false
	I0924 00:34:17.263252   44220 command_runner.go:130] > # allowed_annotations = []
	I0924 00:34:17.263260   44220 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0924 00:34:17.263269   44220 command_runner.go:130] > # Where:
	I0924 00:34:17.263276   44220 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0924 00:34:17.263288   44220 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0924 00:34:17.263297   44220 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0924 00:34:17.263306   44220 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0924 00:34:17.263311   44220 command_runner.go:130] > #   in $PATH.
	I0924 00:34:17.263320   44220 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0924 00:34:17.263327   44220 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0924 00:34:17.263342   44220 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0924 00:34:17.263347   44220 command_runner.go:130] > #   state.
	I0924 00:34:17.263356   44220 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0924 00:34:17.263364   44220 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0924 00:34:17.263375   44220 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0924 00:34:17.263385   44220 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0924 00:34:17.263395   44220 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0924 00:34:17.263404   44220 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0924 00:34:17.263410   44220 command_runner.go:130] > #   The currently recognized values are:
	I0924 00:34:17.263419   44220 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0924 00:34:17.263429   44220 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0924 00:34:17.263437   44220 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0924 00:34:17.263446   44220 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0924 00:34:17.263456   44220 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0924 00:34:17.263469   44220 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0924 00:34:17.263482   44220 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0924 00:34:17.263495   44220 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0924 00:34:17.263506   44220 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0924 00:34:17.263517   44220 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0924 00:34:17.263526   44220 command_runner.go:130] > #   deprecated option "conmon".
	I0924 00:34:17.263535   44220 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0924 00:34:17.263545   44220 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0924 00:34:17.263558   44220 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0924 00:34:17.263575   44220 command_runner.go:130] > #   should be moved to the container's cgroup
	I0924 00:34:17.263588   44220 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0924 00:34:17.263598   44220 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0924 00:34:17.263609   44220 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0924 00:34:17.263619   44220 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0924 00:34:17.263627   44220 command_runner.go:130] > #
	I0924 00:34:17.263637   44220 command_runner.go:130] > # Using the seccomp notifier feature:
	I0924 00:34:17.263646   44220 command_runner.go:130] > #
	I0924 00:34:17.263659   44220 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0924 00:34:17.263672   44220 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0924 00:34:17.263680   44220 command_runner.go:130] > #
	I0924 00:34:17.263692   44220 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0924 00:34:17.263703   44220 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0924 00:34:17.263711   44220 command_runner.go:130] > #
	I0924 00:34:17.263720   44220 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0924 00:34:17.263729   44220 command_runner.go:130] > # feature.
	I0924 00:34:17.263737   44220 command_runner.go:130] > #
	I0924 00:34:17.263749   44220 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0924 00:34:17.263761   44220 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0924 00:34:17.263772   44220 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0924 00:34:17.263784   44220 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0924 00:34:17.263795   44220 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0924 00:34:17.263802   44220 command_runner.go:130] > #
	I0924 00:34:17.263812   44220 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0924 00:34:17.263823   44220 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0924 00:34:17.263829   44220 command_runner.go:130] > #
	I0924 00:34:17.263840   44220 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0924 00:34:17.263850   44220 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0924 00:34:17.263857   44220 command_runner.go:130] > #
	I0924 00:34:17.263866   44220 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0924 00:34:17.263877   44220 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0924 00:34:17.263885   44220 command_runner.go:130] > # limitation.
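As a sketch of the runtime-handler format and the seccomp notifier annotation described above, a hypothetical additional handler could be declared like this; the "crun" name, paths and values are placeholders and are not part of this run's config:

	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	monitor_env = [
		"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	]
	# Allow this handler to process the seccomp notifier annotation
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]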
	I0924 00:34:17.263895   44220 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0924 00:34:17.263904   44220 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0924 00:34:17.263909   44220 command_runner.go:130] > runtime_type = "oci"
	I0924 00:34:17.263915   44220 command_runner.go:130] > runtime_root = "/run/runc"
	I0924 00:34:17.263921   44220 command_runner.go:130] > runtime_config_path = ""
	I0924 00:34:17.263928   44220 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0924 00:34:17.263938   44220 command_runner.go:130] > monitor_cgroup = "pod"
	I0924 00:34:17.263947   44220 command_runner.go:130] > monitor_exec_cgroup = ""
	I0924 00:34:17.263956   44220 command_runner.go:130] > monitor_env = [
	I0924 00:34:17.263966   44220 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0924 00:34:17.263973   44220 command_runner.go:130] > ]
	I0924 00:34:17.263979   44220 command_runner.go:130] > privileged_without_host_devices = false
	I0924 00:34:17.263987   44220 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0924 00:34:17.263998   44220 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0924 00:34:17.264009   44220 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0924 00:34:17.264023   44220 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0924 00:34:17.264038   44220 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0924 00:34:17.264048   44220 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0924 00:34:17.264067   44220 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0924 00:34:17.264082   44220 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0924 00:34:17.264093   44220 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0924 00:34:17.264107   44220 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0924 00:34:17.264116   44220 command_runner.go:130] > # Example:
	I0924 00:34:17.264124   44220 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0924 00:34:17.264134   44220 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0924 00:34:17.264141   44220 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0924 00:34:17.264152   44220 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0924 00:34:17.264160   44220 command_runner.go:130] > # cpuset = 0
	I0924 00:34:17.264170   44220 command_runner.go:130] > # cpushares = "0-1"
	I0924 00:34:17.264177   44220 command_runner.go:130] > # Where:
	I0924 00:34:17.264182   44220 command_runner.go:130] > # The workload name is workload-type.
	I0924 00:34:17.264190   44220 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0924 00:34:17.264198   44220 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0924 00:34:17.264206   44220 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0924 00:34:17.264214   44220 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0924 00:34:17.264221   44220 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
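Putting the commented template above together, a complete (hypothetical) workload definition might look like the sketch below; a pod would opt in by carrying the "io.crio/workload" annotation, and per-container overrides would use the annotation form shown above. The resource values and their types are illustrative assumptions, not part of this run:

	[crio.runtime.workloads.workload-type]
	activation_annotation = "io.crio/workload"
	annotation_prefix = "io.crio.workload-type"
	[crio.runtime.workloads.workload-type.resources]
	cpushares = 512
	cpuset = "0-1"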
	I0924 00:34:17.264230   44220 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0924 00:34:17.264239   44220 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0924 00:34:17.264246   44220 command_runner.go:130] > # Default value is set to true
	I0924 00:34:17.264251   44220 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0924 00:34:17.264259   44220 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0924 00:34:17.264265   44220 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0924 00:34:17.264270   44220 command_runner.go:130] > # Default value is set to 'false'
	I0924 00:34:17.264274   44220 command_runner.go:130] > # disable_hostport_mapping = false
	I0924 00:34:17.264280   44220 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0924 00:34:17.264283   44220 command_runner.go:130] > #
	I0924 00:34:17.264288   44220 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0924 00:34:17.264294   44220 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0924 00:34:17.264302   44220 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0924 00:34:17.264313   44220 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0924 00:34:17.264321   44220 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0924 00:34:17.264325   44220 command_runner.go:130] > [crio.image]
	I0924 00:34:17.264345   44220 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0924 00:34:17.264352   44220 command_runner.go:130] > # default_transport = "docker://"
	I0924 00:34:17.264365   44220 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0924 00:34:17.264374   44220 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0924 00:34:17.264381   44220 command_runner.go:130] > # global_auth_file = ""
	I0924 00:34:17.264389   44220 command_runner.go:130] > # The image used to instantiate infra containers.
	I0924 00:34:17.264398   44220 command_runner.go:130] > # This option supports live configuration reload.
	I0924 00:34:17.264405   44220 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0924 00:34:17.264411   44220 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0924 00:34:17.264416   44220 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0924 00:34:17.264420   44220 command_runner.go:130] > # This option supports live configuration reload.
	I0924 00:34:17.264424   44220 command_runner.go:130] > # pause_image_auth_file = ""
	I0924 00:34:17.264429   44220 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0924 00:34:17.264434   44220 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0924 00:34:17.264439   44220 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0924 00:34:17.264444   44220 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0924 00:34:17.264449   44220 command_runner.go:130] > # pause_command = "/pause"
	I0924 00:34:17.264455   44220 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0924 00:34:17.264460   44220 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0924 00:34:17.264465   44220 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0924 00:34:17.264470   44220 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0924 00:34:17.264476   44220 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0924 00:34:17.264481   44220 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0924 00:34:17.264484   44220 command_runner.go:130] > # pinned_images = [
	I0924 00:34:17.264487   44220 command_runner.go:130] > # ]
	I0924 00:34:17.264493   44220 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0924 00:34:17.264498   44220 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0924 00:34:17.264503   44220 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0924 00:34:17.264509   44220 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0924 00:34:17.264514   44220 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0924 00:34:17.264521   44220 command_runner.go:130] > # signature_policy = ""
	I0924 00:34:17.264526   44220 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0924 00:34:17.264532   44220 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0924 00:34:17.264540   44220 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0924 00:34:17.264549   44220 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0924 00:34:17.264555   44220 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0924 00:34:17.264561   44220 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0924 00:34:17.264577   44220 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0924 00:34:17.264586   44220 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0924 00:34:17.264592   44220 command_runner.go:130] > # changing them here.
	I0924 00:34:17.264596   44220 command_runner.go:130] > # insecure_registries = [
	I0924 00:34:17.264601   44220 command_runner.go:130] > # ]
	I0924 00:34:17.264608   44220 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0924 00:34:17.264615   44220 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0924 00:34:17.264619   44220 command_runner.go:130] > # image_volumes = "mkdir"
	I0924 00:34:17.264624   44220 command_runner.go:130] > # Temporary directory to use for storing big files
	I0924 00:34:17.264629   44220 command_runner.go:130] > # big_files_temporary_dir = ""
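For illustration, the image-related options above could be set together as follows; the pause_image matches this run, while the pinned image pattern and the registry name are placeholders:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"
	# Keep the pause image and an application image out of kubelet garbage collection
	pinned_images = [
		"registry.k8s.io/pause:3.10",
		"example.registry.local/myapp:*",
	]
	# Skip TLS verification only for a private test registry (prefer registries.conf instead)
	insecure_registries = [
		"registry.test.local:5000",
	]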
	I0924 00:34:17.264637   44220 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0924 00:34:17.264643   44220 command_runner.go:130] > # CNI plugins.
	I0924 00:34:17.264647   44220 command_runner.go:130] > [crio.network]
	I0924 00:34:17.264655   44220 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0924 00:34:17.264662   44220 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0924 00:34:17.264667   44220 command_runner.go:130] > # cni_default_network = ""
	I0924 00:34:17.264674   44220 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0924 00:34:17.264681   44220 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0924 00:34:17.264687   44220 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0924 00:34:17.264693   44220 command_runner.go:130] > # plugin_dirs = [
	I0924 00:34:17.264697   44220 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0924 00:34:17.264702   44220 command_runner.go:130] > # ]
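A sketch of the same [crio.network] table with the commented defaults made explicit; the network name is a placeholder (this run leaves cni_default_network unset, so CRI-O picks the first config found in network_dir):

	[crio.network]
	cni_default_network = "kindnet"
	network_dir = "/etc/cni/net.d/"
	plugin_dirs = [
		"/opt/cni/bin/",
	]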
	I0924 00:34:17.264708   44220 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0924 00:34:17.264714   44220 command_runner.go:130] > [crio.metrics]
	I0924 00:34:17.264718   44220 command_runner.go:130] > # Globally enable or disable metrics support.
	I0924 00:34:17.264725   44220 command_runner.go:130] > enable_metrics = true
	I0924 00:34:17.264729   44220 command_runner.go:130] > # Specify enabled metrics collectors.
	I0924 00:34:17.264736   44220 command_runner.go:130] > # Per default all metrics are enabled.
	I0924 00:34:17.264742   44220 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0924 00:34:17.264749   44220 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0924 00:34:17.264755   44220 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0924 00:34:17.264761   44220 command_runner.go:130] > # metrics_collectors = [
	I0924 00:34:17.264765   44220 command_runner.go:130] > # 	"operations",
	I0924 00:34:17.264771   44220 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0924 00:34:17.264775   44220 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0924 00:34:17.264781   44220 command_runner.go:130] > # 	"operations_errors",
	I0924 00:34:17.264786   44220 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0924 00:34:17.264792   44220 command_runner.go:130] > # 	"image_pulls_by_name",
	I0924 00:34:17.264796   44220 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0924 00:34:17.264802   44220 command_runner.go:130] > # 	"image_pulls_failures",
	I0924 00:34:17.264806   44220 command_runner.go:130] > # 	"image_pulls_successes",
	I0924 00:34:17.264816   44220 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0924 00:34:17.264822   44220 command_runner.go:130] > # 	"image_layer_reuse",
	I0924 00:34:17.264827   44220 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0924 00:34:17.264835   44220 command_runner.go:130] > # 	"containers_oom_total",
	I0924 00:34:17.264841   44220 command_runner.go:130] > # 	"containers_oom",
	I0924 00:34:17.264845   44220 command_runner.go:130] > # 	"processes_defunct",
	I0924 00:34:17.264851   44220 command_runner.go:130] > # 	"operations_total",
	I0924 00:34:17.264855   44220 command_runner.go:130] > # 	"operations_latency_seconds",
	I0924 00:34:17.264861   44220 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0924 00:34:17.264866   44220 command_runner.go:130] > # 	"operations_errors_total",
	I0924 00:34:17.264872   44220 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0924 00:34:17.264876   44220 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0924 00:34:17.264880   44220 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0924 00:34:17.264885   44220 command_runner.go:130] > # 	"image_pulls_success_total",
	I0924 00:34:17.264890   44220 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0924 00:34:17.264896   44220 command_runner.go:130] > # 	"containers_oom_count_total",
	I0924 00:34:17.264901   44220 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0924 00:34:17.264907   44220 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0924 00:34:17.264911   44220 command_runner.go:130] > # ]
	I0924 00:34:17.264918   44220 command_runner.go:130] > # The port on which the metrics server will listen.
	I0924 00:34:17.264923   44220 command_runner.go:130] > # metrics_port = 9090
	I0924 00:34:17.264930   44220 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0924 00:34:17.264934   44220 command_runner.go:130] > # metrics_socket = ""
	I0924 00:34:17.264941   44220 command_runner.go:130] > # The certificate for the secure metrics server.
	I0924 00:34:17.264946   44220 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0924 00:34:17.264954   44220 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0924 00:34:17.264961   44220 command_runner.go:130] > # certificate on any modification event.
	I0924 00:34:17.264965   44220 command_runner.go:130] > # metrics_cert = ""
	I0924 00:34:17.264972   44220 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0924 00:34:17.264976   44220 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0924 00:34:17.264982   44220 command_runner.go:130] > # metrics_key = ""
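For example, the collector list could be narrowed to a subset of the names above (illustrative only; this run keeps enable_metrics = true with the default collectors and port):

	[crio.metrics]
	enable_metrics = true
	metrics_collectors = [
		"operations_total",
		"operations_errors_total",
		"image_pulls_bytes_total",
		"image_pulls_failure_total",
	]
	metrics_port = 9090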
	I0924 00:34:17.264988   44220 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0924 00:34:17.264994   44220 command_runner.go:130] > [crio.tracing]
	I0924 00:34:17.264999   44220 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0924 00:34:17.265005   44220 command_runner.go:130] > # enable_tracing = false
	I0924 00:34:17.265011   44220 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0924 00:34:17.265017   44220 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0924 00:34:17.265025   44220 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0924 00:34:17.265035   44220 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0924 00:34:17.265041   44220 command_runner.go:130] > # CRI-O NRI configuration.
	I0924 00:34:17.265049   44220 command_runner.go:130] > [crio.nri]
	I0924 00:34:17.265059   44220 command_runner.go:130] > # Globally enable or disable NRI.
	I0924 00:34:17.265067   44220 command_runner.go:130] > # enable_nri = false
	I0924 00:34:17.265076   44220 command_runner.go:130] > # NRI socket to listen on.
	I0924 00:34:17.265085   44220 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0924 00:34:17.265095   44220 command_runner.go:130] > # NRI plugin directory to use.
	I0924 00:34:17.265102   44220 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0924 00:34:17.265110   44220 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0924 00:34:17.265117   44220 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0924 00:34:17.265122   44220 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0924 00:34:17.265129   44220 command_runner.go:130] > # nri_disable_connections = false
	I0924 00:34:17.265136   44220 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0924 00:34:17.265142   44220 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0924 00:34:17.265147   44220 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0924 00:34:17.265154   44220 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0924 00:34:17.265159   44220 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0924 00:34:17.265165   44220 command_runner.go:130] > [crio.stats]
	I0924 00:34:17.265171   44220 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0924 00:34:17.265179   44220 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0924 00:34:17.265185   44220 command_runner.go:130] > # stats_collection_period = 0
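A combined sketch of the tracing, NRI and stats tables above with non-default values; the endpoint, socket paths and collection period are illustrative, and none of these are enabled in this run:

	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "0.0.0.0:4317"
	# Always sample (1000000 samples per million spans)
	tracing_sampling_rate_per_million = 1000000

	[crio.nri]
	enable_nri = true
	nri_listen = "/var/run/nri/nri.sock"
	nri_plugin_dir = "/opt/nri/plugins"

	[crio.stats]
	# Collect pod and container stats every 10 seconds instead of on demand
	stats_collection_period = 10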
	I0924 00:34:17.265219   44220 command_runner.go:130] ! time="2024-09-24 00:34:17.225563758Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0924 00:34:17.265232   44220 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0924 00:34:17.265305   44220 cni.go:84] Creating CNI manager for ""
	I0924 00:34:17.265315   44220 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0924 00:34:17.265328   44220 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 00:34:17.265353   44220 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.199 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-246036 NodeName:multinode-246036 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.199"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.199 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 00:34:17.265469   44220 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.199
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-246036"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.199
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.199"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 00:34:17.265527   44220 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 00:34:17.275904   44220 command_runner.go:130] > kubeadm
	I0924 00:34:17.275927   44220 command_runner.go:130] > kubectl
	I0924 00:34:17.275931   44220 command_runner.go:130] > kubelet
	I0924 00:34:17.275951   44220 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 00:34:17.275996   44220 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 00:34:17.285302   44220 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0924 00:34:17.302037   44220 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 00:34:17.317710   44220 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0924 00:34:17.333734   44220 ssh_runner.go:195] Run: grep 192.168.39.199	control-plane.minikube.internal$ /etc/hosts
	I0924 00:34:17.337314   44220 command_runner.go:130] > 192.168.39.199	control-plane.minikube.internal
	I0924 00:34:17.337383   44220 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 00:34:17.475840   44220 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 00:34:17.490395   44220 certs.go:68] Setting up /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/multinode-246036 for IP: 192.168.39.199
	I0924 00:34:17.490427   44220 certs.go:194] generating shared ca certs ...
	I0924 00:34:17.490447   44220 certs.go:226] acquiring lock for ca certs: {Name:mk91de42c259e96c17f08dd58c70c00821bd0735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:34:17.490631   44220 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key
	I0924 00:34:17.490688   44220 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key
	I0924 00:34:17.490707   44220 certs.go:256] generating profile certs ...
	I0924 00:34:17.490804   44220 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/multinode-246036/client.key
	I0924 00:34:17.490859   44220 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/multinode-246036/apiserver.key.a48aa622
	I0924 00:34:17.490892   44220 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/multinode-246036/proxy-client.key
	I0924 00:34:17.490905   44220 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0924 00:34:17.490929   44220 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0924 00:34:17.490941   44220 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0924 00:34:17.490953   44220 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0924 00:34:17.490965   44220 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/multinode-246036/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0924 00:34:17.490978   44220 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/multinode-246036/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0924 00:34:17.490991   44220 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/multinode-246036/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0924 00:34:17.491004   44220 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/multinode-246036/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0924 00:34:17.491065   44220 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem (1338 bytes)
	W0924 00:34:17.491106   44220 certs.go:480] ignoring /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793_empty.pem, impossibly tiny 0 bytes
	I0924 00:34:17.491120   44220 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem (1675 bytes)
	I0924 00:34:17.491152   44220 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem (1082 bytes)
	I0924 00:34:17.491175   44220 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem (1123 bytes)
	I0924 00:34:17.491198   44220 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem (1679 bytes)
	I0924 00:34:17.491239   44220 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem (1708 bytes)
	I0924 00:34:17.491265   44220 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0924 00:34:17.491279   44220 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem -> /usr/share/ca-certificates/14793.pem
	I0924 00:34:17.491292   44220 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> /usr/share/ca-certificates/147932.pem
	I0924 00:34:17.491870   44220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 00:34:17.514745   44220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 00:34:17.537642   44220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 00:34:17.561162   44220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0924 00:34:17.585079   44220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/multinode-246036/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0924 00:34:17.609009   44220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/multinode-246036/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0924 00:34:17.632030   44220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/multinode-246036/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 00:34:17.654517   44220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/multinode-246036/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0924 00:34:17.677264   44220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 00:34:17.699225   44220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem --> /usr/share/ca-certificates/14793.pem (1338 bytes)
	I0924 00:34:17.721521   44220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /usr/share/ca-certificates/147932.pem (1708 bytes)
	I0924 00:34:17.745402   44220 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 00:34:17.761081   44220 ssh_runner.go:195] Run: openssl version
	I0924 00:34:17.766972   44220 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0924 00:34:17.767138   44220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 00:34:17.777788   44220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 00:34:17.781957   44220 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 23 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0924 00:34:17.782005   44220 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0924 00:34:17.782049   44220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 00:34:17.786916   44220 command_runner.go:130] > b5213941
	I0924 00:34:17.787303   44220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 00:34:17.796321   44220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14793.pem && ln -fs /usr/share/ca-certificates/14793.pem /etc/ssl/certs/14793.pem"
	I0924 00:34:17.806842   44220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14793.pem
	I0924 00:34:17.810983   44220 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 23 23:55 /usr/share/ca-certificates/14793.pem
	I0924 00:34:17.811012   44220 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 23:55 /usr/share/ca-certificates/14793.pem
	I0924 00:34:17.811048   44220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14793.pem
	I0924 00:34:17.816876   44220 command_runner.go:130] > 51391683
	I0924 00:34:17.816941   44220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14793.pem /etc/ssl/certs/51391683.0"
	I0924 00:34:17.827775   44220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147932.pem && ln -fs /usr/share/ca-certificates/147932.pem /etc/ssl/certs/147932.pem"
	I0924 00:34:17.839636   44220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147932.pem
	I0924 00:34:17.843714   44220 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 23 23:55 /usr/share/ca-certificates/147932.pem
	I0924 00:34:17.843863   44220 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 23:55 /usr/share/ca-certificates/147932.pem
	I0924 00:34:17.843918   44220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147932.pem
	I0924 00:34:17.849262   44220 command_runner.go:130] > 3ec20f2e
	I0924 00:34:17.849332   44220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147932.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 00:34:17.858254   44220 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 00:34:17.862102   44220 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 00:34:17.862128   44220 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0924 00:34:17.862135   44220 command_runner.go:130] > Device: 253,1	Inode: 531240      Links: 1
	I0924 00:34:17.862141   44220 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0924 00:34:17.862147   44220 command_runner.go:130] > Access: 2024-09-24 00:27:37.876450848 +0000
	I0924 00:34:17.862153   44220 command_runner.go:130] > Modify: 2024-09-24 00:27:37.876450848 +0000
	I0924 00:34:17.862161   44220 command_runner.go:130] > Change: 2024-09-24 00:27:37.876450848 +0000
	I0924 00:34:17.862169   44220 command_runner.go:130] >  Birth: 2024-09-24 00:27:37.876450848 +0000
	I0924 00:34:17.862252   44220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 00:34:17.867237   44220 command_runner.go:130] > Certificate will not expire
	I0924 00:34:17.867370   44220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 00:34:17.872364   44220 command_runner.go:130] > Certificate will not expire
	I0924 00:34:17.872421   44220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 00:34:17.877954   44220 command_runner.go:130] > Certificate will not expire
	I0924 00:34:17.878058   44220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 00:34:17.883445   44220 command_runner.go:130] > Certificate will not expire
	I0924 00:34:17.883580   44220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 00:34:17.888656   44220 command_runner.go:130] > Certificate will not expire
	I0924 00:34:17.888710   44220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0924 00:34:17.893628   44220 command_runner.go:130] > Certificate will not expire
	I0924 00:34:17.893798   44220 kubeadm.go:392] StartCluster: {Name:multinode-246036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-246036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.199 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.150 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.185 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 00:34:17.893904   44220 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 00:34:17.893961   44220 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 00:34:17.934425   44220 command_runner.go:130] > bc559fe548fce4e11effa6148efd01e8ecfcdaff0beb6a7c79ceae55c7c28cec
	I0924 00:34:17.934473   44220 command_runner.go:130] > 5058772d1973616e34a3182a09e02e261c4af4678059f218039b9f253ac2867a
	I0924 00:34:17.934481   44220 command_runner.go:130] > 5b8abe628fa9ebc296eda69551985040e5281c42345224c3b2e485657f3e6e1a
	I0924 00:34:17.934489   44220 command_runner.go:130] > 4a80eb915d724ea9baff23a6b7094b8ae35e34bc9e96fabe4a2a99df6aea6dd9
	I0924 00:34:17.934496   44220 command_runner.go:130] > a6003f3f1b6367bb96065a6243ff34bb6701840ce67df93e2feb005d548ceaeb
	I0924 00:34:17.934503   44220 command_runner.go:130] > f1dea2a49f50cd2690cd94ebed4ffb97ab813d4c6fb8ea59dbb02231936efba0
	I0924 00:34:17.934512   44220 command_runner.go:130] > b98807a030c3691ed3ff8a125e673207c63e9a99c5ea6cb8859026521ca5295a
	I0924 00:34:17.934521   44220 command_runner.go:130] > 33b18f596b4effba4cf1fa17ae441e1bd1ab9d6738cd7313f9ba3b137bfcb237
	I0924 00:34:17.934546   44220 cri.go:89] found id: "bc559fe548fce4e11effa6148efd01e8ecfcdaff0beb6a7c79ceae55c7c28cec"
	I0924 00:34:17.934559   44220 cri.go:89] found id: "5058772d1973616e34a3182a09e02e261c4af4678059f218039b9f253ac2867a"
	I0924 00:34:17.934566   44220 cri.go:89] found id: "5b8abe628fa9ebc296eda69551985040e5281c42345224c3b2e485657f3e6e1a"
	I0924 00:34:17.934572   44220 cri.go:89] found id: "4a80eb915d724ea9baff23a6b7094b8ae35e34bc9e96fabe4a2a99df6aea6dd9"
	I0924 00:34:17.934579   44220 cri.go:89] found id: "a6003f3f1b6367bb96065a6243ff34bb6701840ce67df93e2feb005d548ceaeb"
	I0924 00:34:17.934586   44220 cri.go:89] found id: "f1dea2a49f50cd2690cd94ebed4ffb97ab813d4c6fb8ea59dbb02231936efba0"
	I0924 00:34:17.934589   44220 cri.go:89] found id: "b98807a030c3691ed3ff8a125e673207c63e9a99c5ea6cb8859026521ca5295a"
	I0924 00:34:17.934595   44220 cri.go:89] found id: "33b18f596b4effba4cf1fa17ae441e1bd1ab9d6738cd7313f9ba3b137bfcb237"
	I0924 00:34:17.934598   44220 cri.go:89] found id: ""
	I0924 00:34:17.934640   44220 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 24 00:38:30 multinode-246036 crio[2766]: time="2024-09-24 00:38:30.940208058Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727138310940187251,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=20613da1-8c71-413c-ab69-9f3af07377b7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:38:30 multinode-246036 crio[2766]: time="2024-09-24 00:38:30.940998741Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=986cca6d-d0b4-4c40-b5df-9c8a2a13d176 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:38:30 multinode-246036 crio[2766]: time="2024-09-24 00:38:30.941065746Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=986cca6d-d0b4-4c40-b5df-9c8a2a13d176 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:38:30 multinode-246036 crio[2766]: time="2024-09-24 00:38:30.941436395Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9beea755724ad7eb373d54f59989e0f7420cd113bbf8e082fbc7e95c96d37075,PodSandboxId:12914b88b7fef15008a1ccdb4c89d948098ea859396fac42277df56f61610611,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727138098887145679,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-b5dpk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4780e514-d69a-42fe-8f9a-ee4c0fae351c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf020b0b565a293238a03d835cc1d1de694cd7752408142001e82820e77a6666,PodSandboxId:4caa1af881aaba20dc884ea8b5fd8509637fc7f2cd761953a9c346dcbd21457f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727138065365368160,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2jt2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2353cad3-6dc8-4fcd-9f70-755ebdbf3bbb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:514051851b1eb7e6d20d521224f6f47d16d2212f3f25adb982f2f0b76b5de33d,PodSandboxId:6f97cd1430faa49e07c1a96c09253f0f51414112ce649df5a89d4d1c3e58ca6b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727138065197638960,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4ncsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abbb8a3b-f9bc-4ab9-bad1-c72d2075ada4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2b93d287ad3d88e566244261abb290fa350083890cdbb7488f7d3291df3c7c8,PodSandboxId:0a3e21cfa3565f19d05c1e2280190686916aca79c0db8835263ebf43d1ef8324,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727138065227620166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-69257,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18b6a314-9fbf-4bf9-b020-fdba57cffea0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",
\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efe4ee672aebb4a392abf9d1964adcfc6e6d80c2ca31f65ae2c315fcf1cd262f,PodSandboxId:bc75ada1f9eb0a9122e67fdac1ebc7e0d5f20d69c8b9688b7891fad92ff655ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727138065135897380,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4939b00-1847-48a6-85b9-c1d920f5617d,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b45076d45479aab606fd896041c8a4ee90a35db4de143fcaac0c107c5e0635f0,PodSandboxId:d545fb3515575a48577ad755a143f37c7b26cbf0f55860ba27892de17335d3f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727138060409896888,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e2144f47b1e53721f48386515d5232a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:586488001f58fc62297b98b683cb2ccd93906878ca19ba6eb36d3923feb47161,PodSandboxId:ee56de5939931366d737f2a0f1e2d4ac348a468d94a6d444c17a9ad87ea67518,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727138060332276537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be95db3445969924f3fca9820f3018f9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01fd569a601fa6172655a36f03bfec07f73116e1f6606250b55b26a0520da940,PodSandboxId:551e7742dc9b64e9198f2fd16c28e0e3b4312dcdda153acdb55be13b8a6d14e7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727138060365949764,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2eb25a2b0e40642b2d0d09caef02131f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4a4ea183a26a012d11be7880e001424832dcfdcc2ddd5299a6fe25f32de7916,PodSandboxId:93394c6db54349656af40af84e110bf5c50f6bcf150b6cc10281fff859c5eb19,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727138060296516113,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab2acadb533b40fb8b098d0f4fa0603f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:598080e596aa026359cf326ae772de4d9c204504d5666c849fc68597cc8624ff,PodSandboxId:b3b915a71dc30f1c398e490cfcfcc2130ebc6dca2ad801c32e85977173dacbc0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727137738603452263,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-b5dpk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4780e514-d69a-42fe-8f9a-ee4c0fae351c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc559fe548fce4e11effa6148efd01e8ecfcdaff0beb6a7c79ceae55c7c28cec,PodSandboxId:64fa2c553f79a15881910b60c099a7b5ccf7558c46adb88da6ececf26441c080,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727137684453046976,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-69257,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18b6a314-9fbf-4bf9-b020-fdba57cffea0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5058772d1973616e34a3182a09e02e261c4af4678059f218039b9f253ac2867a,PodSandboxId:d9af3918ad28367a0fe5d7d927c8fbdd938be29e29736fcefdc171cdf35100e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727137684379444433,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: f4939b00-1847-48a6-85b9-c1d920f5617d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8abe628fa9ebc296eda69551985040e5281c42345224c3b2e485657f3e6e1a,PodSandboxId:49b89dbc9cb7e874596e3ec13b61a4bbfe160ec9a57b16f834dd206a4c230aa4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727137672441658469,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2jt2x,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 2353cad3-6dc8-4fcd-9f70-755ebdbf3bbb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a80eb915d724ea9baff23a6b7094b8ae35e34bc9e96fabe4a2a99df6aea6dd9,PodSandboxId:6054821997e6ea5c7904abd7a93043e6148c6061a10588e419ab9897b280dfa4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727137672224006592,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4ncsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abbb8a3b-f9bc-4ab9-bad1
-c72d2075ada4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6003f3f1b6367bb96065a6243ff34bb6701840ce67df93e2feb005d548ceaeb,PodSandboxId:5bafd4e2bf71192b2486f96d876f1d809d2f416be191657a451b6fd1191ba0eb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727137661275525959,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be95db3445969924f3fca9820f3018f9,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b98807a030c3691ed3ff8a125e673207c63e9a99c5ea6cb8859026521ca5295a,PodSandboxId:bf6d66ecdda0fefa0099422937d2d8c96f1551db3e26b3cbf78a9bfbeb1a2038,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727137661225219455,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e2144f47b1e53721f48386515d5232a,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1dea2a49f50cd2690cd94ebed4ffb97ab813d4c6fb8ea59dbb02231936efba0,PodSandboxId:cd3ebc37a81fcd1ff272adc74ecfba43b94e16f3c4b7f72d425d74d390cd5ec5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727137661228831367,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab2acadb533b40fb8b098d0f4fa0603f,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33b18f596b4effba4cf1fa17ae441e1bd1ab9d6738cd7313f9ba3b137bfcb237,PodSandboxId:8b9d12c4629274fb95f288e05dabb7354c99502123c3c67cfdc1813bfdafcadc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727137661178372193,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2eb25a2b0e40642b2d0d09caef02131f,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=986cca6d-d0b4-4c40-b5df-9c8a2a13d176 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:38:30 multinode-246036 crio[2766]: time="2024-09-24 00:38:30.983376209Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=322cbb5a-137b-4387-9fab-6bd568050581 name=/runtime.v1.RuntimeService/Version
	Sep 24 00:38:30 multinode-246036 crio[2766]: time="2024-09-24 00:38:30.983469358Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=322cbb5a-137b-4387-9fab-6bd568050581 name=/runtime.v1.RuntimeService/Version
	Sep 24 00:38:30 multinode-246036 crio[2766]: time="2024-09-24 00:38:30.984546115Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6cb4a710-1e48-4369-98de-06194051a42a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:38:30 multinode-246036 crio[2766]: time="2024-09-24 00:38:30.985197606Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727138310985149745,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6cb4a710-1e48-4369-98de-06194051a42a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:38:30 multinode-246036 crio[2766]: time="2024-09-24 00:38:30.985651028Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3017d491-9669-4b05-a216-fb2f665a3c69 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:38:30 multinode-246036 crio[2766]: time="2024-09-24 00:38:30.985741806Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3017d491-9669-4b05-a216-fb2f665a3c69 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:38:30 multinode-246036 crio[2766]: time="2024-09-24 00:38:30.986073262Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9beea755724ad7eb373d54f59989e0f7420cd113bbf8e082fbc7e95c96d37075,PodSandboxId:12914b88b7fef15008a1ccdb4c89d948098ea859396fac42277df56f61610611,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727138098887145679,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-b5dpk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4780e514-d69a-42fe-8f9a-ee4c0fae351c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf020b0b565a293238a03d835cc1d1de694cd7752408142001e82820e77a6666,PodSandboxId:4caa1af881aaba20dc884ea8b5fd8509637fc7f2cd761953a9c346dcbd21457f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727138065365368160,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2jt2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2353cad3-6dc8-4fcd-9f70-755ebdbf3bbb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:514051851b1eb7e6d20d521224f6f47d16d2212f3f25adb982f2f0b76b5de33d,PodSandboxId:6f97cd1430faa49e07c1a96c09253f0f51414112ce649df5a89d4d1c3e58ca6b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727138065197638960,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4ncsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abbb8a3b-f9bc-4ab9-bad1-c72d2075ada4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2b93d287ad3d88e566244261abb290fa350083890cdbb7488f7d3291df3c7c8,PodSandboxId:0a3e21cfa3565f19d05c1e2280190686916aca79c0db8835263ebf43d1ef8324,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727138065227620166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-69257,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18b6a314-9fbf-4bf9-b020-fdba57cffea0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",
\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efe4ee672aebb4a392abf9d1964adcfc6e6d80c2ca31f65ae2c315fcf1cd262f,PodSandboxId:bc75ada1f9eb0a9122e67fdac1ebc7e0d5f20d69c8b9688b7891fad92ff655ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727138065135897380,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4939b00-1847-48a6-85b9-c1d920f5617d,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b45076d45479aab606fd896041c8a4ee90a35db4de143fcaac0c107c5e0635f0,PodSandboxId:d545fb3515575a48577ad755a143f37c7b26cbf0f55860ba27892de17335d3f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727138060409896888,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e2144f47b1e53721f48386515d5232a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:586488001f58fc62297b98b683cb2ccd93906878ca19ba6eb36d3923feb47161,PodSandboxId:ee56de5939931366d737f2a0f1e2d4ac348a468d94a6d444c17a9ad87ea67518,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727138060332276537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be95db3445969924f3fca9820f3018f9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01fd569a601fa6172655a36f03bfec07f73116e1f6606250b55b26a0520da940,PodSandboxId:551e7742dc9b64e9198f2fd16c28e0e3b4312dcdda153acdb55be13b8a6d14e7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727138060365949764,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2eb25a2b0e40642b2d0d09caef02131f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4a4ea183a26a012d11be7880e001424832dcfdcc2ddd5299a6fe25f32de7916,PodSandboxId:93394c6db54349656af40af84e110bf5c50f6bcf150b6cc10281fff859c5eb19,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727138060296516113,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab2acadb533b40fb8b098d0f4fa0603f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:598080e596aa026359cf326ae772de4d9c204504d5666c849fc68597cc8624ff,PodSandboxId:b3b915a71dc30f1c398e490cfcfcc2130ebc6dca2ad801c32e85977173dacbc0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727137738603452263,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-b5dpk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4780e514-d69a-42fe-8f9a-ee4c0fae351c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc559fe548fce4e11effa6148efd01e8ecfcdaff0beb6a7c79ceae55c7c28cec,PodSandboxId:64fa2c553f79a15881910b60c099a7b5ccf7558c46adb88da6ececf26441c080,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727137684453046976,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-69257,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18b6a314-9fbf-4bf9-b020-fdba57cffea0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5058772d1973616e34a3182a09e02e261c4af4678059f218039b9f253ac2867a,PodSandboxId:d9af3918ad28367a0fe5d7d927c8fbdd938be29e29736fcefdc171cdf35100e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727137684379444433,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: f4939b00-1847-48a6-85b9-c1d920f5617d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8abe628fa9ebc296eda69551985040e5281c42345224c3b2e485657f3e6e1a,PodSandboxId:49b89dbc9cb7e874596e3ec13b61a4bbfe160ec9a57b16f834dd206a4c230aa4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727137672441658469,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2jt2x,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 2353cad3-6dc8-4fcd-9f70-755ebdbf3bbb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a80eb915d724ea9baff23a6b7094b8ae35e34bc9e96fabe4a2a99df6aea6dd9,PodSandboxId:6054821997e6ea5c7904abd7a93043e6148c6061a10588e419ab9897b280dfa4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727137672224006592,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4ncsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abbb8a3b-f9bc-4ab9-bad1
-c72d2075ada4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6003f3f1b6367bb96065a6243ff34bb6701840ce67df93e2feb005d548ceaeb,PodSandboxId:5bafd4e2bf71192b2486f96d876f1d809d2f416be191657a451b6fd1191ba0eb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727137661275525959,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be95db3445969924f3fca9820f3018f9,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b98807a030c3691ed3ff8a125e673207c63e9a99c5ea6cb8859026521ca5295a,PodSandboxId:bf6d66ecdda0fefa0099422937d2d8c96f1551db3e26b3cbf78a9bfbeb1a2038,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727137661225219455,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e2144f47b1e53721f48386515d5232a,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1dea2a49f50cd2690cd94ebed4ffb97ab813d4c6fb8ea59dbb02231936efba0,PodSandboxId:cd3ebc37a81fcd1ff272adc74ecfba43b94e16f3c4b7f72d425d74d390cd5ec5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727137661228831367,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab2acadb533b40fb8b098d0f4fa0603f,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33b18f596b4effba4cf1fa17ae441e1bd1ab9d6738cd7313f9ba3b137bfcb237,PodSandboxId:8b9d12c4629274fb95f288e05dabb7354c99502123c3c67cfdc1813bfdafcadc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727137661178372193,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2eb25a2b0e40642b2d0d09caef02131f,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3017d491-9669-4b05-a216-fb2f665a3c69 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:38:31 multinode-246036 crio[2766]: time="2024-09-24 00:38:31.026864381Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b12bf333-9c18-4c9a-9f97-9019ad36b5a7 name=/runtime.v1.RuntimeService/Version
	Sep 24 00:38:31 multinode-246036 crio[2766]: time="2024-09-24 00:38:31.026941483Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b12bf333-9c18-4c9a-9f97-9019ad36b5a7 name=/runtime.v1.RuntimeService/Version
	Sep 24 00:38:31 multinode-246036 crio[2766]: time="2024-09-24 00:38:31.028148849Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=198a4a31-f4d6-41f7-ac39-d94b8a5b293c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:38:31 multinode-246036 crio[2766]: time="2024-09-24 00:38:31.028800301Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727138311028776852,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=198a4a31-f4d6-41f7-ac39-d94b8a5b293c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:38:31 multinode-246036 crio[2766]: time="2024-09-24 00:38:31.029513250Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1890b599-2ebb-433f-9c3a-23c47b0d5abf name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:38:31 multinode-246036 crio[2766]: time="2024-09-24 00:38:31.029576198Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1890b599-2ebb-433f-9c3a-23c47b0d5abf name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:38:31 multinode-246036 crio[2766]: time="2024-09-24 00:38:31.029994165Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9beea755724ad7eb373d54f59989e0f7420cd113bbf8e082fbc7e95c96d37075,PodSandboxId:12914b88b7fef15008a1ccdb4c89d948098ea859396fac42277df56f61610611,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727138098887145679,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-b5dpk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4780e514-d69a-42fe-8f9a-ee4c0fae351c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf020b0b565a293238a03d835cc1d1de694cd7752408142001e82820e77a6666,PodSandboxId:4caa1af881aaba20dc884ea8b5fd8509637fc7f2cd761953a9c346dcbd21457f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727138065365368160,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2jt2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2353cad3-6dc8-4fcd-9f70-755ebdbf3bbb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:514051851b1eb7e6d20d521224f6f47d16d2212f3f25adb982f2f0b76b5de33d,PodSandboxId:6f97cd1430faa49e07c1a96c09253f0f51414112ce649df5a89d4d1c3e58ca6b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727138065197638960,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4ncsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abbb8a3b-f9bc-4ab9-bad1-c72d2075ada4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2b93d287ad3d88e566244261abb290fa350083890cdbb7488f7d3291df3c7c8,PodSandboxId:0a3e21cfa3565f19d05c1e2280190686916aca79c0db8835263ebf43d1ef8324,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727138065227620166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-69257,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18b6a314-9fbf-4bf9-b020-fdba57cffea0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",
\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efe4ee672aebb4a392abf9d1964adcfc6e6d80c2ca31f65ae2c315fcf1cd262f,PodSandboxId:bc75ada1f9eb0a9122e67fdac1ebc7e0d5f20d69c8b9688b7891fad92ff655ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727138065135897380,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4939b00-1847-48a6-85b9-c1d920f5617d,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b45076d45479aab606fd896041c8a4ee90a35db4de143fcaac0c107c5e0635f0,PodSandboxId:d545fb3515575a48577ad755a143f37c7b26cbf0f55860ba27892de17335d3f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727138060409896888,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e2144f47b1e53721f48386515d5232a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:586488001f58fc62297b98b683cb2ccd93906878ca19ba6eb36d3923feb47161,PodSandboxId:ee56de5939931366d737f2a0f1e2d4ac348a468d94a6d444c17a9ad87ea67518,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727138060332276537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be95db3445969924f3fca9820f3018f9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01fd569a601fa6172655a36f03bfec07f73116e1f6606250b55b26a0520da940,PodSandboxId:551e7742dc9b64e9198f2fd16c28e0e3b4312dcdda153acdb55be13b8a6d14e7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727138060365949764,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2eb25a2b0e40642b2d0d09caef02131f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4a4ea183a26a012d11be7880e001424832dcfdcc2ddd5299a6fe25f32de7916,PodSandboxId:93394c6db54349656af40af84e110bf5c50f6bcf150b6cc10281fff859c5eb19,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727138060296516113,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab2acadb533b40fb8b098d0f4fa0603f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:598080e596aa026359cf326ae772de4d9c204504d5666c849fc68597cc8624ff,PodSandboxId:b3b915a71dc30f1c398e490cfcfcc2130ebc6dca2ad801c32e85977173dacbc0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727137738603452263,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-b5dpk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4780e514-d69a-42fe-8f9a-ee4c0fae351c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc559fe548fce4e11effa6148efd01e8ecfcdaff0beb6a7c79ceae55c7c28cec,PodSandboxId:64fa2c553f79a15881910b60c099a7b5ccf7558c46adb88da6ececf26441c080,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727137684453046976,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-69257,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18b6a314-9fbf-4bf9-b020-fdba57cffea0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5058772d1973616e34a3182a09e02e261c4af4678059f218039b9f253ac2867a,PodSandboxId:d9af3918ad28367a0fe5d7d927c8fbdd938be29e29736fcefdc171cdf35100e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727137684379444433,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: f4939b00-1847-48a6-85b9-c1d920f5617d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8abe628fa9ebc296eda69551985040e5281c42345224c3b2e485657f3e6e1a,PodSandboxId:49b89dbc9cb7e874596e3ec13b61a4bbfe160ec9a57b16f834dd206a4c230aa4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727137672441658469,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2jt2x,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 2353cad3-6dc8-4fcd-9f70-755ebdbf3bbb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a80eb915d724ea9baff23a6b7094b8ae35e34bc9e96fabe4a2a99df6aea6dd9,PodSandboxId:6054821997e6ea5c7904abd7a93043e6148c6061a10588e419ab9897b280dfa4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727137672224006592,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4ncsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abbb8a3b-f9bc-4ab9-bad1
-c72d2075ada4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6003f3f1b6367bb96065a6243ff34bb6701840ce67df93e2feb005d548ceaeb,PodSandboxId:5bafd4e2bf71192b2486f96d876f1d809d2f416be191657a451b6fd1191ba0eb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727137661275525959,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be95db3445969924f3fca9820f3018f9,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b98807a030c3691ed3ff8a125e673207c63e9a99c5ea6cb8859026521ca5295a,PodSandboxId:bf6d66ecdda0fefa0099422937d2d8c96f1551db3e26b3cbf78a9bfbeb1a2038,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727137661225219455,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e2144f47b1e53721f48386515d5232a,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1dea2a49f50cd2690cd94ebed4ffb97ab813d4c6fb8ea59dbb02231936efba0,PodSandboxId:cd3ebc37a81fcd1ff272adc74ecfba43b94e16f3c4b7f72d425d74d390cd5ec5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727137661228831367,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab2acadb533b40fb8b098d0f4fa0603f,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33b18f596b4effba4cf1fa17ae441e1bd1ab9d6738cd7313f9ba3b137bfcb237,PodSandboxId:8b9d12c4629274fb95f288e05dabb7354c99502123c3c67cfdc1813bfdafcadc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727137661178372193,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2eb25a2b0e40642b2d0d09caef02131f,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1890b599-2ebb-433f-9c3a-23c47b0d5abf name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:38:31 multinode-246036 crio[2766]: time="2024-09-24 00:38:31.076298307Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=58ab2747-b20c-4086-af52-407519315061 name=/runtime.v1.RuntimeService/Version
	Sep 24 00:38:31 multinode-246036 crio[2766]: time="2024-09-24 00:38:31.076395507Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=58ab2747-b20c-4086-af52-407519315061 name=/runtime.v1.RuntimeService/Version
	Sep 24 00:38:31 multinode-246036 crio[2766]: time="2024-09-24 00:38:31.077539170Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8165a90b-94b2-4922-a428-3c058e343687 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:38:31 multinode-246036 crio[2766]: time="2024-09-24 00:38:31.078100668Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727138311078073723,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8165a90b-94b2-4922-a428-3c058e343687 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:38:31 multinode-246036 crio[2766]: time="2024-09-24 00:38:31.078572092Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8bf8318a-8c62-4597-82e0-4f0b9dbd8473 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:38:31 multinode-246036 crio[2766]: time="2024-09-24 00:38:31.078627223Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8bf8318a-8c62-4597-82e0-4f0b9dbd8473 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:38:31 multinode-246036 crio[2766]: time="2024-09-24 00:38:31.079011014Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9beea755724ad7eb373d54f59989e0f7420cd113bbf8e082fbc7e95c96d37075,PodSandboxId:12914b88b7fef15008a1ccdb4c89d948098ea859396fac42277df56f61610611,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727138098887145679,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-b5dpk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4780e514-d69a-42fe-8f9a-ee4c0fae351c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf020b0b565a293238a03d835cc1d1de694cd7752408142001e82820e77a6666,PodSandboxId:4caa1af881aaba20dc884ea8b5fd8509637fc7f2cd761953a9c346dcbd21457f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727138065365368160,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2jt2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2353cad3-6dc8-4fcd-9f70-755ebdbf3bbb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:514051851b1eb7e6d20d521224f6f47d16d2212f3f25adb982f2f0b76b5de33d,PodSandboxId:6f97cd1430faa49e07c1a96c09253f0f51414112ce649df5a89d4d1c3e58ca6b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727138065197638960,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4ncsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abbb8a3b-f9bc-4ab9-bad1-c72d2075ada4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2b93d287ad3d88e566244261abb290fa350083890cdbb7488f7d3291df3c7c8,PodSandboxId:0a3e21cfa3565f19d05c1e2280190686916aca79c0db8835263ebf43d1ef8324,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727138065227620166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-69257,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18b6a314-9fbf-4bf9-b020-fdba57cffea0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",
\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efe4ee672aebb4a392abf9d1964adcfc6e6d80c2ca31f65ae2c315fcf1cd262f,PodSandboxId:bc75ada1f9eb0a9122e67fdac1ebc7e0d5f20d69c8b9688b7891fad92ff655ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727138065135897380,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4939b00-1847-48a6-85b9-c1d920f5617d,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b45076d45479aab606fd896041c8a4ee90a35db4de143fcaac0c107c5e0635f0,PodSandboxId:d545fb3515575a48577ad755a143f37c7b26cbf0f55860ba27892de17335d3f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727138060409896888,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e2144f47b1e53721f48386515d5232a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:586488001f58fc62297b98b683cb2ccd93906878ca19ba6eb36d3923feb47161,PodSandboxId:ee56de5939931366d737f2a0f1e2d4ac348a468d94a6d444c17a9ad87ea67518,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727138060332276537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be95db3445969924f3fca9820f3018f9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01fd569a601fa6172655a36f03bfec07f73116e1f6606250b55b26a0520da940,PodSandboxId:551e7742dc9b64e9198f2fd16c28e0e3b4312dcdda153acdb55be13b8a6d14e7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727138060365949764,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2eb25a2b0e40642b2d0d09caef02131f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4a4ea183a26a012d11be7880e001424832dcfdcc2ddd5299a6fe25f32de7916,PodSandboxId:93394c6db54349656af40af84e110bf5c50f6bcf150b6cc10281fff859c5eb19,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727138060296516113,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab2acadb533b40fb8b098d0f4fa0603f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:598080e596aa026359cf326ae772de4d9c204504d5666c849fc68597cc8624ff,PodSandboxId:b3b915a71dc30f1c398e490cfcfcc2130ebc6dca2ad801c32e85977173dacbc0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727137738603452263,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-b5dpk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4780e514-d69a-42fe-8f9a-ee4c0fae351c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc559fe548fce4e11effa6148efd01e8ecfcdaff0beb6a7c79ceae55c7c28cec,PodSandboxId:64fa2c553f79a15881910b60c099a7b5ccf7558c46adb88da6ececf26441c080,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727137684453046976,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-69257,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18b6a314-9fbf-4bf9-b020-fdba57cffea0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5058772d1973616e34a3182a09e02e261c4af4678059f218039b9f253ac2867a,PodSandboxId:d9af3918ad28367a0fe5d7d927c8fbdd938be29e29736fcefdc171cdf35100e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727137684379444433,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: f4939b00-1847-48a6-85b9-c1d920f5617d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8abe628fa9ebc296eda69551985040e5281c42345224c3b2e485657f3e6e1a,PodSandboxId:49b89dbc9cb7e874596e3ec13b61a4bbfe160ec9a57b16f834dd206a4c230aa4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727137672441658469,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2jt2x,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 2353cad3-6dc8-4fcd-9f70-755ebdbf3bbb,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a80eb915d724ea9baff23a6b7094b8ae35e34bc9e96fabe4a2a99df6aea6dd9,PodSandboxId:6054821997e6ea5c7904abd7a93043e6148c6061a10588e419ab9897b280dfa4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727137672224006592,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4ncsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abbb8a3b-f9bc-4ab9-bad1
-c72d2075ada4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6003f3f1b6367bb96065a6243ff34bb6701840ce67df93e2feb005d548ceaeb,PodSandboxId:5bafd4e2bf71192b2486f96d876f1d809d2f416be191657a451b6fd1191ba0eb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727137661275525959,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be95db3445969924f3fca9820f3018f9,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b98807a030c3691ed3ff8a125e673207c63e9a99c5ea6cb8859026521ca5295a,PodSandboxId:bf6d66ecdda0fefa0099422937d2d8c96f1551db3e26b3cbf78a9bfbeb1a2038,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727137661225219455,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e2144f47b1e53721f48386515d5232a,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1dea2a49f50cd2690cd94ebed4ffb97ab813d4c6fb8ea59dbb02231936efba0,PodSandboxId:cd3ebc37a81fcd1ff272adc74ecfba43b94e16f3c4b7f72d425d74d390cd5ec5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727137661228831367,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab2acadb533b40fb8b098d0f4fa0603f,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33b18f596b4effba4cf1fa17ae441e1bd1ab9d6738cd7313f9ba3b137bfcb237,PodSandboxId:8b9d12c4629274fb95f288e05dabb7354c99502123c3c67cfdc1813bfdafcadc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727137661178372193,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-246036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2eb25a2b0e40642b2d0d09caef02131f,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8bf8318a-8c62-4597-82e0-4f0b9dbd8473 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9beea755724ad       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   12914b88b7fef       busybox-7dff88458-b5dpk
	bf020b0b565a2       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      4 minutes ago       Running             kindnet-cni               1                   4caa1af881aab       kindnet-2jt2x
	c2b93d287ad3d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      4 minutes ago       Running             coredns                   1                   0a3e21cfa3565       coredns-7c65d6cfc9-69257
	514051851b1eb       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      4 minutes ago       Running             kube-proxy                1                   6f97cd1430faa       kube-proxy-4ncsm
	efe4ee672aebb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   bc75ada1f9eb0       storage-provisioner
	b45076d45479a       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      4 minutes ago       Running             etcd                      1                   d545fb3515575       etcd-multinode-246036
	01fd569a601fa       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      4 minutes ago       Running             kube-apiserver            1                   551e7742dc9b6       kube-apiserver-multinode-246036
	586488001f58f       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      4 minutes ago       Running             kube-scheduler            1                   ee56de5939931       kube-scheduler-multinode-246036
	b4a4ea183a26a       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      4 minutes ago       Running             kube-controller-manager   1                   93394c6db5434       kube-controller-manager-multinode-246036
	598080e596aa0       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   b3b915a71dc30       busybox-7dff88458-b5dpk
	bc559fe548fce       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      10 minutes ago      Exited              coredns                   0                   64fa2c553f79a       coredns-7c65d6cfc9-69257
	5058772d19736       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   d9af3918ad283       storage-provisioner
	5b8abe628fa9e       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      10 minutes ago      Exited              kindnet-cni               0                   49b89dbc9cb7e       kindnet-2jt2x
	4a80eb915d724       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      10 minutes ago      Exited              kube-proxy                0                   6054821997e6e       kube-proxy-4ncsm
	a6003f3f1b636       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      10 minutes ago      Exited              kube-scheduler            0                   5bafd4e2bf711       kube-scheduler-multinode-246036
	f1dea2a49f50c       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      10 minutes ago      Exited              kube-controller-manager   0                   cd3ebc37a81fc       kube-controller-manager-multinode-246036
	b98807a030c36       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      10 minutes ago      Exited              etcd                      0                   bf6d66ecdda0f       etcd-multinode-246036
	33b18f596b4ef       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      10 minutes ago      Exited              kube-apiserver            0                   8b9d12c462927       kube-apiserver-multinode-246036
	
	
	==> coredns [bc559fe548fce4e11effa6148efd01e8ecfcdaff0beb6a7c79ceae55c7c28cec] <==
	[INFO] 10.244.0.3:41552 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002235218s
	[INFO] 10.244.0.3:40459 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00009399s
	[INFO] 10.244.0.3:55802 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000081863s
	[INFO] 10.244.0.3:47094 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001450217s
	[INFO] 10.244.0.3:59499 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000074715s
	[INFO] 10.244.0.3:51474 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068892s
	[INFO] 10.244.0.3:58549 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000196573s
	[INFO] 10.244.1.2:41654 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169624s
	[INFO] 10.244.1.2:46021 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104158s
	[INFO] 10.244.1.2:33984 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097059s
	[INFO] 10.244.1.2:53601 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000145137s
	[INFO] 10.244.0.3:56408 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000442975s
	[INFO] 10.244.0.3:51206 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105408s
	[INFO] 10.244.0.3:40493 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068583s
	[INFO] 10.244.0.3:38595 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063891s
	[INFO] 10.244.1.2:50852 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000156383s
	[INFO] 10.244.1.2:44648 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00028132s
	[INFO] 10.244.1.2:42989 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00017178s
	[INFO] 10.244.1.2:48496 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000120554s
	[INFO] 10.244.0.3:39858 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000204882s
	[INFO] 10.244.0.3:49340 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000090419s
	[INFO] 10.244.0.3:34926 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000078461s
	[INFO] 10.244.0.3:39068 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000065707s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c2b93d287ad3d88e566244261abb290fa350083890cdbb7488f7d3291df3c7c8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:41605 - 29420 "HINFO IN 1477722195987132737.4247350318425163224. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.007398848s
	
	
	==> describe nodes <==
	Name:               multinode-246036
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-246036
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c
	                    minikube.k8s.io/name=multinode-246036
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_24T00_27_47_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 00:27:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-246036
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 00:38:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 00:34:23 +0000   Tue, 24 Sep 2024 00:27:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 00:34:23 +0000   Tue, 24 Sep 2024 00:27:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 00:34:23 +0000   Tue, 24 Sep 2024 00:27:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 00:34:23 +0000   Tue, 24 Sep 2024 00:28:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.199
	  Hostname:    multinode-246036
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4f008ce8a34347c893fac80674868796
	  System UUID:                4f008ce8-a343-47c8-93fa-c80674868796
	  Boot ID:                    5fb8e198-b346-48f3-91a6-24e72e61aa1d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-b5dpk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m36s
	  kube-system                 coredns-7c65d6cfc9-69257                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-246036                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-2jt2x                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-246036             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-246036    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-4ncsm                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-246036             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 4m5s                   kube-proxy       
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node multinode-246036 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node multinode-246036 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node multinode-246036 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                    node-controller  Node multinode-246036 event: Registered Node multinode-246036 in Controller
	  Normal  NodeReady                10m                    kubelet          Node multinode-246036 status is now: NodeReady
	  Normal  Starting                 4m12s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m12s (x8 over 4m12s)  kubelet          Node multinode-246036 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m12s (x8 over 4m12s)  kubelet          Node multinode-246036 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m12s (x7 over 4m12s)  kubelet          Node multinode-246036 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m4s                   node-controller  Node multinode-246036 event: Registered Node multinode-246036 in Controller
	
	
	Name:               multinode-246036-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-246036-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c
	                    minikube.k8s.io/name=multinode-246036
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_24T00_35_03_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 00:35:03 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-246036-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 00:36:04 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 24 Sep 2024 00:35:33 +0000   Tue, 24 Sep 2024 00:36:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 24 Sep 2024 00:35:33 +0000   Tue, 24 Sep 2024 00:36:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 24 Sep 2024 00:35:33 +0000   Tue, 24 Sep 2024 00:36:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 24 Sep 2024 00:35:33 +0000   Tue, 24 Sep 2024 00:36:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.150
	  Hostname:    multinode-246036-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b8e4e9b6aac945b0a0a1e5898db0422f
	  System UUID:                b8e4e9b6-aac9-45b0-a0a1-e5898db0422f
	  Boot ID:                    cda88509-cb60-4e5f-aa71-84ac16bff177
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-c9kq6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  kube-system                 kindnet-j9klb              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m59s
	  kube-system                 kube-proxy-lwpzt           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m54s                  kube-proxy       
	  Normal  Starting                 3m23s                  kube-proxy       
	  Normal  NodeHasNoDiskPressure    9m59s (x2 over 10m)    kubelet          Node multinode-246036-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m59s (x2 over 10m)    kubelet          Node multinode-246036-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m59s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m59s (x2 over 10m)    kubelet          Node multinode-246036-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                9m38s                  kubelet          Node multinode-246036-m02 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  3m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m28s (x2 over 3m29s)  kubelet          Node multinode-246036-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m28s (x2 over 3m29s)  kubelet          Node multinode-246036-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m28s (x2 over 3m29s)  kubelet          Node multinode-246036-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m24s                  node-controller  Node multinode-246036-m02 event: Registered Node multinode-246036-m02 in Controller
	  Normal  NodeReady                3m9s                   kubelet          Node multinode-246036-m02 status is now: NodeReady
	  Normal  NodeNotReady             104s                   node-controller  Node multinode-246036-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.053107] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.163046] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.145219] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.269851] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +3.822410] systemd-fstab-generator[746]: Ignoring "noauto" option for root device
	[  +3.480813] systemd-fstab-generator[877]: Ignoring "noauto" option for root device
	[  +0.061509] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.498670] systemd-fstab-generator[1212]: Ignoring "noauto" option for root device
	[  +0.083407] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.548525] systemd-fstab-generator[1310]: Ignoring "noauto" option for root device
	[  +1.000132] kauditd_printk_skb: 46 callbacks suppressed
	[Sep24 00:28] kauditd_printk_skb: 41 callbacks suppressed
	[ +51.195758] kauditd_printk_skb: 12 callbacks suppressed
	[Sep24 00:34] systemd-fstab-generator[2638]: Ignoring "noauto" option for root device
	[  +0.162179] systemd-fstab-generator[2650]: Ignoring "noauto" option for root device
	[  +0.163958] systemd-fstab-generator[2664]: Ignoring "noauto" option for root device
	[  +0.139283] systemd-fstab-generator[2676]: Ignoring "noauto" option for root device
	[  +0.269430] systemd-fstab-generator[2705]: Ignoring "noauto" option for root device
	[  +4.392540] systemd-fstab-generator[2851]: Ignoring "noauto" option for root device
	[  +0.078342] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.968583] systemd-fstab-generator[2972]: Ignoring "noauto" option for root device
	[  +5.652820] kauditd_printk_skb: 74 callbacks suppressed
	[ +13.833181] systemd-fstab-generator[3794]: Ignoring "noauto" option for root device
	[  +0.096573] kauditd_printk_skb: 34 callbacks suppressed
	[ +19.838501] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [b45076d45479aab606fd896041c8a4ee90a35db4de143fcaac0c107c5e0635f0] <==
	{"level":"info","ts":"2024-09-24T00:34:21.028532Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.199:2380"}
	{"level":"info","ts":"2024-09-24T00:34:21.032749Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.199:2380"}
	{"level":"info","ts":"2024-09-24T00:34:21.034930Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"adf16ee9d395f7b5","initial-advertise-peer-urls":["https://192.168.39.199:2380"],"listen-peer-urls":["https://192.168.39.199:2380"],"advertise-client-urls":["https://192.168.39.199:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.199:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-24T00:34:21.035009Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-24T00:34:22.410610Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"adf16ee9d395f7b5 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-24T00:34:22.410774Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"adf16ee9d395f7b5 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-24T00:34:22.410840Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"adf16ee9d395f7b5 received MsgPreVoteResp from adf16ee9d395f7b5 at term 2"}
	{"level":"info","ts":"2024-09-24T00:34:22.410887Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"adf16ee9d395f7b5 became candidate at term 3"}
	{"level":"info","ts":"2024-09-24T00:34:22.410924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"adf16ee9d395f7b5 received MsgVoteResp from adf16ee9d395f7b5 at term 3"}
	{"level":"info","ts":"2024-09-24T00:34:22.410951Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"adf16ee9d395f7b5 became leader at term 3"}
	{"level":"info","ts":"2024-09-24T00:34:22.410976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: adf16ee9d395f7b5 elected leader adf16ee9d395f7b5 at term 3"}
	{"level":"info","ts":"2024-09-24T00:34:22.416038Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"adf16ee9d395f7b5","local-member-attributes":"{Name:multinode-246036 ClientURLs:[https://192.168.39.199:2379]}","request-path":"/0/members/adf16ee9d395f7b5/attributes","cluster-id":"beb078c6af941210","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-24T00:34:22.416065Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-24T00:34:22.416298Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-24T00:34:22.416334Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-24T00:34:22.416083Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-24T00:34:22.417341Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-24T00:34:22.417376Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-24T00:34:22.418243Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.199:2379"}
	{"level":"info","ts":"2024-09-24T00:34:22.418990Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-24T00:35:07.355140Z","caller":"traceutil/trace.go:171","msg":"trace[1395870307] transaction","detail":"{read_only:false; response_revision:1025; number_of_response:1; }","duration":"194.280524ms","start":"2024-09-24T00:35:07.160484Z","end":"2024-09-24T00:35:07.354764Z","steps":["trace[1395870307] 'process raft request'  (duration: 194.106094ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-24T00:35:11.041014Z","caller":"traceutil/trace.go:171","msg":"trace[117307701] linearizableReadLoop","detail":"{readStateIndex:1132; appliedIndex:1131; }","duration":"108.556971ms","start":"2024-09-24T00:35:10.932443Z","end":"2024-09-24T00:35:11.041000Z","steps":["trace[117307701] 'read index received'  (duration: 108.387797ms)","trace[117307701] 'applied index is now lower than readState.Index'  (duration: 168.689µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-24T00:35:11.041229Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.724982ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-246036-m02\" ","response":"range_response_count:1 size:3119"}
	{"level":"info","ts":"2024-09-24T00:35:11.041298Z","caller":"traceutil/trace.go:171","msg":"trace[555372187] range","detail":"{range_begin:/registry/minions/multinode-246036-m02; range_end:; response_count:1; response_revision:1033; }","duration":"108.86359ms","start":"2024-09-24T00:35:10.932426Z","end":"2024-09-24T00:35:11.041290Z","steps":["trace[555372187] 'agreement among raft nodes before linearized reading'  (duration: 108.654954ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-24T00:35:11.041444Z","caller":"traceutil/trace.go:171","msg":"trace[1060207262] transaction","detail":"{read_only:false; response_revision:1033; number_of_response:1; }","duration":"132.086697ms","start":"2024-09-24T00:35:10.909345Z","end":"2024-09-24T00:35:11.041432Z","steps":["trace[1060207262] 'process raft request'  (duration: 131.526473ms)"],"step_count":1}
	
	
	==> etcd [b98807a030c3691ed3ff8a125e673207c63e9a99c5ea6cb8859026521ca5295a] <==
	{"level":"info","ts":"2024-09-24T00:28:41.340936Z","caller":"traceutil/trace.go:171","msg":"trace[643533056] linearizableReadLoop","detail":"{readStateIndex:495; appliedIndex:494; }","duration":"144.879857ms","start":"2024-09-24T00:28:41.196046Z","end":"2024-09-24T00:28:41.340926Z","steps":["trace[643533056] 'read index received'  (duration: 8.883044ms)","trace[643533056] 'applied index is now lower than readState.Index'  (duration: 135.995852ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-24T00:28:41.341015Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.959394ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-24T00:28:41.341048Z","caller":"traceutil/trace.go:171","msg":"trace[170210422] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:476; }","duration":"145.000705ms","start":"2024-09-24T00:28:41.196042Z","end":"2024-09-24T00:28:41.341043Z","steps":["trace[170210422] 'agreement among raft nodes before linearized reading'  (duration: 144.939257ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-24T00:29:26.457500Z","caller":"traceutil/trace.go:171","msg":"trace[947301684] linearizableReadLoop","detail":"{readStateIndex:597; appliedIndex:596; }","duration":"190.367914ms","start":"2024-09-24T00:29:26.267067Z","end":"2024-09-24T00:29:26.457435Z","steps":["trace[947301684] 'read index received'  (duration: 106.192873ms)","trace[947301684] 'applied index is now lower than readState.Index'  (duration: 84.173833ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-24T00:29:26.457655Z","caller":"traceutil/trace.go:171","msg":"trace[1669373303] transaction","detail":"{read_only:false; response_revision:568; number_of_response:1; }","duration":"207.152138ms","start":"2024-09-24T00:29:26.250496Z","end":"2024-09-24T00:29:26.457648Z","steps":["trace[1669373303] 'process raft request'  (duration: 122.801076ms)","trace[1669373303] 'compare'  (duration: 83.969097ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-24T00:29:26.457947Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"190.835448ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-246036-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-24T00:29:26.458005Z","caller":"traceutil/trace.go:171","msg":"trace[1951949096] range","detail":"{range_begin:/registry/minions/multinode-246036-m03; range_end:; response_count:0; response_revision:568; }","duration":"190.947215ms","start":"2024-09-24T00:29:26.267051Z","end":"2024-09-24T00:29:26.457998Z","steps":["trace[1951949096] 'agreement among raft nodes before linearized reading'  (duration: 190.765411ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-24T00:29:26.459161Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"188.845962ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-09-24T00:29:26.459644Z","caller":"traceutil/trace.go:171","msg":"trace[513377949] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:0; response_revision:569; }","duration":"189.343952ms","start":"2024-09-24T00:29:26.270291Z","end":"2024-09-24T00:29:26.459635Z","steps":["trace[513377949] 'agreement among raft nodes before linearized reading'  (duration: 188.793829ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-24T00:29:33.480014Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"197.709315ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-246036-m03\" ","response":"range_response_count:1 size:2894"}
	{"level":"info","ts":"2024-09-24T00:29:33.480274Z","caller":"traceutil/trace.go:171","msg":"trace[418634358] range","detail":"{range_begin:/registry/minions/multinode-246036-m03; range_end:; response_count:1; response_revision:607; }","duration":"197.94857ms","start":"2024-09-24T00:29:33.282281Z","end":"2024-09-24T00:29:33.480229Z","steps":["trace[418634358] 'range keys from in-memory index tree'  (duration: 197.574146ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-24T00:30:20.216772Z","caller":"traceutil/trace.go:171","msg":"trace[1525947365] linearizableReadLoop","detail":"{readStateIndex:736; appliedIndex:735; }","duration":"115.610017ms","start":"2024-09-24T00:30:20.101148Z","end":"2024-09-24T00:30:20.216758Z","steps":["trace[1525947365] 'read index received'  (duration: 115.43229ms)","trace[1525947365] 'applied index is now lower than readState.Index'  (duration: 177.024µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-24T00:30:20.216949Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.77156ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-246036-m03\" ","response":"range_response_count:1 size:3119"}
	{"level":"info","ts":"2024-09-24T00:30:20.216973Z","caller":"traceutil/trace.go:171","msg":"trace[974474277] range","detail":"{range_begin:/registry/minions/multinode-246036-m03; range_end:; response_count:1; response_revision:693; }","duration":"115.839059ms","start":"2024-09-24T00:30:20.101128Z","end":"2024-09-24T00:30:20.216967Z","steps":["trace[974474277] 'agreement among raft nodes before linearized reading'  (duration: 115.713147ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-24T00:30:20.217141Z","caller":"traceutil/trace.go:171","msg":"trace[1236333319] transaction","detail":"{read_only:false; response_revision:693; number_of_response:1; }","duration":"135.277746ms","start":"2024-09-24T00:30:20.081852Z","end":"2024-09-24T00:30:20.217130Z","steps":["trace[1236333319] 'process raft request'  (duration: 134.764376ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-24T00:32:41.014660Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-24T00:32:41.014997Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-246036","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.199:2380"],"advertise-client-urls":["https://192.168.39.199:2379"]}
	{"level":"warn","ts":"2024-09-24T00:32:41.015138Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-24T00:32:41.015238Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-24T00:32:41.075384Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.199:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-24T00:32:41.075437Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.199:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-24T00:32:41.075500Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"adf16ee9d395f7b5","current-leader-member-id":"adf16ee9d395f7b5"}
	{"level":"info","ts":"2024-09-24T00:32:41.080107Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.199:2380"}
	{"level":"info","ts":"2024-09-24T00:32:41.080202Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.199:2380"}
	{"level":"info","ts":"2024-09-24T00:32:41.080225Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-246036","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.199:2380"],"advertise-client-urls":["https://192.168.39.199:2379"]}
	
	
	==> kernel <==
	 00:38:31 up 11 min,  0 users,  load average: 0.18, 0.26, 0.15
	Linux multinode-246036 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [5b8abe628fa9ebc296eda69551985040e5281c42345224c3b2e485657f3e6e1a] <==
	I0924 00:31:53.403635       1 main.go:322] Node multinode-246036-m03 has CIDR [10.244.3.0/24] 
	I0924 00:32:03.403928       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0924 00:32:03.404012       1 main.go:322] Node multinode-246036-m02 has CIDR [10.244.1.0/24] 
	I0924 00:32:03.404191       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0924 00:32:03.404216       1 main.go:322] Node multinode-246036-m03 has CIDR [10.244.3.0/24] 
	I0924 00:32:03.404330       1 main.go:295] Handling node with IPs: map[192.168.39.199:{}]
	I0924 00:32:03.404350       1 main.go:299] handling current node
	I0924 00:32:13.412003       1 main.go:295] Handling node with IPs: map[192.168.39.199:{}]
	I0924 00:32:13.412218       1 main.go:299] handling current node
	I0924 00:32:13.412264       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0924 00:32:13.412284       1 main.go:322] Node multinode-246036-m02 has CIDR [10.244.1.0/24] 
	I0924 00:32:13.412467       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0924 00:32:13.412498       1 main.go:322] Node multinode-246036-m03 has CIDR [10.244.3.0/24] 
	I0924 00:32:23.409596       1 main.go:295] Handling node with IPs: map[192.168.39.199:{}]
	I0924 00:32:23.409663       1 main.go:299] handling current node
	I0924 00:32:23.409741       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0924 00:32:23.409751       1 main.go:322] Node multinode-246036-m02 has CIDR [10.244.1.0/24] 
	I0924 00:32:23.409921       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0924 00:32:23.409939       1 main.go:322] Node multinode-246036-m03 has CIDR [10.244.3.0/24] 
	I0924 00:32:33.406107       1 main.go:295] Handling node with IPs: map[192.168.39.199:{}]
	I0924 00:32:33.406248       1 main.go:299] handling current node
	I0924 00:32:33.406273       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0924 00:32:33.406279       1 main.go:322] Node multinode-246036-m02 has CIDR [10.244.1.0/24] 
	I0924 00:32:33.406579       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0924 00:32:33.406600       1 main.go:322] Node multinode-246036-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [bf020b0b565a293238a03d835cc1d1de694cd7752408142001e82820e77a6666] <==
	I0924 00:37:26.310462       1 main.go:322] Node multinode-246036-m02 has CIDR [10.244.1.0/24] 
	I0924 00:37:36.316398       1 main.go:295] Handling node with IPs: map[192.168.39.199:{}]
	I0924 00:37:36.316504       1 main.go:299] handling current node
	I0924 00:37:36.316534       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0924 00:37:36.316551       1 main.go:322] Node multinode-246036-m02 has CIDR [10.244.1.0/24] 
	I0924 00:37:46.319756       1 main.go:295] Handling node with IPs: map[192.168.39.199:{}]
	I0924 00:37:46.319804       1 main.go:299] handling current node
	I0924 00:37:46.319819       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0924 00:37:46.319825       1 main.go:322] Node multinode-246036-m02 has CIDR [10.244.1.0/24] 
	I0924 00:37:56.310651       1 main.go:295] Handling node with IPs: map[192.168.39.199:{}]
	I0924 00:37:56.310800       1 main.go:299] handling current node
	I0924 00:37:56.310832       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0924 00:37:56.310852       1 main.go:322] Node multinode-246036-m02 has CIDR [10.244.1.0/24] 
	I0924 00:38:06.314924       1 main.go:295] Handling node with IPs: map[192.168.39.199:{}]
	I0924 00:38:06.314985       1 main.go:299] handling current node
	I0924 00:38:06.315012       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0924 00:38:06.315020       1 main.go:322] Node multinode-246036-m02 has CIDR [10.244.1.0/24] 
	I0924 00:38:16.310075       1 main.go:295] Handling node with IPs: map[192.168.39.199:{}]
	I0924 00:38:16.310227       1 main.go:299] handling current node
	I0924 00:38:16.310263       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0924 00:38:16.310282       1 main.go:322] Node multinode-246036-m02 has CIDR [10.244.1.0/24] 
	I0924 00:38:26.310472       1 main.go:295] Handling node with IPs: map[192.168.39.199:{}]
	I0924 00:38:26.310638       1 main.go:299] handling current node
	I0924 00:38:26.310729       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0924 00:38:26.310756       1 main.go:322] Node multinode-246036-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [01fd569a601fa6172655a36f03bfec07f73116e1f6606250b55b26a0520da940] <==
	I0924 00:34:23.717420       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0924 00:34:23.738459       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0924 00:34:23.746105       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0924 00:34:23.747047       1 aggregator.go:171] initial CRD sync complete...
	I0924 00:34:23.747076       1 autoregister_controller.go:144] Starting autoregister controller
	I0924 00:34:23.747084       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0924 00:34:23.747090       1 cache.go:39] Caches are synced for autoregister controller
	I0924 00:34:23.766387       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0924 00:34:23.766418       1 policy_source.go:224] refreshing policies
	I0924 00:34:23.813503       1 shared_informer.go:320] Caches are synced for configmaps
	I0924 00:34:23.814358       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0924 00:34:23.814758       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0924 00:34:23.814878       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0924 00:34:23.816842       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0924 00:34:23.819709       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0924 00:34:23.825377       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0924 00:34:23.855598       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0924 00:34:24.621322       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0924 00:34:25.923949       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0924 00:34:26.101906       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0924 00:34:26.130397       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0924 00:34:26.238705       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0924 00:34:26.257032       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0924 00:34:27.359565       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0924 00:34:27.409209       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [33b18f596b4effba4cf1fa17ae441e1bd1ab9d6738cd7313f9ba3b137bfcb237] <==
	E0924 00:29:00.850479       1 conn.go:339] Error on socket receive: read tcp 192.168.39.199:8443->192.168.39.1:56668: use of closed network connection
	E0924 00:29:01.013956       1 conn.go:339] Error on socket receive: read tcp 192.168.39.199:8443->192.168.39.1:56690: use of closed network connection
	E0924 00:29:01.182169       1 conn.go:339] Error on socket receive: read tcp 192.168.39.199:8443->192.168.39.1:56710: use of closed network connection
	E0924 00:29:01.343809       1 conn.go:339] Error on socket receive: read tcp 192.168.39.199:8443->192.168.39.1:56724: use of closed network connection
	I0924 00:32:41.016518       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0924 00:32:41.028279       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 00:32:41.028366       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 00:32:41.028403       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 00:32:41.028436       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 00:32:41.028504       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 00:32:41.028540       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 00:32:41.028572       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 00:32:41.028606       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 00:32:41.037639       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 00:32:41.041373       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 00:32:41.041468       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 00:32:41.041527       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 00:32:41.041588       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 00:32:41.041649       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 00:32:41.041894       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 00:32:41.041971       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 00:32:41.042043       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 00:32:41.042104       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 00:32:41.042169       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 00:32:41.043085       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [b4a4ea183a26a012d11be7880e001424832dcfdcc2ddd5299a6fe25f32de7916] <==
	E0924 00:35:41.850577       1 range_allocator.go:433] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"multinode-246036-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-246036-m03"
	E0924 00:35:41.850645       1 range_allocator.go:246] "Unhandled Error" err="error syncing 'multinode-246036-m03': failed to patch node CIDR: Node \"multinode-246036-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0924 00:35:41.850707       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m03"
	I0924 00:35:41.855904       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m03"
	I0924 00:35:42.159119       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m03"
	I0924 00:35:42.192172       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m03"
	I0924 00:35:42.504846       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m03"
	I0924 00:35:51.961821       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m03"
	I0924 00:36:01.476837       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-246036-m02"
	I0924 00:36:01.477668       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m03"
	I0924 00:36:01.490319       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m03"
	I0924 00:36:02.117301       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m03"
	I0924 00:36:06.220302       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m03"
	I0924 00:36:06.236190       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m03"
	I0924 00:36:06.681877       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m03"
	I0924 00:36:06.681873       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-246036-m02"
	I0924 00:36:47.080884       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-ws9d8"
	I0924 00:36:47.111085       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-ws9d8"
	I0924 00:36:47.111210       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-59frq"
	I0924 00:36:47.137011       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m02"
	I0924 00:36:47.167148       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-59frq"
	I0924 00:36:47.194234       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="44.608072ms"
	I0924 00:36:47.195578       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="29.381µs"
	I0924 00:36:47.198839       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m02"
	I0924 00:36:52.357273       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m02"
	
	
	==> kube-controller-manager [f1dea2a49f50cd2690cd94ebed4ffb97ab813d4c6fb8ea59dbb02231936efba0] <==
	I0924 00:30:14.350396       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m03"
	I0924 00:30:14.587132       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m03"
	I0924 00:30:14.587358       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-246036-m02"
	I0924 00:30:15.881365       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-246036-m02"
	I0924 00:30:15.883183       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-246036-m03\" does not exist"
	I0924 00:30:15.906520       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-246036-m03" podCIDRs=["10.244.3.0/24"]
	I0924 00:30:15.906953       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m03"
	I0924 00:30:15.907091       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m03"
	I0924 00:30:15.922803       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m03"
	I0924 00:30:16.123454       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m03"
	I0924 00:30:16.444774       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m03"
	I0924 00:30:26.254390       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m03"
	I0924 00:30:35.459949       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m03"
	I0924 00:30:35.460285       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-246036-m02"
	I0924 00:30:35.474518       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m03"
	I0924 00:30:35.902505       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m03"
	I0924 00:31:15.919098       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m02"
	I0924 00:31:15.919576       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-246036-m03"
	I0924 00:31:15.934405       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m02"
	I0924 00:31:15.975339       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="10.390271ms"
	I0924 00:31:15.976026       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="84.987µs"
	I0924 00:31:20.978228       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m03"
	I0924 00:31:20.999608       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m03"
	I0924 00:31:21.051269       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m02"
	I0924 00:31:31.129615       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-246036-m03"
	
	
	==> kube-proxy [4a80eb915d724ea9baff23a6b7094b8ae35e34bc9e96fabe4a2a99df6aea6dd9] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0924 00:27:52.391392       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0924 00:27:52.402184       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.199"]
	E0924 00:27:52.402346       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0924 00:27:52.464414       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0924 00:27:52.464531       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0924 00:27:52.464568       1 server_linux.go:169] "Using iptables Proxier"
	I0924 00:27:52.467441       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0924 00:27:52.467825       1 server.go:483] "Version info" version="v1.31.1"
	I0924 00:27:52.468010       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 00:27:52.469326       1 config.go:199] "Starting service config controller"
	I0924 00:27:52.469516       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0924 00:27:52.469587       1 config.go:105] "Starting endpoint slice config controller"
	I0924 00:27:52.469605       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0924 00:27:52.470176       1 config.go:328] "Starting node config controller"
	I0924 00:27:52.471284       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0924 00:27:52.569665       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0924 00:27:52.569787       1 shared_informer.go:320] Caches are synced for service config
	I0924 00:27:52.571819       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [514051851b1eb7e6d20d521224f6f47d16d2212f3f25adb982f2f0b76b5de33d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0924 00:34:25.589300       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0924 00:34:25.600227       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.199"]
	E0924 00:34:25.600546       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0924 00:34:25.666259       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0924 00:34:25.666303       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0924 00:34:25.666332       1 server_linux.go:169] "Using iptables Proxier"
	I0924 00:34:25.668805       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0924 00:34:25.669255       1 server.go:483] "Version info" version="v1.31.1"
	I0924 00:34:25.669308       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 00:34:25.671545       1 config.go:199] "Starting service config controller"
	I0924 00:34:25.671572       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0924 00:34:25.671591       1 config.go:105] "Starting endpoint slice config controller"
	I0924 00:34:25.671594       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0924 00:34:25.672142       1 config.go:328] "Starting node config controller"
	I0924 00:34:25.672168       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0924 00:34:25.772155       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0924 00:34:25.772230       1 shared_informer.go:320] Caches are synced for service config
	I0924 00:34:25.773143       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [586488001f58fc62297b98b683cb2ccd93906878ca19ba6eb36d3923feb47161] <==
	I0924 00:34:21.496905       1 serving.go:386] Generated self-signed cert in-memory
	W0924 00:34:23.670133       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0924 00:34:23.670208       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0924 00:34:23.670218       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0924 00:34:23.670228       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0924 00:34:23.762483       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0924 00:34:23.762583       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 00:34:23.764783       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0924 00:34:23.764843       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0924 00:34:23.765070       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0924 00:34:23.765175       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0924 00:34:23.868239       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [a6003f3f1b6367bb96065a6243ff34bb6701840ce67df93e2feb005d548ceaeb] <==
	W0924 00:27:45.111890       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0924 00:27:45.111943       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 00:27:45.133194       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0924 00:27:45.133305       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 00:27:45.142396       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0924 00:27:45.142486       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0924 00:27:45.189928       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0924 00:27:45.190085       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0924 00:27:45.200366       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0924 00:27:45.200446       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 00:27:45.242172       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0924 00:27:45.242314       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 00:27:45.250421       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0924 00:27:45.250602       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0924 00:27:45.268574       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0924 00:27:45.268719       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 00:27:45.284142       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0924 00:27:45.284232       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0924 00:27:45.368591       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0924 00:27:45.368639       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0924 00:27:47.454737       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0924 00:32:41.010512       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0924 00:32:41.010770       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0924 00:32:41.011066       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0924 00:32:41.019560       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 24 00:37:19 multinode-246036 kubelet[2979]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 24 00:37:19 multinode-246036 kubelet[2979]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 24 00:37:19 multinode-246036 kubelet[2979]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 24 00:37:19 multinode-246036 kubelet[2979]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 24 00:37:19 multinode-246036 kubelet[2979]: E0924 00:37:19.768330    2979 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727138239767847679,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:37:19 multinode-246036 kubelet[2979]: E0924 00:37:19.768359    2979 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727138239767847679,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:37:29 multinode-246036 kubelet[2979]: E0924 00:37:29.771746    2979 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727138249770946013,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:37:29 multinode-246036 kubelet[2979]: E0924 00:37:29.772019    2979 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727138249770946013,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:37:39 multinode-246036 kubelet[2979]: E0924 00:37:39.775194    2979 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727138259774149316,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:37:39 multinode-246036 kubelet[2979]: E0924 00:37:39.775573    2979 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727138259774149316,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:37:49 multinode-246036 kubelet[2979]: E0924 00:37:49.778198    2979 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727138269777140872,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:37:49 multinode-246036 kubelet[2979]: E0924 00:37:49.778295    2979 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727138269777140872,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:37:59 multinode-246036 kubelet[2979]: E0924 00:37:59.780347    2979 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727138279780073004,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:37:59 multinode-246036 kubelet[2979]: E0924 00:37:59.780373    2979 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727138279780073004,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:38:09 multinode-246036 kubelet[2979]: E0924 00:38:09.783381    2979 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727138289782519841,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:38:09 multinode-246036 kubelet[2979]: E0924 00:38:09.783659    2979 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727138289782519841,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:38:19 multinode-246036 kubelet[2979]: E0924 00:38:19.727822    2979 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 24 00:38:19 multinode-246036 kubelet[2979]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 24 00:38:19 multinode-246036 kubelet[2979]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 24 00:38:19 multinode-246036 kubelet[2979]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 24 00:38:19 multinode-246036 kubelet[2979]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 24 00:38:19 multinode-246036 kubelet[2979]: E0924 00:38:19.786486    2979 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727138299785603128,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:38:19 multinode-246036 kubelet[2979]: E0924 00:38:19.786537    2979 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727138299785603128,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:38:29 multinode-246036 kubelet[2979]: E0924 00:38:29.787980    2979 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727138309787547937,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 00:38:29 multinode-246036 kubelet[2979]: E0924 00:38:29.788291    2979 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727138309787547937,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
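The etcd sections in the stdout above are JSON-per-line, and the recurring "apply request took too long" warnings (requests exceeding the 100ms expected duration) can be pulled out mechanically. A minimal sketch of such a filter over lines piped on stdin, assuming only the level, ts, msg and took fields are needed; this is not part of minikube's tooling:

	package main
	
	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
		"strings"
	)
	
	// entry models only the etcd log fields this filter needs.
	type entry struct {
		Level string `json:"level"`
		TS    string `json:"ts"`
		Msg   string `json:"msg"`
		Took  string `json:"took"`
	}
	
	func main() {
		sc := bufio.NewScanner(os.Stdin)
		// etcd trace lines can be long; raise the scanner limit above the 64 KiB default.
		sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if !strings.HasPrefix(line, "{") {
				continue // skip "==> etcd [...] <==" headers and blank lines
			}
			var e entry
			if err := json.Unmarshal([]byte(line), &e); err != nil {
				continue
			}
			if e.Level == "warn" && strings.Contains(e.Msg, "apply request took too long") {
				fmt.Printf("%s  took=%s\n", e.TS, e.Took)
			}
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}

Piping the etcd blocks above through it lists just the timestamps and "took" durations of the slow requests.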
** stderr ** 
	E0924 00:38:30.666310   46240 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19696-7623/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-246036 -n multinode-246036
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-246036 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (144.60s)
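The "bufio.Scanner: token too long" error in the stderr block above comes from the post-mortem log helper (logs.go:258) reading lastStart.txt line by line, not from the test assertion itself: Go's bufio.Scanner gives up on any line longer than its token limit, 64 KiB by default. A minimal sketch of that failure mode and the usual workaround, with readLines as a hypothetical stand-in for the helper:

	package main
	
	import (
		"bufio"
		"fmt"
		"os"
	)
	
	// readLines is a hypothetical stand-in for the report's log reader.
	// With the default Scanner settings, any line longer than
	// bufio.MaxScanTokenSize (64 KiB) makes Scan() stop and Err() return
	// "bufio.Scanner: token too long" -- the error shown in the stderr above.
	func readLines(path string) ([]string, error) {
		f, err := os.Open(path)
		if err != nil {
			return nil, err
		}
		defer f.Close()
	
		sc := bufio.NewScanner(f)
		// The usual fix: allow a larger maximum token size (here 10 MiB)
		// so very long log lines still fit in a single token.
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
	
		var lines []string
		for sc.Scan() {
			lines = append(lines, sc.Text())
		}
		return lines, sc.Err()
	}
	
	func main() {
		lines, err := readLines("lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println(len(lines), "lines")
	}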

                                                
                                    
TestPreload (184.21s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-660563 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0924 00:43:38.361793   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/functional-666615/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-660563 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m40.074470712s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-660563 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-660563 image pull gcr.io/k8s-minikube/busybox: (4.268571502s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-660563
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-660563: (6.623449768s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-660563 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-660563 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m10.434194673s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-660563 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:629: *** TestPreload FAILED at 2024-09-24 00:45:18.676084678 +0000 UTC m=+4060.107167138
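The failed assertion above: after the stop and second start, the profile's image list (the stdout block above) contains only the images restored from the v1.24.4 preload plus storage-provisioner and kindnetd, while gcr.io/k8s-minikube/busybox, pulled before the stop, is missing. A minimal sketch of that kind of post-restart check, not minikube's actual test code, reusing the profile name and binary path from the log above:

	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)
	
	func main() {
		const profile = "test-preload-660563"
		const image = "gcr.io/k8s-minikube/busybox"
	
		// List the images known to the profile after the restart, the same
		// command the test runs ("minikube -p <profile> image list").
		out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "image", "list").CombinedOutput()
		if err != nil {
			fmt.Fprintf(os.Stderr, "image list failed: %v\n%s", err, out)
			os.Exit(1)
		}
	
		// The check that fails in this report: the image pulled before
		// "minikube stop" should still be present after the second start.
		if !strings.Contains(string(out), image) {
			fmt.Fprintf(os.Stderr, "expected %s in image list, got:\n%s", image, out)
			os.Exit(1)
		}
		fmt.Println("busybox image survived the stop/start cycle")
	}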
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-660563 -n test-preload-660563
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-660563 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-660563 logs -n 25: (1.03628253s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-246036 ssh -n                                                                 | multinode-246036     | jenkins | v1.34.0 | 24 Sep 24 00:29 UTC | 24 Sep 24 00:29 UTC |
	|         | multinode-246036-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-246036 ssh -n multinode-246036 sudo cat                                       | multinode-246036     | jenkins | v1.34.0 | 24 Sep 24 00:29 UTC | 24 Sep 24 00:29 UTC |
	|         | /home/docker/cp-test_multinode-246036-m03_multinode-246036.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-246036 cp multinode-246036-m03:/home/docker/cp-test.txt                       | multinode-246036     | jenkins | v1.34.0 | 24 Sep 24 00:29 UTC | 24 Sep 24 00:29 UTC |
	|         | multinode-246036-m02:/home/docker/cp-test_multinode-246036-m03_multinode-246036-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-246036 ssh -n                                                                 | multinode-246036     | jenkins | v1.34.0 | 24 Sep 24 00:29 UTC | 24 Sep 24 00:29 UTC |
	|         | multinode-246036-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-246036 ssh -n multinode-246036-m02 sudo cat                                   | multinode-246036     | jenkins | v1.34.0 | 24 Sep 24 00:29 UTC | 24 Sep 24 00:29 UTC |
	|         | /home/docker/cp-test_multinode-246036-m03_multinode-246036-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-246036 node stop m03                                                          | multinode-246036     | jenkins | v1.34.0 | 24 Sep 24 00:29 UTC | 24 Sep 24 00:29 UTC |
	| node    | multinode-246036 node start                                                             | multinode-246036     | jenkins | v1.34.0 | 24 Sep 24 00:29 UTC | 24 Sep 24 00:30 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-246036                                                                | multinode-246036     | jenkins | v1.34.0 | 24 Sep 24 00:30 UTC |                     |
	| stop    | -p multinode-246036                                                                     | multinode-246036     | jenkins | v1.34.0 | 24 Sep 24 00:30 UTC |                     |
	| start   | -p multinode-246036                                                                     | multinode-246036     | jenkins | v1.34.0 | 24 Sep 24 00:32 UTC | 24 Sep 24 00:36 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-246036                                                                | multinode-246036     | jenkins | v1.34.0 | 24 Sep 24 00:36 UTC |                     |
	| node    | multinode-246036 node delete                                                            | multinode-246036     | jenkins | v1.34.0 | 24 Sep 24 00:36 UTC | 24 Sep 24 00:36 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-246036 stop                                                                   | multinode-246036     | jenkins | v1.34.0 | 24 Sep 24 00:36 UTC |                     |
	| start   | -p multinode-246036                                                                     | multinode-246036     | jenkins | v1.34.0 | 24 Sep 24 00:38 UTC | 24 Sep 24 00:41 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-246036                                                                | multinode-246036     | jenkins | v1.34.0 | 24 Sep 24 00:41 UTC |                     |
	| start   | -p multinode-246036-m02                                                                 | multinode-246036-m02 | jenkins | v1.34.0 | 24 Sep 24 00:41 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-246036-m03                                                                 | multinode-246036-m03 | jenkins | v1.34.0 | 24 Sep 24 00:41 UTC | 24 Sep 24 00:42 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-246036                                                                 | multinode-246036     | jenkins | v1.34.0 | 24 Sep 24 00:42 UTC |                     |
	| delete  | -p multinode-246036-m03                                                                 | multinode-246036-m03 | jenkins | v1.34.0 | 24 Sep 24 00:42 UTC | 24 Sep 24 00:42 UTC |
	| delete  | -p multinode-246036                                                                     | multinode-246036     | jenkins | v1.34.0 | 24 Sep 24 00:42 UTC | 24 Sep 24 00:42 UTC |
	| start   | -p test-preload-660563                                                                  | test-preload-660563  | jenkins | v1.34.0 | 24 Sep 24 00:42 UTC | 24 Sep 24 00:43 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-660563 image pull                                                          | test-preload-660563  | jenkins | v1.34.0 | 24 Sep 24 00:43 UTC | 24 Sep 24 00:44 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-660563                                                                  | test-preload-660563  | jenkins | v1.34.0 | 24 Sep 24 00:44 UTC | 24 Sep 24 00:44 UTC |
	| start   | -p test-preload-660563                                                                  | test-preload-660563  | jenkins | v1.34.0 | 24 Sep 24 00:44 UTC | 24 Sep 24 00:45 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-660563 image list                                                          | test-preload-660563  | jenkins | v1.34.0 | 24 Sep 24 00:45 UTC | 24 Sep 24 00:45 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 00:44:08
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 00:44:08.072908   48595 out.go:345] Setting OutFile to fd 1 ...
	I0924 00:44:08.073027   48595 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:44:08.073038   48595 out.go:358] Setting ErrFile to fd 2...
	I0924 00:44:08.073044   48595 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:44:08.073253   48595 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7623/.minikube/bin
	I0924 00:44:08.073841   48595 out.go:352] Setting JSON to false
	I0924 00:44:08.074756   48595 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5192,"bootTime":1727133456,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0924 00:44:08.074854   48595 start.go:139] virtualization: kvm guest
	I0924 00:44:08.077333   48595 out.go:177] * [test-preload-660563] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0924 00:44:08.078824   48595 notify.go:220] Checking for updates...
	I0924 00:44:08.078848   48595 out.go:177]   - MINIKUBE_LOCATION=19696
	I0924 00:44:08.080608   48595 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 00:44:08.082041   48595 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 00:44:08.083349   48595 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7623/.minikube
	I0924 00:44:08.084698   48595 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0924 00:44:08.085822   48595 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 00:44:08.087304   48595 config.go:182] Loaded profile config "test-preload-660563": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0924 00:44:08.087722   48595 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:44:08.087795   48595 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:44:08.103021   48595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38695
	I0924 00:44:08.103587   48595 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:44:08.104192   48595 main.go:141] libmachine: Using API Version  1
	I0924 00:44:08.104226   48595 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:44:08.104660   48595 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:44:08.104880   48595 main.go:141] libmachine: (test-preload-660563) Calling .DriverName
	I0924 00:44:08.106743   48595 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0924 00:44:08.107963   48595 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 00:44:08.108320   48595 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:44:08.108386   48595 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:44:08.123592   48595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44839
	I0924 00:44:08.124108   48595 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:44:08.124658   48595 main.go:141] libmachine: Using API Version  1
	I0924 00:44:08.124685   48595 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:44:08.124993   48595 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:44:08.125168   48595 main.go:141] libmachine: (test-preload-660563) Calling .DriverName
	I0924 00:44:08.161074   48595 out.go:177] * Using the kvm2 driver based on existing profile
	I0924 00:44:08.162480   48595 start.go:297] selected driver: kvm2
	I0924 00:44:08.162500   48595 start.go:901] validating driver "kvm2" against &{Name:test-preload-660563 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-660563 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 00:44:08.162608   48595 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 00:44:08.163292   48595 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 00:44:08.163396   48595 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19696-7623/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0924 00:44:08.179188   48595 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0924 00:44:08.179588   48595 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 00:44:08.179619   48595 cni.go:84] Creating CNI manager for ""
	I0924 00:44:08.179677   48595 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 00:44:08.179743   48595 start.go:340] cluster config:
	{Name:test-preload-660563 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-660563 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 00:44:08.179854   48595 iso.go:125] acquiring lock: {Name:mk8a983e49e920eac32e0f79c34143c3d478115e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 00:44:08.181614   48595 out.go:177] * Starting "test-preload-660563" primary control-plane node in "test-preload-660563" cluster
	I0924 00:44:08.182728   48595 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0924 00:44:08.283914   48595 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0924 00:44:08.283960   48595 cache.go:56] Caching tarball of preloaded images
	I0924 00:44:08.284126   48595 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0924 00:44:08.286176   48595 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0924 00:44:08.287640   48595 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0924 00:44:08.389779   48595 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0924 00:44:22.133373   48595 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0924 00:44:22.133487   48595 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0924 00:44:22.978341   48595 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
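	(For reference, the preload tarball fetched and verified in the lines above can also be downloaded and checked by hand; the URL and md5 come from the download line above, and the local filename is only illustrative:
	  curl -fLo preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4"
	  echo "b2ee0ab83ed99f9e7ff71cb0cf27e8f9  preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4" | md5sum -c -
	A checksum mismatch here would point at a corrupted preload cache rather than the image-retention problem this test reports.)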
	I0924 00:44:22.978477   48595 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/test-preload-660563/config.json ...
	I0924 00:44:22.978719   48595 start.go:360] acquireMachinesLock for test-preload-660563: {Name:mkdd0eb053efc5a47fd01c8410d7f603dccb8d0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 00:44:22.978787   48595 start.go:364] duration metric: took 45.437µs to acquireMachinesLock for "test-preload-660563"
	I0924 00:44:22.978808   48595 start.go:96] Skipping create...Using existing machine configuration
	I0924 00:44:22.978819   48595 fix.go:54] fixHost starting: 
	I0924 00:44:22.979117   48595 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:44:22.979160   48595 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:44:22.994320   48595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35477
	I0924 00:44:22.994829   48595 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:44:22.995357   48595 main.go:141] libmachine: Using API Version  1
	I0924 00:44:22.995388   48595 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:44:22.995729   48595 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:44:22.995965   48595 main.go:141] libmachine: (test-preload-660563) Calling .DriverName
	I0924 00:44:22.996157   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetState
	I0924 00:44:22.998096   48595 fix.go:112] recreateIfNeeded on test-preload-660563: state=Stopped err=<nil>
	I0924 00:44:22.998128   48595 main.go:141] libmachine: (test-preload-660563) Calling .DriverName
	W0924 00:44:22.998292   48595 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 00:44:23.000417   48595 out.go:177] * Restarting existing kvm2 VM for "test-preload-660563" ...
	I0924 00:44:23.001720   48595 main.go:141] libmachine: (test-preload-660563) Calling .Start
	I0924 00:44:23.001922   48595 main.go:141] libmachine: (test-preload-660563) Ensuring networks are active...
	I0924 00:44:23.002843   48595 main.go:141] libmachine: (test-preload-660563) Ensuring network default is active
	I0924 00:44:23.003196   48595 main.go:141] libmachine: (test-preload-660563) Ensuring network mk-test-preload-660563 is active
	I0924 00:44:23.003748   48595 main.go:141] libmachine: (test-preload-660563) Getting domain xml...
	I0924 00:44:23.004573   48595 main.go:141] libmachine: (test-preload-660563) Creating domain...
	I0924 00:44:24.242445   48595 main.go:141] libmachine: (test-preload-660563) Waiting to get IP...
	I0924 00:44:24.243468   48595 main.go:141] libmachine: (test-preload-660563) DBG | domain test-preload-660563 has defined MAC address 52:54:00:d6:9d:ad in network mk-test-preload-660563
	I0924 00:44:24.243844   48595 main.go:141] libmachine: (test-preload-660563) DBG | unable to find current IP address of domain test-preload-660563 in network mk-test-preload-660563
	I0924 00:44:24.243907   48595 main.go:141] libmachine: (test-preload-660563) DBG | I0924 00:44:24.243828   48680 retry.go:31] will retry after 253.95162ms: waiting for machine to come up
	I0924 00:44:24.499489   48595 main.go:141] libmachine: (test-preload-660563) DBG | domain test-preload-660563 has defined MAC address 52:54:00:d6:9d:ad in network mk-test-preload-660563
	I0924 00:44:24.499972   48595 main.go:141] libmachine: (test-preload-660563) DBG | unable to find current IP address of domain test-preload-660563 in network mk-test-preload-660563
	I0924 00:44:24.499999   48595 main.go:141] libmachine: (test-preload-660563) DBG | I0924 00:44:24.499919   48680 retry.go:31] will retry after 293.667441ms: waiting for machine to come up
	I0924 00:44:24.795494   48595 main.go:141] libmachine: (test-preload-660563) DBG | domain test-preload-660563 has defined MAC address 52:54:00:d6:9d:ad in network mk-test-preload-660563
	I0924 00:44:24.796004   48595 main.go:141] libmachine: (test-preload-660563) DBG | unable to find current IP address of domain test-preload-660563 in network mk-test-preload-660563
	I0924 00:44:24.796027   48595 main.go:141] libmachine: (test-preload-660563) DBG | I0924 00:44:24.795962   48680 retry.go:31] will retry after 461.285463ms: waiting for machine to come up
	I0924 00:44:25.258588   48595 main.go:141] libmachine: (test-preload-660563) DBG | domain test-preload-660563 has defined MAC address 52:54:00:d6:9d:ad in network mk-test-preload-660563
	I0924 00:44:25.259090   48595 main.go:141] libmachine: (test-preload-660563) DBG | unable to find current IP address of domain test-preload-660563 in network mk-test-preload-660563
	I0924 00:44:25.259116   48595 main.go:141] libmachine: (test-preload-660563) DBG | I0924 00:44:25.259043   48680 retry.go:31] will retry after 601.713689ms: waiting for machine to come up
	I0924 00:44:25.862631   48595 main.go:141] libmachine: (test-preload-660563) DBG | domain test-preload-660563 has defined MAC address 52:54:00:d6:9d:ad in network mk-test-preload-660563
	I0924 00:44:25.863131   48595 main.go:141] libmachine: (test-preload-660563) DBG | unable to find current IP address of domain test-preload-660563 in network mk-test-preload-660563
	I0924 00:44:25.863161   48595 main.go:141] libmachine: (test-preload-660563) DBG | I0924 00:44:25.863054   48680 retry.go:31] will retry after 599.523487ms: waiting for machine to come up
	I0924 00:44:26.463858   48595 main.go:141] libmachine: (test-preload-660563) DBG | domain test-preload-660563 has defined MAC address 52:54:00:d6:9d:ad in network mk-test-preload-660563
	I0924 00:44:26.464311   48595 main.go:141] libmachine: (test-preload-660563) DBG | unable to find current IP address of domain test-preload-660563 in network mk-test-preload-660563
	I0924 00:44:26.464375   48595 main.go:141] libmachine: (test-preload-660563) DBG | I0924 00:44:26.464261   48680 retry.go:31] will retry after 727.402943ms: waiting for machine to come up
	I0924 00:44:27.193276   48595 main.go:141] libmachine: (test-preload-660563) DBG | domain test-preload-660563 has defined MAC address 52:54:00:d6:9d:ad in network mk-test-preload-660563
	I0924 00:44:27.193600   48595 main.go:141] libmachine: (test-preload-660563) DBG | unable to find current IP address of domain test-preload-660563 in network mk-test-preload-660563
	I0924 00:44:27.193628   48595 main.go:141] libmachine: (test-preload-660563) DBG | I0924 00:44:27.193553   48680 retry.go:31] will retry after 775.050139ms: waiting for machine to come up
	I0924 00:44:27.970397   48595 main.go:141] libmachine: (test-preload-660563) DBG | domain test-preload-660563 has defined MAC address 52:54:00:d6:9d:ad in network mk-test-preload-660563
	I0924 00:44:27.970884   48595 main.go:141] libmachine: (test-preload-660563) DBG | unable to find current IP address of domain test-preload-660563 in network mk-test-preload-660563
	I0924 00:44:27.970910   48595 main.go:141] libmachine: (test-preload-660563) DBG | I0924 00:44:27.970838   48680 retry.go:31] will retry after 1.468352255s: waiting for machine to come up
	I0924 00:44:29.440908   48595 main.go:141] libmachine: (test-preload-660563) DBG | domain test-preload-660563 has defined MAC address 52:54:00:d6:9d:ad in network mk-test-preload-660563
	I0924 00:44:29.441410   48595 main.go:141] libmachine: (test-preload-660563) DBG | unable to find current IP address of domain test-preload-660563 in network mk-test-preload-660563
	I0924 00:44:29.441433   48595 main.go:141] libmachine: (test-preload-660563) DBG | I0924 00:44:29.441359   48680 retry.go:31] will retry after 1.316067381s: waiting for machine to come up
	I0924 00:44:30.760078   48595 main.go:141] libmachine: (test-preload-660563) DBG | domain test-preload-660563 has defined MAC address 52:54:00:d6:9d:ad in network mk-test-preload-660563
	I0924 00:44:30.760543   48595 main.go:141] libmachine: (test-preload-660563) DBG | unable to find current IP address of domain test-preload-660563 in network mk-test-preload-660563
	I0924 00:44:30.760572   48595 main.go:141] libmachine: (test-preload-660563) DBG | I0924 00:44:30.760492   48680 retry.go:31] will retry after 2.028577443s: waiting for machine to come up
	I0924 00:44:32.791719   48595 main.go:141] libmachine: (test-preload-660563) DBG | domain test-preload-660563 has defined MAC address 52:54:00:d6:9d:ad in network mk-test-preload-660563
	I0924 00:44:32.792151   48595 main.go:141] libmachine: (test-preload-660563) DBG | unable to find current IP address of domain test-preload-660563 in network mk-test-preload-660563
	I0924 00:44:32.792176   48595 main.go:141] libmachine: (test-preload-660563) DBG | I0924 00:44:32.792093   48680 retry.go:31] will retry after 2.886097391s: waiting for machine to come up
	I0924 00:44:35.681689   48595 main.go:141] libmachine: (test-preload-660563) DBG | domain test-preload-660563 has defined MAC address 52:54:00:d6:9d:ad in network mk-test-preload-660563
	I0924 00:44:35.682129   48595 main.go:141] libmachine: (test-preload-660563) DBG | unable to find current IP address of domain test-preload-660563 in network mk-test-preload-660563
	I0924 00:44:35.682152   48595 main.go:141] libmachine: (test-preload-660563) DBG | I0924 00:44:35.682115   48680 retry.go:31] will retry after 2.272364645s: waiting for machine to come up
	I0924 00:44:37.957527   48595 main.go:141] libmachine: (test-preload-660563) DBG | domain test-preload-660563 has defined MAC address 52:54:00:d6:9d:ad in network mk-test-preload-660563
	I0924 00:44:37.957823   48595 main.go:141] libmachine: (test-preload-660563) DBG | unable to find current IP address of domain test-preload-660563 in network mk-test-preload-660563
	I0924 00:44:37.957852   48595 main.go:141] libmachine: (test-preload-660563) DBG | I0924 00:44:37.957786   48680 retry.go:31] will retry after 3.659015988s: waiting for machine to come up
	I0924 00:44:41.620904   48595 main.go:141] libmachine: (test-preload-660563) DBG | domain test-preload-660563 has defined MAC address 52:54:00:d6:9d:ad in network mk-test-preload-660563
	I0924 00:44:41.621403   48595 main.go:141] libmachine: (test-preload-660563) Found IP for machine: 192.168.39.238
	I0924 00:44:41.621419   48595 main.go:141] libmachine: (test-preload-660563) Reserving static IP address...
	I0924 00:44:41.621434   48595 main.go:141] libmachine: (test-preload-660563) DBG | domain test-preload-660563 has current primary IP address 192.168.39.238 and MAC address 52:54:00:d6:9d:ad in network mk-test-preload-660563
	I0924 00:44:41.621936   48595 main.go:141] libmachine: (test-preload-660563) DBG | found host DHCP lease matching {name: "test-preload-660563", mac: "52:54:00:d6:9d:ad", ip: "192.168.39.238"} in network mk-test-preload-660563: {Iface:virbr1 ExpiryTime:2024-09-24 01:44:33 +0000 UTC Type:0 Mac:52:54:00:d6:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:test-preload-660563 Clientid:01:52:54:00:d6:9d:ad}
	I0924 00:44:41.621956   48595 main.go:141] libmachine: (test-preload-660563) DBG | skip adding static IP to network mk-test-preload-660563 - found existing host DHCP lease matching {name: "test-preload-660563", mac: "52:54:00:d6:9d:ad", ip: "192.168.39.238"}
	I0924 00:44:41.621966   48595 main.go:141] libmachine: (test-preload-660563) Reserved static IP address: 192.168.39.238
	I0924 00:44:41.621975   48595 main.go:141] libmachine: (test-preload-660563) Waiting for SSH to be available...
	I0924 00:44:41.621983   48595 main.go:141] libmachine: (test-preload-660563) DBG | Getting to WaitForSSH function...
	I0924 00:44:41.623898   48595 main.go:141] libmachine: (test-preload-660563) DBG | domain test-preload-660563 has defined MAC address 52:54:00:d6:9d:ad in network mk-test-preload-660563
	I0924 00:44:41.624221   48595 main.go:141] libmachine: (test-preload-660563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:9d:ad", ip: ""} in network mk-test-preload-660563: {Iface:virbr1 ExpiryTime:2024-09-24 01:44:33 +0000 UTC Type:0 Mac:52:54:00:d6:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:test-preload-660563 Clientid:01:52:54:00:d6:9d:ad}
	I0924 00:44:41.624255   48595 main.go:141] libmachine: (test-preload-660563) DBG | domain test-preload-660563 has defined IP address 192.168.39.238 and MAC address 52:54:00:d6:9d:ad in network mk-test-preload-660563
	I0924 00:44:41.624278   48595 main.go:141] libmachine: (test-preload-660563) DBG | Using SSH client type: external
	I0924 00:44:41.624362   48595 main.go:141] libmachine: (test-preload-660563) DBG | Using SSH private key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/test-preload-660563/id_rsa (-rw-------)
	I0924 00:44:41.624395   48595 main.go:141] libmachine: (test-preload-660563) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.238 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19696-7623/.minikube/machines/test-preload-660563/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 00:44:41.624409   48595 main.go:141] libmachine: (test-preload-660563) DBG | About to run SSH command:
	I0924 00:44:41.624418   48595 main.go:141] libmachine: (test-preload-660563) DBG | exit 0
	I0924 00:44:41.748227   48595 main.go:141] libmachine: (test-preload-660563) DBG | SSH cmd err, output: <nil>: 
	I0924 00:44:41.748617   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetConfigRaw
	I0924 00:44:41.749199   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetIP
	I0924 00:44:41.751570   48595 main.go:141] libmachine: (test-preload-660563) DBG | domain test-preload-660563 has defined MAC address 52:54:00:d6:9d:ad in network mk-test-preload-660563
	I0924 00:44:41.751956   48595 main.go:141] libmachine: (test-preload-660563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:9d:ad", ip: ""} in network mk-test-preload-660563: {Iface:virbr1 ExpiryTime:2024-09-24 01:44:33 +0000 UTC Type:0 Mac:52:54:00:d6:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:test-preload-660563 Clientid:01:52:54:00:d6:9d:ad}
	I0924 00:44:41.752004   48595 main.go:141] libmachine: (test-preload-660563) DBG | domain test-preload-660563 has defined IP address 192.168.39.238 and MAC address 52:54:00:d6:9d:ad in network mk-test-preload-660563
	I0924 00:44:41.752230   48595 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/test-preload-660563/config.json ...
	I0924 00:44:41.752482   48595 machine.go:93] provisionDockerMachine start ...
	I0924 00:44:41.752502   48595 main.go:141] libmachine: (test-preload-660563) Calling .DriverName
	I0924 00:44:41.752705   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetSSHHostname
	I0924 00:44:41.754894   48595 main.go:141] libmachine: (test-preload-660563) DBG | domain test-preload-660563 has defined MAC address 52:54:00:d6:9d:ad in network mk-test-preload-660563
	I0924 00:44:41.755229   48595 main.go:141] libmachine: (test-preload-660563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:9d:ad", ip: ""} in network mk-test-preload-660563: {Iface:virbr1 ExpiryTime:2024-09-24 01:44:33 +0000 UTC Type:0 Mac:52:54:00:d6:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:test-preload-660563 Clientid:01:52:54:00:d6:9d:ad}
	I0924 00:44:41.755269   48595 main.go:141] libmachine: (test-preload-660563) DBG | domain test-preload-660563 has defined IP address 192.168.39.238 and MAC address 52:54:00:d6:9d:ad in network mk-test-preload-660563
	I0924 00:44:41.755471   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetSSHPort
	I0924 00:44:41.755648   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetSSHKeyPath
	I0924 00:44:41.755936   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetSSHKeyPath
	I0924 00:44:41.756077   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetSSHUsername
	I0924 00:44:41.756229   48595 main.go:141] libmachine: Using SSH client type: native
	I0924 00:44:41.756490   48595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0924 00:44:41.756504   48595 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 00:44:41.860440   48595 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 00:44:41.860474   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetMachineName
	I0924 00:44:41.860733   48595 buildroot.go:166] provisioning hostname "test-preload-660563"
	I0924 00:44:41.860764   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetMachineName
	I0924 00:44:41.860990   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetSSHHostname
	I0924 00:44:41.863528   48595 main.go:141] libmachine: (test-preload-660563) DBG | domain test-preload-660563 has defined MAC address 52:54:00:d6:9d:ad in network mk-test-preload-660563
	I0924 00:44:41.863891   48595 main.go:141] libmachine: (test-preload-660563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:9d:ad", ip: ""} in network mk-test-preload-660563: {Iface:virbr1 ExpiryTime:2024-09-24 01:44:33 +0000 UTC Type:0 Mac:52:54:00:d6:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:test-preload-660563 Clientid:01:52:54:00:d6:9d:ad}
	I0924 00:44:41.863920   48595 main.go:141] libmachine: (test-preload-660563) DBG | domain test-preload-660563 has defined IP address 192.168.39.238 and MAC address 52:54:00:d6:9d:ad in network mk-test-preload-660563
	I0924 00:44:41.864038   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetSSHPort
	I0924 00:44:41.864212   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetSSHKeyPath
	I0924 00:44:41.864384   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetSSHKeyPath
	I0924 00:44:41.864521   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetSSHUsername
	I0924 00:44:41.864662   48595 main.go:141] libmachine: Using SSH client type: native
	I0924 00:44:41.864869   48595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0924 00:44:41.864882   48595 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-660563 && echo "test-preload-660563" | sudo tee /etc/hostname
	I0924 00:44:41.987141   48595 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-660563
	
	I0924 00:44:41.987178   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetSSHHostname
	I0924 00:44:41.990443   48595 main.go:141] libmachine: (test-preload-660563) DBG | domain test-preload-660563 has defined MAC address 52:54:00:d6:9d:ad in network mk-test-preload-660563
	I0924 00:44:41.990749   48595 main.go:141] libmachine: (test-preload-660563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:9d:ad", ip: ""} in network mk-test-preload-660563: {Iface:virbr1 ExpiryTime:2024-09-24 01:44:33 +0000 UTC Type:0 Mac:52:54:00:d6:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:test-preload-660563 Clientid:01:52:54:00:d6:9d:ad}
	I0924 00:44:41.990781   48595 main.go:141] libmachine: (test-preload-660563) DBG | domain test-preload-660563 has defined IP address 192.168.39.238 and MAC address 52:54:00:d6:9d:ad in network mk-test-preload-660563
	I0924 00:44:41.990930   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetSSHPort
	I0924 00:44:41.991158   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetSSHKeyPath
	I0924 00:44:41.991356   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetSSHKeyPath
	I0924 00:44:41.991499   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetSSHUsername
	I0924 00:44:41.991686   48595 main.go:141] libmachine: Using SSH client type: native
	I0924 00:44:41.991908   48595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0924 00:44:41.991933   48595 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-660563' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-660563/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-660563' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 00:44:42.108829   48595 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 00:44:42.108864   48595 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19696-7623/.minikube CaCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19696-7623/.minikube}
	I0924 00:44:42.108915   48595 buildroot.go:174] setting up certificates
	I0924 00:44:42.108928   48595 provision.go:84] configureAuth start
	I0924 00:44:42.108946   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetMachineName
	I0924 00:44:42.109242   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetIP
	I0924 00:44:42.111946   48595 main.go:141] libmachine: (test-preload-660563) DBG | domain test-preload-660563 has defined MAC address 52:54:00:d6:9d:ad in network mk-test-preload-660563
	I0924 00:44:42.112323   48595 main.go:141] libmachine: (test-preload-660563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:9d:ad", ip: ""} in network mk-test-preload-660563: {Iface:virbr1 ExpiryTime:2024-09-24 01:44:33 +0000 UTC Type:0 Mac:52:54:00:d6:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:test-preload-660563 Clientid:01:52:54:00:d6:9d:ad}
	I0924 00:44:42.112380   48595 main.go:141] libmachine: (test-preload-660563) DBG | domain test-preload-660563 has defined IP address 192.168.39.238 and MAC address 52:54:00:d6:9d:ad in network mk-test-preload-660563
	I0924 00:44:42.112491   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetSSHHostname
	I0924 00:44:42.114506   48595 main.go:141] libmachine: (test-preload-660563) DBG | domain test-preload-660563 has defined MAC address 52:54:00:d6:9d:ad in network mk-test-preload-660563
	I0924 00:44:42.114854   48595 main.go:141] libmachine: (test-preload-660563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:9d:ad", ip: ""} in network mk-test-preload-660563: {Iface:virbr1 ExpiryTime:2024-09-24 01:44:33 +0000 UTC Type:0 Mac:52:54:00:d6:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:test-preload-660563 Clientid:01:52:54:00:d6:9d:ad}
	I0924 00:44:42.114882   48595 main.go:141] libmachine: (test-preload-660563) DBG | domain test-preload-660563 has defined IP address 192.168.39.238 and MAC address 52:54:00:d6:9d:ad in network mk-test-preload-660563
	I0924 00:44:42.114972   48595 provision.go:143] copyHostCerts
	I0924 00:44:42.115022   48595 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem, removing ...
	I0924 00:44:42.115038   48595 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0924 00:44:42.115115   48595 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem (1123 bytes)
	I0924 00:44:42.115200   48595 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem, removing ...
	I0924 00:44:42.115207   48595 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0924 00:44:42.115230   48595 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem (1679 bytes)
	I0924 00:44:42.115285   48595 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem, removing ...
	I0924 00:44:42.115293   48595 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0924 00:44:42.115313   48595 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem (1082 bytes)
	I0924 00:44:42.115413   48595 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem org=jenkins.test-preload-660563 san=[127.0.0.1 192.168.39.238 localhost minikube test-preload-660563]
	I0924 00:44:42.449812   48595 provision.go:177] copyRemoteCerts
	I0924 00:44:42.449878   48595 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 00:44:42.449902   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetSSHHostname
	I0924 00:44:42.452978   48595 main.go:141] libmachine: (test-preload-660563) DBG | domain test-preload-660563 has defined MAC address 52:54:00:d6:9d:ad in network mk-test-preload-660563
	I0924 00:44:42.453335   48595 main.go:141] libmachine: (test-preload-660563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:9d:ad", ip: ""} in network mk-test-preload-660563: {Iface:virbr1 ExpiryTime:2024-09-24 01:44:33 +0000 UTC Type:0 Mac:52:54:00:d6:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:test-preload-660563 Clientid:01:52:54:00:d6:9d:ad}
	I0924 00:44:42.453368   48595 main.go:141] libmachine: (test-preload-660563) DBG | domain test-preload-660563 has defined IP address 192.168.39.238 and MAC address 52:54:00:d6:9d:ad in network mk-test-preload-660563
	I0924 00:44:42.453521   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetSSHPort
	I0924 00:44:42.453802   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetSSHKeyPath
	I0924 00:44:42.453976   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetSSHUsername
	I0924 00:44:42.454151   48595 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/test-preload-660563/id_rsa Username:docker}
	I0924 00:44:42.538819   48595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0924 00:44:42.565865   48595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0924 00:44:42.592638   48595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0924 00:44:42.618545   48595 provision.go:87] duration metric: took 509.602313ms to configureAuth
	I0924 00:44:42.618573   48595 buildroot.go:189] setting minikube options for container-runtime
	I0924 00:44:42.618731   48595 config.go:182] Loaded profile config "test-preload-660563": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0924 00:44:42.618792   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetSSHHostname
	I0924 00:44:42.621393   48595 main.go:141] libmachine: (test-preload-660563) DBG | domain test-preload-660563 has defined MAC address 52:54:00:d6:9d:ad in network mk-test-preload-660563
	I0924 00:44:42.621776   48595 main.go:141] libmachine: (test-preload-660563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:9d:ad", ip: ""} in network mk-test-preload-660563: {Iface:virbr1 ExpiryTime:2024-09-24 01:44:33 +0000 UTC Type:0 Mac:52:54:00:d6:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:test-preload-660563 Clientid:01:52:54:00:d6:9d:ad}
	I0924 00:44:42.621802   48595 main.go:141] libmachine: (test-preload-660563) DBG | domain test-preload-660563 has defined IP address 192.168.39.238 and MAC address 52:54:00:d6:9d:ad in network mk-test-preload-660563
	I0924 00:44:42.621961   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetSSHPort
	I0924 00:44:42.622151   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetSSHKeyPath
	I0924 00:44:42.622344   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetSSHKeyPath
	I0924 00:44:42.622486   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetSSHUsername
	I0924 00:44:42.622650   48595 main.go:141] libmachine: Using SSH client type: native
	I0924 00:44:42.622803   48595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0924 00:44:42.622818   48595 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 00:44:42.838093   48595 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 00:44:42.838129   48595 machine.go:96] duration metric: took 1.085631341s to provisionDockerMachine
	I0924 00:44:42.838149   48595 start.go:293] postStartSetup for "test-preload-660563" (driver="kvm2")
	I0924 00:44:42.838164   48595 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 00:44:42.838188   48595 main.go:141] libmachine: (test-preload-660563) Calling .DriverName
	I0924 00:44:42.838525   48595 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 00:44:42.838558   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetSSHHostname
	I0924 00:44:42.841321   48595 main.go:141] libmachine: (test-preload-660563) DBG | domain test-preload-660563 has defined MAC address 52:54:00:d6:9d:ad in network mk-test-preload-660563
	I0924 00:44:42.841684   48595 main.go:141] libmachine: (test-preload-660563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:9d:ad", ip: ""} in network mk-test-preload-660563: {Iface:virbr1 ExpiryTime:2024-09-24 01:44:33 +0000 UTC Type:0 Mac:52:54:00:d6:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:test-preload-660563 Clientid:01:52:54:00:d6:9d:ad}
	I0924 00:44:42.841707   48595 main.go:141] libmachine: (test-preload-660563) DBG | domain test-preload-660563 has defined IP address 192.168.39.238 and MAC address 52:54:00:d6:9d:ad in network mk-test-preload-660563
	I0924 00:44:42.841816   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetSSHPort
	I0924 00:44:42.841980   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetSSHKeyPath
	I0924 00:44:42.842169   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetSSHUsername
	I0924 00:44:42.842289   48595 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/test-preload-660563/id_rsa Username:docker}
	I0924 00:44:42.927233   48595 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 00:44:42.931943   48595 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 00:44:42.931974   48595 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/addons for local assets ...
	I0924 00:44:42.932039   48595 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/files for local assets ...
	I0924 00:44:42.932134   48595 filesync.go:149] local asset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> 147932.pem in /etc/ssl/certs
	I0924 00:44:42.932225   48595 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 00:44:42.942561   48595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /etc/ssl/certs/147932.pem (1708 bytes)
	I0924 00:44:42.967532   48595 start.go:296] duration metric: took 129.364649ms for postStartSetup
	I0924 00:44:42.967578   48595 fix.go:56] duration metric: took 19.988759909s for fixHost
	I0924 00:44:42.967598   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetSSHHostname
	I0924 00:44:42.970425   48595 main.go:141] libmachine: (test-preload-660563) DBG | domain test-preload-660563 has defined MAC address 52:54:00:d6:9d:ad in network mk-test-preload-660563
	I0924 00:44:42.970760   48595 main.go:141] libmachine: (test-preload-660563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:9d:ad", ip: ""} in network mk-test-preload-660563: {Iface:virbr1 ExpiryTime:2024-09-24 01:44:33 +0000 UTC Type:0 Mac:52:54:00:d6:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:test-preload-660563 Clientid:01:52:54:00:d6:9d:ad}
	I0924 00:44:42.970793   48595 main.go:141] libmachine: (test-preload-660563) DBG | domain test-preload-660563 has defined IP address 192.168.39.238 and MAC address 52:54:00:d6:9d:ad in network mk-test-preload-660563
	I0924 00:44:42.971032   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetSSHPort
	I0924 00:44:42.971238   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetSSHKeyPath
	I0924 00:44:42.971449   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetSSHKeyPath
	I0924 00:44:42.971583   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetSSHUsername
	I0924 00:44:42.971727   48595 main.go:141] libmachine: Using SSH client type: native
	I0924 00:44:42.971896   48595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0924 00:44:42.971907   48595 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 00:44:43.080958   48595 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727138683.050359629
	
	I0924 00:44:43.080982   48595 fix.go:216] guest clock: 1727138683.050359629
	I0924 00:44:43.080992   48595 fix.go:229] Guest: 2024-09-24 00:44:43.050359629 +0000 UTC Remote: 2024-09-24 00:44:42.967582009 +0000 UTC m=+34.930819130 (delta=82.77762ms)
	I0924 00:44:43.081015   48595 fix.go:200] guest clock delta is within tolerance: 82.77762ms
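(Editorial note: the fix.go lines above run `date +%s.%N` on the guest and compare it against the host's clock, accepting a small drift. A simplified sketch of that comparison; the 2-second tolerance is an assumption for illustration, not the value minikube uses, and the guest timestamp is the one from the log.)

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// withinTolerance parses the guest's `date +%s.%N` output and reports the
// drift relative to the local clock.
func withinTolerance(guestEpoch string, tolerance time.Duration) (time.Duration, bool) {
	secs, err := strconv.ParseFloat(guestEpoch, 64)
	if err != nil {
		return 0, false
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	return delta, math.Abs(float64(delta)) <= float64(tolerance)
}

func main() {
	// Guest timestamp taken from the SSH command output logged above.
	delta, ok := withinTolerance("1727138683.050359629", 2*time.Second)
	fmt.Println("delta:", delta, "within tolerance:", ok)
}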
	I0924 00:44:43.081021   48595 start.go:83] releasing machines lock for "test-preload-660563", held for 20.102221317s
	I0924 00:44:43.081044   48595 main.go:141] libmachine: (test-preload-660563) Calling .DriverName
	I0924 00:44:43.081303   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetIP
	I0924 00:44:43.084127   48595 main.go:141] libmachine: (test-preload-660563) DBG | domain test-preload-660563 has defined MAC address 52:54:00:d6:9d:ad in network mk-test-preload-660563
	I0924 00:44:43.084460   48595 main.go:141] libmachine: (test-preload-660563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:9d:ad", ip: ""} in network mk-test-preload-660563: {Iface:virbr1 ExpiryTime:2024-09-24 01:44:33 +0000 UTC Type:0 Mac:52:54:00:d6:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:test-preload-660563 Clientid:01:52:54:00:d6:9d:ad}
	I0924 00:44:43.084495   48595 main.go:141] libmachine: (test-preload-660563) DBG | domain test-preload-660563 has defined IP address 192.168.39.238 and MAC address 52:54:00:d6:9d:ad in network mk-test-preload-660563
	I0924 00:44:43.084670   48595 main.go:141] libmachine: (test-preload-660563) Calling .DriverName
	I0924 00:44:43.085092   48595 main.go:141] libmachine: (test-preload-660563) Calling .DriverName
	I0924 00:44:43.085255   48595 main.go:141] libmachine: (test-preload-660563) Calling .DriverName
	I0924 00:44:43.085359   48595 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 00:44:43.085398   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetSSHHostname
	I0924 00:44:43.085439   48595 ssh_runner.go:195] Run: cat /version.json
	I0924 00:44:43.085458   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetSSHHostname
	I0924 00:44:43.091373   48595 main.go:141] libmachine: (test-preload-660563) DBG | domain test-preload-660563 has defined MAC address 52:54:00:d6:9d:ad in network mk-test-preload-660563
	I0924 00:44:43.091400   48595 main.go:141] libmachine: (test-preload-660563) DBG | domain test-preload-660563 has defined MAC address 52:54:00:d6:9d:ad in network mk-test-preload-660563
	I0924 00:44:43.091734   48595 main.go:141] libmachine: (test-preload-660563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:9d:ad", ip: ""} in network mk-test-preload-660563: {Iface:virbr1 ExpiryTime:2024-09-24 01:44:33 +0000 UTC Type:0 Mac:52:54:00:d6:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:test-preload-660563 Clientid:01:52:54:00:d6:9d:ad}
	I0924 00:44:43.091762   48595 main.go:141] libmachine: (test-preload-660563) DBG | domain test-preload-660563 has defined IP address 192.168.39.238 and MAC address 52:54:00:d6:9d:ad in network mk-test-preload-660563
	I0924 00:44:43.091788   48595 main.go:141] libmachine: (test-preload-660563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:9d:ad", ip: ""} in network mk-test-preload-660563: {Iface:virbr1 ExpiryTime:2024-09-24 01:44:33 +0000 UTC Type:0 Mac:52:54:00:d6:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:test-preload-660563 Clientid:01:52:54:00:d6:9d:ad}
	I0924 00:44:43.091804   48595 main.go:141] libmachine: (test-preload-660563) DBG | domain test-preload-660563 has defined IP address 192.168.39.238 and MAC address 52:54:00:d6:9d:ad in network mk-test-preload-660563
	I0924 00:44:43.091994   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetSSHPort
	I0924 00:44:43.092109   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetSSHPort
	I0924 00:44:43.092187   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetSSHKeyPath
	I0924 00:44:43.092248   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetSSHKeyPath
	I0924 00:44:43.092310   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetSSHUsername
	I0924 00:44:43.092388   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetSSHUsername
	I0924 00:44:43.092439   48595 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/test-preload-660563/id_rsa Username:docker}
	I0924 00:44:43.092545   48595 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/test-preload-660563/id_rsa Username:docker}
	I0924 00:44:43.204981   48595 ssh_runner.go:195] Run: systemctl --version
	I0924 00:44:43.210786   48595 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 00:44:43.350884   48595 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 00:44:43.356823   48595 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 00:44:43.356885   48595 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 00:44:43.372896   48595 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 00:44:43.372920   48595 start.go:495] detecting cgroup driver to use...
	I0924 00:44:43.372976   48595 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 00:44:43.388612   48595 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 00:44:43.402214   48595 docker.go:217] disabling cri-docker service (if available) ...
	I0924 00:44:43.402270   48595 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 00:44:43.415779   48595 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 00:44:43.429758   48595 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 00:44:43.541873   48595 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 00:44:43.691751   48595 docker.go:233] disabling docker service ...
	I0924 00:44:43.691820   48595 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 00:44:43.705850   48595 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 00:44:43.718607   48595 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 00:44:43.828435   48595 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 00:44:43.941896   48595 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 00:44:43.956463   48595 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 00:44:43.981175   48595 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0924 00:44:43.981230   48595 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:44:43.992278   48595 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 00:44:43.992361   48595 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:44:44.003826   48595 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:44:44.014744   48595 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:44:44.025896   48595 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 00:44:44.037476   48595 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:44:44.047646   48595 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:44:44.064406   48595 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
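(Editorial note: the sequence of `sed -i` commands above points cri-o at the desired pause image and cgroup manager by rewriting /etc/crio/crio.conf.d/02-crio.conf. A minimal Go sketch of the same two edits, assuming the file exists and is writable; this mirrors the sed expressions in the log rather than minikube's crio.go code.)

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|'
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.7"`))
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		log.Fatal(err)
	}
}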
	I0924 00:44:44.075514   48595 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 00:44:44.085738   48595 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 00:44:44.085800   48595 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 00:44:44.099790   48595 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 00:44:44.109477   48595 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 00:44:44.226082   48595 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 00:44:44.313607   48595 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 00:44:44.313690   48595 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 00:44:44.318675   48595 start.go:563] Will wait 60s for crictl version
	I0924 00:44:44.318742   48595 ssh_runner.go:195] Run: which crictl
	I0924 00:44:44.322311   48595 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 00:44:44.362691   48595 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 00:44:44.362775   48595 ssh_runner.go:195] Run: crio --version
	I0924 00:44:44.390715   48595 ssh_runner.go:195] Run: crio --version
	I0924 00:44:44.419362   48595 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0924 00:44:44.420953   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetIP
	I0924 00:44:44.423480   48595 main.go:141] libmachine: (test-preload-660563) DBG | domain test-preload-660563 has defined MAC address 52:54:00:d6:9d:ad in network mk-test-preload-660563
	I0924 00:44:44.423848   48595 main.go:141] libmachine: (test-preload-660563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:9d:ad", ip: ""} in network mk-test-preload-660563: {Iface:virbr1 ExpiryTime:2024-09-24 01:44:33 +0000 UTC Type:0 Mac:52:54:00:d6:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:test-preload-660563 Clientid:01:52:54:00:d6:9d:ad}
	I0924 00:44:44.423883   48595 main.go:141] libmachine: (test-preload-660563) DBG | domain test-preload-660563 has defined IP address 192.168.39.238 and MAC address 52:54:00:d6:9d:ad in network mk-test-preload-660563
	I0924 00:44:44.424079   48595 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0924 00:44:44.428714   48595 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 00:44:44.441370   48595 kubeadm.go:883] updating cluster {Name:test-preload-660563 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.24.4 ClusterName:test-preload-660563 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker
MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 00:44:44.441484   48595 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0924 00:44:44.441527   48595 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 00:44:44.476472   48595 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0924 00:44:44.476534   48595 ssh_runner.go:195] Run: which lz4
	I0924 00:44:44.480653   48595 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 00:44:44.484554   48595 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 00:44:44.484585   48595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0924 00:44:45.842426   48595 crio.go:462] duration metric: took 1.361815278s to copy over tarball
	I0924 00:44:45.842497   48595 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0924 00:44:48.196779   48595 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.354253041s)
	I0924 00:44:48.196809   48595 crio.go:469] duration metric: took 2.354352302s to extract the tarball
	I0924 00:44:48.196823   48595 ssh_runner.go:146] rm: /preloaded.tar.lz4
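(Editorial note: the preload step above copies the lz4 tarball to the guest, unpacks it into /var preserving xattrs, and removes the archive. A rough sketch of the unpack-and-cleanup half via os/exec, using the exact tar flags from the log; running the copy over SSH is omitted.)

package main

import (
	"log"
	"os"
	"os/exec"
)

func run(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s %v: %v", name, args, err)
	}
}

func main() {
	// Same command the log issues on the guest.
	run("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	// Remove the tarball once extracted, as ssh_runner.go:146 does.
	run("sudo", "rm", "-f", "/preloaded.tar.lz4")
}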
	I0924 00:44:48.237116   48595 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 00:44:48.278324   48595 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0924 00:44:48.278353   48595 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0924 00:44:48.278407   48595 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 00:44:48.278435   48595 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0924 00:44:48.278472   48595 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0924 00:44:48.278498   48595 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0924 00:44:48.278556   48595 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0924 00:44:48.278584   48595 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0924 00:44:48.278652   48595 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0924 00:44:48.278648   48595 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0924 00:44:48.279876   48595 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0924 00:44:48.279919   48595 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0924 00:44:48.279924   48595 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 00:44:48.279900   48595 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0924 00:44:48.279876   48595 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0924 00:44:48.279980   48595 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0924 00:44:48.280043   48595 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0924 00:44:48.279900   48595 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0924 00:44:48.506337   48595 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0924 00:44:48.546159   48595 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0924 00:44:48.546198   48595 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0924 00:44:48.546234   48595 ssh_runner.go:195] Run: which crictl
	I0924 00:44:48.550067   48595 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0924 00:44:48.570327   48595 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0924 00:44:48.582523   48595 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0924 00:44:48.599083   48595 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0924 00:44:48.600445   48595 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0924 00:44:48.614991   48595 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0924 00:44:48.624032   48595 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0924 00:44:48.649107   48595 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0924 00:44:48.663030   48595 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0924 00:44:48.663083   48595 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0924 00:44:48.663121   48595 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0924 00:44:48.663150   48595 ssh_runner.go:195] Run: which crictl
	I0924 00:44:48.699394   48595 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0924 00:44:48.699439   48595 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0924 00:44:48.699481   48595 ssh_runner.go:195] Run: which crictl
	I0924 00:44:48.729854   48595 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0924 00:44:48.729898   48595 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0924 00:44:48.729938   48595 ssh_runner.go:195] Run: which crictl
	I0924 00:44:48.746699   48595 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0924 00:44:48.746741   48595 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0924 00:44:48.746765   48595 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0924 00:44:48.746785   48595 ssh_runner.go:195] Run: which crictl
	I0924 00:44:48.746795   48595 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0924 00:44:48.746834   48595 ssh_runner.go:195] Run: which crictl
	I0924 00:44:48.761327   48595 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0924 00:44:48.761375   48595 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0924 00:44:48.761417   48595 ssh_runner.go:195] Run: which crictl
	I0924 00:44:48.761438   48595 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0924 00:44:48.776681   48595 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0924 00:44:48.776783   48595 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0924 00:44:48.776825   48595 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0924 00:44:48.776785   48595 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0924 00:44:48.776887   48595 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0924 00:44:48.776975   48595 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0924 00:44:48.852125   48595 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0924 00:44:48.852166   48595 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0924 00:44:48.866041   48595 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0924 00:44:48.903318   48595 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0924 00:44:48.903358   48595 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0924 00:44:48.903377   48595 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0924 00:44:48.903417   48595 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0924 00:44:48.910537   48595 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0924 00:44:48.910602   48595 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0924 00:44:48.959715   48595 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0924 00:44:48.997037   48595 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0924 00:44:48.997042   48595 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0924 00:44:49.055865   48595 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0924 00:44:49.534437   48595 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 00:44:52.211317   48595 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4: (3.307883301s)
	I0924 00:44:52.211349   48595 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0924 00:44:52.211428   48595 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4: (3.300867807s)
	I0924 00:44:52.211489   48595 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0924 00:44:52.211573   48595 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7: (3.300947792s)
	I0924 00:44:52.211635   48595 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0: (3.251891558s)
	I0924 00:44:52.211692   48595 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6: (3.214570783s)
	I0924 00:44:52.211725   48595 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0924 00:44:52.211693   48595 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0924 00:44:52.211800   48595 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4: (3.21472983s)
	I0924 00:44:52.211810   48595 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0924 00:44:52.211645   48595 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0924 00:44:52.211843   48595 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0924 00:44:52.211907   48595 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0924 00:44:52.281543   48595 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4: (3.225632358s)
	I0924 00:44:52.281603   48595 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0924 00:44:52.281618   48595 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0924 00:44:52.281674   48595 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0924 00:44:52.281703   48595 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0924 00:44:52.281713   48595 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0924 00:44:52.281723   48595 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0924 00:44:52.281554   48595 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.747088809s)
	I0924 00:44:52.281680   48595 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0924 00:44:52.281763   48595 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0924 00:44:52.281764   48595 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0924 00:44:52.281751   48595 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0924 00:44:52.281704   48595 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0924 00:44:52.281801   48595 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0924 00:44:52.290090   48595 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0924 00:44:52.631987   48595 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0924 00:44:52.632033   48595 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0924 00:44:52.632084   48595 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0924 00:44:52.632133   48595 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0924 00:44:52.632193   48595 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0924 00:44:52.632234   48595 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0924 00:44:53.074466   48595 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0924 00:44:53.074524   48595 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0924 00:44:53.074585   48595 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0924 00:44:53.815595   48595 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0924 00:44:53.815646   48595 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0924 00:44:53.815711   48595 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0924 00:44:53.962881   48595 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0924 00:44:53.962931   48595 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0924 00:44:53.962972   48595 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0924 00:44:56.209378   48595 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.24637969s)
	I0924 00:44:56.209411   48595 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0924 00:44:56.209454   48595 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0924 00:44:56.209536   48595 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0924 00:44:56.953808   48595 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0924 00:44:56.953870   48595 cache_images.go:123] Successfully loaded all cached images
	I0924 00:44:56.953877   48595 cache_images.go:92] duration metric: took 8.675509973s to LoadCachedImages
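(Editorial note: the LoadCachedImages block above checks each required image with `podman image inspect` and, when it is absent, loads the cached archive from /var/lib/minikube/images with `podman load`. A condensed sketch of that loop; the image list is abbreviated, and this is an illustration of the flow visible in the log, not cache_images.go itself.)

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// Abbreviated subset of the image -> cached-archive pairs from the log.
var images = map[string]string{
	"registry.k8s.io/kube-proxy:v1.24.4":     "/var/lib/minikube/images/kube-proxy_v1.24.4",
	"registry.k8s.io/coredns/coredns:v1.8.6": "/var/lib/minikube/images/coredns_v1.8.6",
	"registry.k8s.io/etcd:3.5.3-0":           "/var/lib/minikube/images/etcd_3.5.3-0",
}

func main() {
	for ref, archive := range images {
		// `podman image inspect` exits non-zero when the image is not present.
		if err := exec.Command("sudo", "podman", "image", "inspect", ref).Run(); err == nil {
			fmt.Println("already present:", ref)
			continue
		}
		if out, err := exec.Command("sudo", "podman", "load", "-i", archive).CombinedOutput(); err != nil {
			log.Fatalf("loading %s: %v\n%s", ref, err, out)
		}
		fmt.Println("transferred and loaded from cache:", ref)
	}
}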
	I0924 00:44:56.953888   48595 kubeadm.go:934] updating node { 192.168.39.238 8443 v1.24.4 crio true true} ...
	I0924 00:44:56.954016   48595 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-660563 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-660563 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
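(Editorial note: kubeadm.go:946 above prints the kubelet systemd drop-in that gets written to the guest. A small sketch that renders an equivalent drop-in with text/template, using the flag set, paths and values shown in the log; the template itself is illustrative, not minikube's.)

package main

import (
	"os"
	"text/template"
)

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	_ = t.Execute(os.Stdout, struct{ Version, Node, IP string }{
		Version: "v1.24.4",
		Node:    "test-preload-660563",
		IP:      "192.168.39.238",
	})
}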
	I0924 00:44:56.954111   48595 ssh_runner.go:195] Run: crio config
	I0924 00:44:57.001420   48595 cni.go:84] Creating CNI manager for ""
	I0924 00:44:57.001442   48595 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 00:44:57.001453   48595 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 00:44:57.001476   48595 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.238 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-660563 NodeName:test-preload-660563 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.238 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 00:44:57.001633   48595 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.238
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-660563"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.238
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.238"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 00:44:57.001721   48595 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0924 00:44:57.012078   48595 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 00:44:57.012156   48595 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 00:44:57.021176   48595 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0924 00:44:57.037961   48595 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 00:44:57.053787   48595 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
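(Editorial note: the generated kubeadm config shown above is a multi-document YAML file written to /var/tmp/minikube/kubeadm.yaml.new. A small sanity-check sketch, assuming gopkg.in/yaml.v3 is available, that walks each document and prints its apiVersion/kind; purely illustrative, not part of minikube.)

package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // all documents consumed
			}
			log.Fatal(err)
		}
		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
	}
}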
	I0924 00:44:57.071483   48595 ssh_runner.go:195] Run: grep 192.168.39.238	control-plane.minikube.internal$ /etc/hosts
	I0924 00:44:57.075172   48595 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.238	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
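(Editorial note: the bash one-liner above updates /etc/hosts idempotently: it drops any stale control-plane.minikube.internal line and appends the current mapping. The same idea as a Go sketch, assuming root privileges; values come from the log.)

package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const hostsPath = "/etc/hosts" // writing requires root
	const entry = "192.168.39.238\tcontrol-plane.minikube.internal"

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue // drop the stale mapping, mirroring the grep -v
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry, "")
	if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")), 0o644); err != nil {
		log.Fatal(err)
	}
}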
	I0924 00:44:57.087439   48595 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 00:44:57.202443   48595 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 00:44:57.218851   48595 certs.go:68] Setting up /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/test-preload-660563 for IP: 192.168.39.238
	I0924 00:44:57.218875   48595 certs.go:194] generating shared ca certs ...
	I0924 00:44:57.218895   48595 certs.go:226] acquiring lock for ca certs: {Name:mk91de42c259e96c17f08dd58c70c00821bd0735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:44:57.219097   48595 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key
	I0924 00:44:57.219164   48595 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key
	I0924 00:44:57.219177   48595 certs.go:256] generating profile certs ...
	I0924 00:44:57.219298   48595 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/test-preload-660563/client.key
	I0924 00:44:57.219375   48595 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/test-preload-660563/apiserver.key.b29d795b
	I0924 00:44:57.219423   48595 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/test-preload-660563/proxy-client.key
	I0924 00:44:57.219581   48595 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem (1338 bytes)
	W0924 00:44:57.219612   48595 certs.go:480] ignoring /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793_empty.pem, impossibly tiny 0 bytes
	I0924 00:44:57.219625   48595 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem (1675 bytes)
	I0924 00:44:57.219666   48595 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem (1082 bytes)
	I0924 00:44:57.219702   48595 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem (1123 bytes)
	I0924 00:44:57.219743   48595 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem (1679 bytes)
	I0924 00:44:57.219821   48595 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem (1708 bytes)
	I0924 00:44:57.220523   48595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 00:44:57.254417   48595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 00:44:57.286121   48595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 00:44:57.321623   48595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0924 00:44:57.348889   48595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/test-preload-660563/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0924 00:44:57.373886   48595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/test-preload-660563/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 00:44:57.401408   48595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/test-preload-660563/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 00:44:57.440776   48595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/test-preload-660563/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0924 00:44:57.465177   48595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 00:44:57.488374   48595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem --> /usr/share/ca-certificates/14793.pem (1338 bytes)
	I0924 00:44:57.511930   48595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /usr/share/ca-certificates/147932.pem (1708 bytes)
	I0924 00:44:57.534082   48595 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 00:44:57.549922   48595 ssh_runner.go:195] Run: openssl version
	I0924 00:44:57.555358   48595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 00:44:57.565299   48595 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 00:44:57.569670   48595 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0924 00:44:57.569754   48595 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 00:44:57.575783   48595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 00:44:57.588271   48595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14793.pem && ln -fs /usr/share/ca-certificates/14793.pem /etc/ssl/certs/14793.pem"
	I0924 00:44:57.599214   48595 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14793.pem
	I0924 00:44:57.603817   48595 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 23:55 /usr/share/ca-certificates/14793.pem
	I0924 00:44:57.603876   48595 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14793.pem
	I0924 00:44:57.609619   48595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14793.pem /etc/ssl/certs/51391683.0"
	I0924 00:44:57.619705   48595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147932.pem && ln -fs /usr/share/ca-certificates/147932.pem /etc/ssl/certs/147932.pem"
	I0924 00:44:57.629663   48595 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147932.pem
	I0924 00:44:57.633939   48595 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 23:55 /usr/share/ca-certificates/147932.pem
	I0924 00:44:57.633985   48595 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147932.pem
	I0924 00:44:57.639236   48595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147932.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 00:44:57.648908   48595 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 00:44:57.653020   48595 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 00:44:57.658725   48595 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 00:44:57.664297   48595 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 00:44:57.670014   48595 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 00:44:57.675564   48595 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 00:44:57.681013   48595 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
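(Editorial note: the `openssl x509 -checkend 86400` calls above verify that each control-plane certificate stays valid for at least another 24 hours. A hedged equivalent using the standard crypto/x509 package, checked against one of the files from the log.)

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Same condition as `openssl x509 -checkend 86400`.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h; would need regeneration")
	} else {
		fmt.Println("certificate valid beyond 24h")
	}
}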
	I0924 00:44:57.686798   48595 kubeadm.go:392] StartCluster: {Name:test-preload-660563 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
24.4 ClusterName:test-preload-660563 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 00:44:57.686877   48595 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 00:44:57.686921   48595 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 00:44:57.723367   48595 cri.go:89] found id: ""
	I0924 00:44:57.723465   48595 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 00:44:57.733144   48595 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 00:44:57.733162   48595 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 00:44:57.733204   48595 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 00:44:57.742079   48595 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 00:44:57.742484   48595 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-660563" does not appear in /home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 00:44:57.742592   48595 kubeconfig.go:62] /home/jenkins/minikube-integration/19696-7623/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-660563" cluster setting kubeconfig missing "test-preload-660563" context setting]
	I0924 00:44:57.742890   48595 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/kubeconfig: {Name:mk3b29105ccc2e600682c419fcfd45ce5d22b5fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:44:57.743440   48595 kapi.go:59] client config for test-preload-660563: &rest.Config{Host:"https://192.168.39.238:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19696-7623/.minikube/profiles/test-preload-660563/client.crt", KeyFile:"/home/jenkins/minikube-integration/19696-7623/.minikube/profiles/test-preload-660563/client.key", CAFile:"/home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0924 00:44:57.743974   48595 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 00:44:57.752733   48595 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.238
	I0924 00:44:57.752760   48595 kubeadm.go:1160] stopping kube-system containers ...
	I0924 00:44:57.752773   48595 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0924 00:44:57.752822   48595 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 00:44:57.786514   48595 cri.go:89] found id: ""
	I0924 00:44:57.786618   48595 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 00:44:57.803139   48595 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 00:44:57.813010   48595 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 00:44:57.813041   48595 kubeadm.go:157] found existing configuration files:
	
	I0924 00:44:57.813098   48595 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 00:44:57.822285   48595 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 00:44:57.822468   48595 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 00:44:57.831650   48595 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 00:44:57.840213   48595 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 00:44:57.840291   48595 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 00:44:57.849216   48595 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 00:44:57.858407   48595 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 00:44:57.858473   48595 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 00:44:57.867606   48595 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 00:44:57.876227   48595 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 00:44:57.876320   48595 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
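(Editorial note: the stale-config cleanup above greps each kubernetes conf for the expected control-plane endpoint and removes any file that is missing or points elsewhere, so kubeadm can regenerate it. A compact sketch of that loop; file list and endpoint are taken from the log, the code is illustrative.)

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or pointing at the wrong endpoint: remove it so
			// the subsequent kubeadm phases recreate it.
			_ = os.Remove(f)
			fmt.Println("removed stale config:", f)
		}
	}
}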
	I0924 00:44:57.885112   48595 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 00:44:57.894165   48595 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 00:44:57.983473   48595 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 00:44:58.578021   48595 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 00:44:58.846593   48595 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 00:44:58.920171   48595 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
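(Editorial note: the restart path above re-runs individual `kubeadm init` phases in order: certs, kubeconfig, kubelet-start, control-plane, etcd. A sketch of that sequence via os/exec, with binary path, phases and config file taken from the log; error handling is simplified.)

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	const kubeadm = "/var/lib/minikube/binaries/v1.24.4/kubeadm"
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, phase := range phases {
		args := append([]string{kubeadm}, phase...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("sudo", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("kubeadm %v failed: %v", phase, err)
		}
	}
}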
	I0924 00:44:59.004792   48595 api_server.go:52] waiting for apiserver process to appear ...
	I0924 00:44:59.004890   48595 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 00:44:59.505547   48595 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 00:45:00.005277   48595 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 00:45:00.024215   48595 api_server.go:72] duration metric: took 1.019420217s to wait for apiserver process to appear ...
	I0924 00:45:00.024244   48595 api_server.go:88] waiting for apiserver healthz status ...
	I0924 00:45:00.024269   48595 api_server.go:253] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
	I0924 00:45:00.024844   48595 api_server.go:269] stopped: https://192.168.39.238:8443/healthz: Get "https://192.168.39.238:8443/healthz": dial tcp 192.168.39.238:8443: connect: connection refused
	I0924 00:45:00.524731   48595 api_server.go:253] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
	I0924 00:45:03.689899   48595 api_server.go:279] https://192.168.39.238:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 00:45:03.689927   48595 api_server.go:103] status: https://192.168.39.238:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 00:45:03.689941   48595 api_server.go:253] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
	I0924 00:45:03.729088   48595 api_server.go:279] https://192.168.39.238:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 00:45:03.729118   48595 api_server.go:103] status: https://192.168.39.238:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 00:45:04.024467   48595 api_server.go:253] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
	I0924 00:45:04.030978   48595 api_server.go:279] https://192.168.39.238:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 00:45:04.031013   48595 api_server.go:103] status: https://192.168.39.238:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 00:45:04.524541   48595 api_server.go:253] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
	I0924 00:45:04.530163   48595 api_server.go:279] https://192.168.39.238:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 00:45:04.530200   48595 api_server.go:103] status: https://192.168.39.238:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 00:45:05.024700   48595 api_server.go:253] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
	I0924 00:45:05.031040   48595 api_server.go:279] https://192.168.39.238:8443/healthz returned 200:
	ok
	I0924 00:45:05.038965   48595 api_server.go:141] control plane version: v1.24.4
	I0924 00:45:05.038996   48595 api_server.go:131] duration metric: took 5.014744486s to wait for apiserver health ...
	I0924 00:45:05.039004   48595 cni.go:84] Creating CNI manager for ""
	I0924 00:45:05.039010   48595 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 00:45:05.041034   48595 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 00:45:05.042578   48595 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 00:45:05.058939   48595 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0924 00:45:05.078597   48595 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 00:45:05.078682   48595 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0924 00:45:05.078705   48595 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0924 00:45:05.089238   48595 system_pods.go:59] 7 kube-system pods found
	I0924 00:45:05.089279   48595 system_pods.go:61] "coredns-6d4b75cb6d-jmtpf" [cfde4b18-d29f-40fb-ba3c-b1eda7029248] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0924 00:45:05.089290   48595 system_pods.go:61] "etcd-test-preload-660563" [c2b05ef0-d864-4531-82de-a06a04a82c5b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0924 00:45:05.089297   48595 system_pods.go:61] "kube-apiserver-test-preload-660563" [ab942075-a205-461c-9549-beec259317d1] Running
	I0924 00:45:05.089302   48595 system_pods.go:61] "kube-controller-manager-test-preload-660563" [0d175056-2945-4ae6-aeee-c03b3c511eab] Running
	I0924 00:45:05.089305   48595 system_pods.go:61] "kube-proxy-x4jgx" [38108400-0645-407d-a9b3-9713c82117a4] Running
	I0924 00:45:05.089308   48595 system_pods.go:61] "kube-scheduler-test-preload-660563" [1cd22991-ed38-44c0-b7d7-73b87636f3a5] Running
	I0924 00:45:05.089312   48595 system_pods.go:61] "storage-provisioner" [e401801d-729b-45ca-94a1-89467ad83c17] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0924 00:45:05.089318   48595 system_pods.go:74] duration metric: took 10.697961ms to wait for pod list to return data ...
	I0924 00:45:05.089330   48595 node_conditions.go:102] verifying NodePressure condition ...
	I0924 00:45:05.092526   48595 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 00:45:05.092551   48595 node_conditions.go:123] node cpu capacity is 2
	I0924 00:45:05.092567   48595 node_conditions.go:105] duration metric: took 3.223073ms to run NodePressure ...
	I0924 00:45:05.092598   48595 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 00:45:05.405996   48595 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0924 00:45:05.413280   48595 kubeadm.go:739] kubelet initialised
	I0924 00:45:05.413309   48595 kubeadm.go:740] duration metric: took 7.284206ms waiting for restarted kubelet to initialise ...
	I0924 00:45:05.413319   48595 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 00:45:05.421487   48595 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-jmtpf" in "kube-system" namespace to be "Ready" ...
	I0924 00:45:05.431742   48595 pod_ready.go:98] node "test-preload-660563" hosting pod "coredns-6d4b75cb6d-jmtpf" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-660563" has status "Ready":"False"
	I0924 00:45:05.431769   48595 pod_ready.go:82] duration metric: took 10.251953ms for pod "coredns-6d4b75cb6d-jmtpf" in "kube-system" namespace to be "Ready" ...
	E0924 00:45:05.431778   48595 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-660563" hosting pod "coredns-6d4b75cb6d-jmtpf" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-660563" has status "Ready":"False"
	I0924 00:45:05.431785   48595 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-660563" in "kube-system" namespace to be "Ready" ...
	I0924 00:45:05.444225   48595 pod_ready.go:98] node "test-preload-660563" hosting pod "etcd-test-preload-660563" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-660563" has status "Ready":"False"
	I0924 00:45:05.444253   48595 pod_ready.go:82] duration metric: took 12.458637ms for pod "etcd-test-preload-660563" in "kube-system" namespace to be "Ready" ...
	E0924 00:45:05.444284   48595 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-660563" hosting pod "etcd-test-preload-660563" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-660563" has status "Ready":"False"
	I0924 00:45:05.444292   48595 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-660563" in "kube-system" namespace to be "Ready" ...
	I0924 00:45:05.453982   48595 pod_ready.go:98] node "test-preload-660563" hosting pod "kube-apiserver-test-preload-660563" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-660563" has status "Ready":"False"
	I0924 00:45:05.454014   48595 pod_ready.go:82] duration metric: took 9.714132ms for pod "kube-apiserver-test-preload-660563" in "kube-system" namespace to be "Ready" ...
	E0924 00:45:05.454023   48595 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-660563" hosting pod "kube-apiserver-test-preload-660563" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-660563" has status "Ready":"False"
	I0924 00:45:05.454029   48595 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-660563" in "kube-system" namespace to be "Ready" ...
	I0924 00:45:05.484443   48595 pod_ready.go:98] node "test-preload-660563" hosting pod "kube-controller-manager-test-preload-660563" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-660563" has status "Ready":"False"
	I0924 00:45:05.484473   48595 pod_ready.go:82] duration metric: took 30.435118ms for pod "kube-controller-manager-test-preload-660563" in "kube-system" namespace to be "Ready" ...
	E0924 00:45:05.484488   48595 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-660563" hosting pod "kube-controller-manager-test-preload-660563" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-660563" has status "Ready":"False"
	I0924 00:45:05.484495   48595 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-x4jgx" in "kube-system" namespace to be "Ready" ...
	I0924 00:45:05.883332   48595 pod_ready.go:98] node "test-preload-660563" hosting pod "kube-proxy-x4jgx" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-660563" has status "Ready":"False"
	I0924 00:45:05.883360   48595 pod_ready.go:82] duration metric: took 398.857442ms for pod "kube-proxy-x4jgx" in "kube-system" namespace to be "Ready" ...
	E0924 00:45:05.883369   48595 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-660563" hosting pod "kube-proxy-x4jgx" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-660563" has status "Ready":"False"
	I0924 00:45:05.883375   48595 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-660563" in "kube-system" namespace to be "Ready" ...
	I0924 00:45:06.282546   48595 pod_ready.go:98] node "test-preload-660563" hosting pod "kube-scheduler-test-preload-660563" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-660563" has status "Ready":"False"
	I0924 00:45:06.282580   48595 pod_ready.go:82] duration metric: took 399.198682ms for pod "kube-scheduler-test-preload-660563" in "kube-system" namespace to be "Ready" ...
	E0924 00:45:06.282592   48595 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-660563" hosting pod "kube-scheduler-test-preload-660563" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-660563" has status "Ready":"False"
	I0924 00:45:06.282601   48595 pod_ready.go:39] duration metric: took 869.264606ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 00:45:06.282622   48595 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 00:45:06.293930   48595 ops.go:34] apiserver oom_adj: -16
	I0924 00:45:06.293952   48595 kubeadm.go:597] duration metric: took 8.560784546s to restartPrimaryControlPlane
	I0924 00:45:06.293962   48595 kubeadm.go:394] duration metric: took 8.607170051s to StartCluster
	I0924 00:45:06.293982   48595 settings.go:142] acquiring lock: {Name:mk2498060bfcc1414033a486e76a223de88ed325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:45:06.294064   48595 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 00:45:06.294690   48595 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/kubeconfig: {Name:mk3b29105ccc2e600682c419fcfd45ce5d22b5fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:45:06.294945   48595 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 00:45:06.295024   48595 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0924 00:45:06.295129   48595 addons.go:69] Setting storage-provisioner=true in profile "test-preload-660563"
	I0924 00:45:06.295142   48595 addons.go:69] Setting default-storageclass=true in profile "test-preload-660563"
	I0924 00:45:06.295154   48595 addons.go:234] Setting addon storage-provisioner=true in "test-preload-660563"
	W0924 00:45:06.295162   48595 addons.go:243] addon storage-provisioner should already be in state true
	I0924 00:45:06.295163   48595 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-660563"
	I0924 00:45:06.295162   48595 config.go:182] Loaded profile config "test-preload-660563": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0924 00:45:06.295191   48595 host.go:66] Checking if "test-preload-660563" exists ...
	I0924 00:45:06.295476   48595 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:45:06.295510   48595 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:45:06.295574   48595 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:45:06.295615   48595 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:45:06.296809   48595 out.go:177] * Verifying Kubernetes components...
	I0924 00:45:06.298387   48595 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 00:45:06.310802   48595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39725
	I0924 00:45:06.311437   48595 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:45:06.312029   48595 main.go:141] libmachine: Using API Version  1
	I0924 00:45:06.312059   48595 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:45:06.312463   48595 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:45:06.312709   48595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35409
	I0924 00:45:06.313054   48595 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:45:06.313080   48595 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:45:06.313092   48595 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:45:06.313535   48595 main.go:141] libmachine: Using API Version  1
	I0924 00:45:06.313559   48595 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:45:06.313906   48595 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:45:06.314146   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetState
	I0924 00:45:06.316353   48595 kapi.go:59] client config for test-preload-660563: &rest.Config{Host:"https://192.168.39.238:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19696-7623/.minikube/profiles/test-preload-660563/client.crt", KeyFile:"/home/jenkins/minikube-integration/19696-7623/.minikube/profiles/test-preload-660563/client.key", CAFile:"/home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0924 00:45:06.316734   48595 addons.go:234] Setting addon default-storageclass=true in "test-preload-660563"
	W0924 00:45:06.316759   48595 addons.go:243] addon default-storageclass should already be in state true
	I0924 00:45:06.316787   48595 host.go:66] Checking if "test-preload-660563" exists ...
	I0924 00:45:06.317160   48595 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:45:06.317226   48595 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:45:06.331491   48595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36099
	I0924 00:45:06.331967   48595 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:45:06.332524   48595 main.go:141] libmachine: Using API Version  1
	I0924 00:45:06.332551   48595 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:45:06.332910   48595 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:45:06.333175   48595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40267
	I0924 00:45:06.333582   48595 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:45:06.333609   48595 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:45:06.333623   48595 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:45:06.334101   48595 main.go:141] libmachine: Using API Version  1
	I0924 00:45:06.334123   48595 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:45:06.334456   48595 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:45:06.334720   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetState
	I0924 00:45:06.336882   48595 main.go:141] libmachine: (test-preload-660563) Calling .DriverName
	I0924 00:45:06.338946   48595 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 00:45:06.340397   48595 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 00:45:06.340415   48595 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 00:45:06.340431   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetSSHHostname
	I0924 00:45:06.343646   48595 main.go:141] libmachine: (test-preload-660563) DBG | domain test-preload-660563 has defined MAC address 52:54:00:d6:9d:ad in network mk-test-preload-660563
	I0924 00:45:06.344062   48595 main.go:141] libmachine: (test-preload-660563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:9d:ad", ip: ""} in network mk-test-preload-660563: {Iface:virbr1 ExpiryTime:2024-09-24 01:44:33 +0000 UTC Type:0 Mac:52:54:00:d6:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:test-preload-660563 Clientid:01:52:54:00:d6:9d:ad}
	I0924 00:45:06.344087   48595 main.go:141] libmachine: (test-preload-660563) DBG | domain test-preload-660563 has defined IP address 192.168.39.238 and MAC address 52:54:00:d6:9d:ad in network mk-test-preload-660563
	I0924 00:45:06.344233   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetSSHPort
	I0924 00:45:06.344436   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetSSHKeyPath
	I0924 00:45:06.344586   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetSSHUsername
	I0924 00:45:06.344733   48595 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/test-preload-660563/id_rsa Username:docker}
	I0924 00:45:06.385567   48595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44469
	I0924 00:45:06.386093   48595 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:45:06.386578   48595 main.go:141] libmachine: Using API Version  1
	I0924 00:45:06.386616   48595 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:45:06.386967   48595 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:45:06.387222   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetState
	I0924 00:45:06.389198   48595 main.go:141] libmachine: (test-preload-660563) Calling .DriverName
	I0924 00:45:06.389458   48595 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 00:45:06.389473   48595 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 00:45:06.389487   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetSSHHostname
	I0924 00:45:06.392510   48595 main.go:141] libmachine: (test-preload-660563) DBG | domain test-preload-660563 has defined MAC address 52:54:00:d6:9d:ad in network mk-test-preload-660563
	I0924 00:45:06.392931   48595 main.go:141] libmachine: (test-preload-660563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:9d:ad", ip: ""} in network mk-test-preload-660563: {Iface:virbr1 ExpiryTime:2024-09-24 01:44:33 +0000 UTC Type:0 Mac:52:54:00:d6:9d:ad Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:test-preload-660563 Clientid:01:52:54:00:d6:9d:ad}
	I0924 00:45:06.392961   48595 main.go:141] libmachine: (test-preload-660563) DBG | domain test-preload-660563 has defined IP address 192.168.39.238 and MAC address 52:54:00:d6:9d:ad in network mk-test-preload-660563
	I0924 00:45:06.393150   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetSSHPort
	I0924 00:45:06.393343   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetSSHKeyPath
	I0924 00:45:06.393516   48595 main.go:141] libmachine: (test-preload-660563) Calling .GetSSHUsername
	I0924 00:45:06.393664   48595 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/test-preload-660563/id_rsa Username:docker}
	I0924 00:45:06.462465   48595 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 00:45:06.479131   48595 node_ready.go:35] waiting up to 6m0s for node "test-preload-660563" to be "Ready" ...
	I0924 00:45:06.574470   48595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 00:45:06.603093   48595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 00:45:07.531334   48595 main.go:141] libmachine: Making call to close driver server
	I0924 00:45:07.531362   48595 main.go:141] libmachine: (test-preload-660563) Calling .Close
	I0924 00:45:07.531372   48595 main.go:141] libmachine: Making call to close driver server
	I0924 00:45:07.531391   48595 main.go:141] libmachine: (test-preload-660563) Calling .Close
	I0924 00:45:07.531645   48595 main.go:141] libmachine: Successfully made call to close driver server
	I0924 00:45:07.531678   48595 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 00:45:07.531692   48595 main.go:141] libmachine: Successfully made call to close driver server
	I0924 00:45:07.531698   48595 main.go:141] libmachine: Making call to close driver server
	I0924 00:45:07.531704   48595 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 00:45:07.531714   48595 main.go:141] libmachine: Making call to close driver server
	I0924 00:45:07.531726   48595 main.go:141] libmachine: (test-preload-660563) Calling .Close
	I0924 00:45:07.531705   48595 main.go:141] libmachine: (test-preload-660563) Calling .Close
	I0924 00:45:07.532061   48595 main.go:141] libmachine: Successfully made call to close driver server
	I0924 00:45:07.532081   48595 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 00:45:07.532084   48595 main.go:141] libmachine: (test-preload-660563) DBG | Closing plugin on server side
	I0924 00:45:07.532112   48595 main.go:141] libmachine: (test-preload-660563) DBG | Closing plugin on server side
	I0924 00:45:07.532061   48595 main.go:141] libmachine: Successfully made call to close driver server
	I0924 00:45:07.532172   48595 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 00:45:07.540041   48595 main.go:141] libmachine: Making call to close driver server
	I0924 00:45:07.540068   48595 main.go:141] libmachine: (test-preload-660563) Calling .Close
	I0924 00:45:07.540377   48595 main.go:141] libmachine: Successfully made call to close driver server
	I0924 00:45:07.540428   48595 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 00:45:07.540410   48595 main.go:141] libmachine: (test-preload-660563) DBG | Closing plugin on server side
	I0924 00:45:07.543796   48595 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0924 00:45:07.545693   48595 addons.go:510] duration metric: took 1.250675762s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0924 00:45:08.483346   48595 node_ready.go:53] node "test-preload-660563" has status "Ready":"False"
	I0924 00:45:10.983575   48595 node_ready.go:53] node "test-preload-660563" has status "Ready":"False"
	I0924 00:45:13.483789   48595 node_ready.go:53] node "test-preload-660563" has status "Ready":"False"
	I0924 00:45:13.983811   48595 node_ready.go:49] node "test-preload-660563" has status "Ready":"True"
	I0924 00:45:13.983838   48595 node_ready.go:38] duration metric: took 7.504675292s for node "test-preload-660563" to be "Ready" ...
	I0924 00:45:13.983846   48595 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 00:45:13.988590   48595 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-jmtpf" in "kube-system" namespace to be "Ready" ...
	I0924 00:45:13.993951   48595 pod_ready.go:93] pod "coredns-6d4b75cb6d-jmtpf" in "kube-system" namespace has status "Ready":"True"
	I0924 00:45:13.993972   48595 pod_ready.go:82] duration metric: took 5.355843ms for pod "coredns-6d4b75cb6d-jmtpf" in "kube-system" namespace to be "Ready" ...
	I0924 00:45:13.993986   48595 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-660563" in "kube-system" namespace to be "Ready" ...
	I0924 00:45:14.000016   48595 pod_ready.go:93] pod "etcd-test-preload-660563" in "kube-system" namespace has status "Ready":"True"
	I0924 00:45:14.000035   48595 pod_ready.go:82] duration metric: took 6.043648ms for pod "etcd-test-preload-660563" in "kube-system" namespace to be "Ready" ...
	I0924 00:45:14.000043   48595 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-660563" in "kube-system" namespace to be "Ready" ...
	I0924 00:45:14.506299   48595 pod_ready.go:93] pod "kube-apiserver-test-preload-660563" in "kube-system" namespace has status "Ready":"True"
	I0924 00:45:14.506325   48595 pod_ready.go:82] duration metric: took 506.275804ms for pod "kube-apiserver-test-preload-660563" in "kube-system" namespace to be "Ready" ...
	I0924 00:45:14.506335   48595 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-660563" in "kube-system" namespace to be "Ready" ...
	I0924 00:45:14.510629   48595 pod_ready.go:93] pod "kube-controller-manager-test-preload-660563" in "kube-system" namespace has status "Ready":"True"
	I0924 00:45:14.510649   48595 pod_ready.go:82] duration metric: took 4.308428ms for pod "kube-controller-manager-test-preload-660563" in "kube-system" namespace to be "Ready" ...
	I0924 00:45:14.510658   48595 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-x4jgx" in "kube-system" namespace to be "Ready" ...
	I0924 00:45:14.783839   48595 pod_ready.go:93] pod "kube-proxy-x4jgx" in "kube-system" namespace has status "Ready":"True"
	I0924 00:45:14.783864   48595 pod_ready.go:82] duration metric: took 273.199635ms for pod "kube-proxy-x4jgx" in "kube-system" namespace to be "Ready" ...
	I0924 00:45:14.783873   48595 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-660563" in "kube-system" namespace to be "Ready" ...
	I0924 00:45:16.789770   48595 pod_ready.go:103] pod "kube-scheduler-test-preload-660563" in "kube-system" namespace has status "Ready":"False"
	I0924 00:45:17.790207   48595 pod_ready.go:93] pod "kube-scheduler-test-preload-660563" in "kube-system" namespace has status "Ready":"True"
	I0924 00:45:17.790240   48595 pod_ready.go:82] duration metric: took 3.006359844s for pod "kube-scheduler-test-preload-660563" in "kube-system" namespace to be "Ready" ...
	I0924 00:45:17.790271   48595 pod_ready.go:39] duration metric: took 3.806416386s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 00:45:17.790287   48595 api_server.go:52] waiting for apiserver process to appear ...
	I0924 00:45:17.790349   48595 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 00:45:17.805710   48595 api_server.go:72] duration metric: took 11.510736073s to wait for apiserver process to appear ...
	I0924 00:45:17.805746   48595 api_server.go:88] waiting for apiserver healthz status ...
	I0924 00:45:17.805764   48595 api_server.go:253] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
	I0924 00:45:17.812679   48595 api_server.go:279] https://192.168.39.238:8443/healthz returned 200:
	ok
	I0924 00:45:17.813611   48595 api_server.go:141] control plane version: v1.24.4
	I0924 00:45:17.813633   48595 api_server.go:131] duration metric: took 7.881193ms to wait for apiserver health ...
	I0924 00:45:17.813641   48595 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 00:45:17.818550   48595 system_pods.go:59] 7 kube-system pods found
	I0924 00:45:17.818579   48595 system_pods.go:61] "coredns-6d4b75cb6d-jmtpf" [cfde4b18-d29f-40fb-ba3c-b1eda7029248] Running
	I0924 00:45:17.818585   48595 system_pods.go:61] "etcd-test-preload-660563" [c2b05ef0-d864-4531-82de-a06a04a82c5b] Running
	I0924 00:45:17.818588   48595 system_pods.go:61] "kube-apiserver-test-preload-660563" [ab942075-a205-461c-9549-beec259317d1] Running
	I0924 00:45:17.818593   48595 system_pods.go:61] "kube-controller-manager-test-preload-660563" [0d175056-2945-4ae6-aeee-c03b3c511eab] Running
	I0924 00:45:17.818597   48595 system_pods.go:61] "kube-proxy-x4jgx" [38108400-0645-407d-a9b3-9713c82117a4] Running
	I0924 00:45:17.818600   48595 system_pods.go:61] "kube-scheduler-test-preload-660563" [1cd22991-ed38-44c0-b7d7-73b87636f3a5] Running
	I0924 00:45:17.818604   48595 system_pods.go:61] "storage-provisioner" [e401801d-729b-45ca-94a1-89467ad83c17] Running
	I0924 00:45:17.818610   48595 system_pods.go:74] duration metric: took 4.96309ms to wait for pod list to return data ...
	I0924 00:45:17.818617   48595 default_sa.go:34] waiting for default service account to be created ...
	I0924 00:45:17.984582   48595 default_sa.go:45] found service account: "default"
	I0924 00:45:17.984612   48595 default_sa.go:55] duration metric: took 165.988208ms for default service account to be created ...
	I0924 00:45:17.984621   48595 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 00:45:18.187401   48595 system_pods.go:86] 7 kube-system pods found
	I0924 00:45:18.187431   48595 system_pods.go:89] "coredns-6d4b75cb6d-jmtpf" [cfde4b18-d29f-40fb-ba3c-b1eda7029248] Running
	I0924 00:45:18.187439   48595 system_pods.go:89] "etcd-test-preload-660563" [c2b05ef0-d864-4531-82de-a06a04a82c5b] Running
	I0924 00:45:18.187443   48595 system_pods.go:89] "kube-apiserver-test-preload-660563" [ab942075-a205-461c-9549-beec259317d1] Running
	I0924 00:45:18.187447   48595 system_pods.go:89] "kube-controller-manager-test-preload-660563" [0d175056-2945-4ae6-aeee-c03b3c511eab] Running
	I0924 00:45:18.187450   48595 system_pods.go:89] "kube-proxy-x4jgx" [38108400-0645-407d-a9b3-9713c82117a4] Running
	I0924 00:45:18.187454   48595 system_pods.go:89] "kube-scheduler-test-preload-660563" [1cd22991-ed38-44c0-b7d7-73b87636f3a5] Running
	I0924 00:45:18.187457   48595 system_pods.go:89] "storage-provisioner" [e401801d-729b-45ca-94a1-89467ad83c17] Running
	I0924 00:45:18.187463   48595 system_pods.go:126] duration metric: took 202.837985ms to wait for k8s-apps to be running ...
	I0924 00:45:18.187469   48595 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 00:45:18.187509   48595 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 00:45:18.202076   48595 system_svc.go:56] duration metric: took 14.598802ms WaitForService to wait for kubelet
	I0924 00:45:18.202104   48595 kubeadm.go:582] duration metric: took 11.907134453s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 00:45:18.202118   48595 node_conditions.go:102] verifying NodePressure condition ...
	I0924 00:45:18.383840   48595 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 00:45:18.383867   48595 node_conditions.go:123] node cpu capacity is 2
	I0924 00:45:18.383876   48595 node_conditions.go:105] duration metric: took 181.753932ms to run NodePressure ...
	I0924 00:45:18.383886   48595 start.go:241] waiting for startup goroutines ...
	I0924 00:45:18.383893   48595 start.go:246] waiting for cluster config update ...
	I0924 00:45:18.383903   48595 start.go:255] writing updated cluster config ...
	I0924 00:45:18.384150   48595 ssh_runner.go:195] Run: rm -f paused
	I0924 00:45:18.434845   48595 start.go:600] kubectl: 1.31.1, cluster: 1.24.4 (minor skew: 7)
	I0924 00:45:18.436781   48595 out.go:201] 
	W0924 00:45:18.438123   48595 out.go:270] ! /usr/local/bin/kubectl is version 1.31.1, which may have incompatibilities with Kubernetes 1.24.4.
	I0924 00:45:18.439353   48595 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0924 00:45:18.440539   48595 out.go:177] * Done! kubectl is now configured to use "test-preload-660563" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 24 00:45:19 test-preload-660563 crio[664]: time="2024-09-24 00:45:19.310541201Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727138719310514748,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e39ec6d4-1779-40f2-bd74-4b64a677f4c2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:45:19 test-preload-660563 crio[664]: time="2024-09-24 00:45:19.311181394Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dd5a85d9-f6be-448e-8e75-49ee24e93c7a name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:45:19 test-preload-660563 crio[664]: time="2024-09-24 00:45:19.311237494Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dd5a85d9-f6be-448e-8e75-49ee24e93c7a name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:45:19 test-preload-660563 crio[664]: time="2024-09-24 00:45:19.311419256Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a70e12d3e41d7fb284c62755fc01e751baf1087ef0ebc79aa243f0430a8b954d,PodSandboxId:0421e915b2f6a94d2fb025523c2d61166f019f66999b2d6d0672faf3bbbd5ed3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1727138712069450191,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-jmtpf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfde4b18-d29f-40fb-ba3c-b1eda7029248,},Annotations:map[string]string{io.kubernetes.container.hash: 116030f5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19ec17cd16e13f45e1e3a28b136abe73c0e535d66a2642eec3d31f3a83be5cb2,PodSandboxId:40e2cdd05791523d33ca5af73304c3e9b02f3329e96feb389e5fba27548e65e1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727138705341126646,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: e401801d-729b-45ca-94a1-89467ad83c17,},Annotations:map[string]string{io.kubernetes.container.hash: 2bb88c06,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86ae0dcd9a0c7cd79513924c29b8ca00b19874929f9a16452f00bbf45e0897ed,PodSandboxId:7407b94ca16dc5f308c77bb1e725e841b5a2f031c8f7db7bddc4a4a8ee305206,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1727138704956458593,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x4jgx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38
108400-0645-407d-a9b3-9713c82117a4,},Annotations:map[string]string{io.kubernetes.container.hash: 27c81436,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d14b0d094ce7bd5996c08f17e9e9dc493a96536bdf656f7da70cad2afb7b9a38,PodSandboxId:053ece549b18c6e35089d9408b454c98c5d79b55e59e54937b3a44008124f496,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1727138699713301255,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-660563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea4b116c5fe86f4c6d11de80f4d1355e,},Anno
tations:map[string]string{io.kubernetes.container.hash: a8305ca,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2526744c81ca5d932e24c38bc80e5b0d72756b617b2bb7fbd8b862af7254ca8,PodSandboxId:7637f3d009ed7252dc5a34d45d2812136d8709d270aab5924b51842f879abf32,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1727138699713134557,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-660563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7172301868ebb5cce2e3475f
1b852375,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b0f648be725c2e68138a7663c15f932013d1e93ab15a3287abad6c5b8b6b16b,PodSandboxId:248c602c84767f127ec9de7c312ee3a2834eee4684701bfa8eb0f4e7e397d2f4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1727138699704441033,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-660563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94d6817e640ccf7583bcd26a58b515cb,},
Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98927572a8cff31046bd2e08beaaa3ea37e21a336771d468491762f4d13d724a,PodSandboxId:ddd78c57f8491cf78f95a9d0d7f2c2f242687c8de91d75dd199ffa4cbf82be9e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1727138699658995704,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-660563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce721f004156cb85612573767c2ce555,},Annotations
:map[string]string{io.kubernetes.container.hash: a57902b3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dd5a85d9-f6be-448e-8e75-49ee24e93c7a name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:45:19 test-preload-660563 crio[664]: time="2024-09-24 00:45:19.349481263Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5ad9b420-72b5-4263-b107-45f48fb19be3 name=/runtime.v1.RuntimeService/Version
	Sep 24 00:45:19 test-preload-660563 crio[664]: time="2024-09-24 00:45:19.349570714Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5ad9b420-72b5-4263-b107-45f48fb19be3 name=/runtime.v1.RuntimeService/Version
	Sep 24 00:45:19 test-preload-660563 crio[664]: time="2024-09-24 00:45:19.350940337Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e14acfd4-c571-491c-a673-b066395d079b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:45:19 test-preload-660563 crio[664]: time="2024-09-24 00:45:19.351528779Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727138719351502677,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e14acfd4-c571-491c-a673-b066395d079b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:45:19 test-preload-660563 crio[664]: time="2024-09-24 00:45:19.352222769Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=557970ac-4b28-421b-bb9b-2700c8fa0359 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:45:19 test-preload-660563 crio[664]: time="2024-09-24 00:45:19.352296490Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=557970ac-4b28-421b-bb9b-2700c8fa0359 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:45:19 test-preload-660563 crio[664]: time="2024-09-24 00:45:19.352482637Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a70e12d3e41d7fb284c62755fc01e751baf1087ef0ebc79aa243f0430a8b954d,PodSandboxId:0421e915b2f6a94d2fb025523c2d61166f019f66999b2d6d0672faf3bbbd5ed3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1727138712069450191,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-jmtpf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfde4b18-d29f-40fb-ba3c-b1eda7029248,},Annotations:map[string]string{io.kubernetes.container.hash: 116030f5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19ec17cd16e13f45e1e3a28b136abe73c0e535d66a2642eec3d31f3a83be5cb2,PodSandboxId:40e2cdd05791523d33ca5af73304c3e9b02f3329e96feb389e5fba27548e65e1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727138705341126646,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: e401801d-729b-45ca-94a1-89467ad83c17,},Annotations:map[string]string{io.kubernetes.container.hash: 2bb88c06,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86ae0dcd9a0c7cd79513924c29b8ca00b19874929f9a16452f00bbf45e0897ed,PodSandboxId:7407b94ca16dc5f308c77bb1e725e841b5a2f031c8f7db7bddc4a4a8ee305206,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1727138704956458593,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x4jgx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38
108400-0645-407d-a9b3-9713c82117a4,},Annotations:map[string]string{io.kubernetes.container.hash: 27c81436,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d14b0d094ce7bd5996c08f17e9e9dc493a96536bdf656f7da70cad2afb7b9a38,PodSandboxId:053ece549b18c6e35089d9408b454c98c5d79b55e59e54937b3a44008124f496,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1727138699713301255,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-660563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea4b116c5fe86f4c6d11de80f4d1355e,},Anno
tations:map[string]string{io.kubernetes.container.hash: a8305ca,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2526744c81ca5d932e24c38bc80e5b0d72756b617b2bb7fbd8b862af7254ca8,PodSandboxId:7637f3d009ed7252dc5a34d45d2812136d8709d270aab5924b51842f879abf32,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1727138699713134557,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-660563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7172301868ebb5cce2e3475f
1b852375,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b0f648be725c2e68138a7663c15f932013d1e93ab15a3287abad6c5b8b6b16b,PodSandboxId:248c602c84767f127ec9de7c312ee3a2834eee4684701bfa8eb0f4e7e397d2f4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1727138699704441033,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-660563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94d6817e640ccf7583bcd26a58b515cb,},
Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98927572a8cff31046bd2e08beaaa3ea37e21a336771d468491762f4d13d724a,PodSandboxId:ddd78c57f8491cf78f95a9d0d7f2c2f242687c8de91d75dd199ffa4cbf82be9e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1727138699658995704,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-660563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce721f004156cb85612573767c2ce555,},Annotations
:map[string]string{io.kubernetes.container.hash: a57902b3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=557970ac-4b28-421b-bb9b-2700c8fa0359 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:45:19 test-preload-660563 crio[664]: time="2024-09-24 00:45:19.387826495Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c6010799-f82c-4d25-ba0c-02c42dd54a39 name=/runtime.v1.RuntimeService/Version
	Sep 24 00:45:19 test-preload-660563 crio[664]: time="2024-09-24 00:45:19.387900754Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c6010799-f82c-4d25-ba0c-02c42dd54a39 name=/runtime.v1.RuntimeService/Version
	Sep 24 00:45:19 test-preload-660563 crio[664]: time="2024-09-24 00:45:19.389250808Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=181f303d-9a1c-46b3-a2ac-763c395b30d7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:45:19 test-preload-660563 crio[664]: time="2024-09-24 00:45:19.389714030Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727138719389689734,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=181f303d-9a1c-46b3-a2ac-763c395b30d7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:45:19 test-preload-660563 crio[664]: time="2024-09-24 00:45:19.390292345Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ae51b2f9-4382-4924-93c6-800d6aa48352 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:45:19 test-preload-660563 crio[664]: time="2024-09-24 00:45:19.390373688Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ae51b2f9-4382-4924-93c6-800d6aa48352 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:45:19 test-preload-660563 crio[664]: time="2024-09-24 00:45:19.390589285Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a70e12d3e41d7fb284c62755fc01e751baf1087ef0ebc79aa243f0430a8b954d,PodSandboxId:0421e915b2f6a94d2fb025523c2d61166f019f66999b2d6d0672faf3bbbd5ed3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1727138712069450191,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-jmtpf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfde4b18-d29f-40fb-ba3c-b1eda7029248,},Annotations:map[string]string{io.kubernetes.container.hash: 116030f5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19ec17cd16e13f45e1e3a28b136abe73c0e535d66a2642eec3d31f3a83be5cb2,PodSandboxId:40e2cdd05791523d33ca5af73304c3e9b02f3329e96feb389e5fba27548e65e1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727138705341126646,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: e401801d-729b-45ca-94a1-89467ad83c17,},Annotations:map[string]string{io.kubernetes.container.hash: 2bb88c06,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86ae0dcd9a0c7cd79513924c29b8ca00b19874929f9a16452f00bbf45e0897ed,PodSandboxId:7407b94ca16dc5f308c77bb1e725e841b5a2f031c8f7db7bddc4a4a8ee305206,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1727138704956458593,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x4jgx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38
108400-0645-407d-a9b3-9713c82117a4,},Annotations:map[string]string{io.kubernetes.container.hash: 27c81436,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d14b0d094ce7bd5996c08f17e9e9dc493a96536bdf656f7da70cad2afb7b9a38,PodSandboxId:053ece549b18c6e35089d9408b454c98c5d79b55e59e54937b3a44008124f496,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1727138699713301255,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-660563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea4b116c5fe86f4c6d11de80f4d1355e,},Anno
tations:map[string]string{io.kubernetes.container.hash: a8305ca,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2526744c81ca5d932e24c38bc80e5b0d72756b617b2bb7fbd8b862af7254ca8,PodSandboxId:7637f3d009ed7252dc5a34d45d2812136d8709d270aab5924b51842f879abf32,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1727138699713134557,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-660563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7172301868ebb5cce2e3475f
1b852375,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b0f648be725c2e68138a7663c15f932013d1e93ab15a3287abad6c5b8b6b16b,PodSandboxId:248c602c84767f127ec9de7c312ee3a2834eee4684701bfa8eb0f4e7e397d2f4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1727138699704441033,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-660563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94d6817e640ccf7583bcd26a58b515cb,},
Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98927572a8cff31046bd2e08beaaa3ea37e21a336771d468491762f4d13d724a,PodSandboxId:ddd78c57f8491cf78f95a9d0d7f2c2f242687c8de91d75dd199ffa4cbf82be9e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1727138699658995704,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-660563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce721f004156cb85612573767c2ce555,},Annotations
:map[string]string{io.kubernetes.container.hash: a57902b3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ae51b2f9-4382-4924-93c6-800d6aa48352 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:45:19 test-preload-660563 crio[664]: time="2024-09-24 00:45:19.422599207Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b5f33cbe-a9fa-43f2-af02-7d8331fb8f58 name=/runtime.v1.RuntimeService/Version
	Sep 24 00:45:19 test-preload-660563 crio[664]: time="2024-09-24 00:45:19.422873033Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b5f33cbe-a9fa-43f2-af02-7d8331fb8f58 name=/runtime.v1.RuntimeService/Version
	Sep 24 00:45:19 test-preload-660563 crio[664]: time="2024-09-24 00:45:19.423910751Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=738129bf-0370-468b-b82e-2bd3c593bf5d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:45:19 test-preload-660563 crio[664]: time="2024-09-24 00:45:19.424455946Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727138719424430115,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=738129bf-0370-468b-b82e-2bd3c593bf5d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:45:19 test-preload-660563 crio[664]: time="2024-09-24 00:45:19.425281450Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9d688d95-fd09-41ab-b788-f71bdf086623 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:45:19 test-preload-660563 crio[664]: time="2024-09-24 00:45:19.425344117Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9d688d95-fd09-41ab-b788-f71bdf086623 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:45:19 test-preload-660563 crio[664]: time="2024-09-24 00:45:19.425527506Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a70e12d3e41d7fb284c62755fc01e751baf1087ef0ebc79aa243f0430a8b954d,PodSandboxId:0421e915b2f6a94d2fb025523c2d61166f019f66999b2d6d0672faf3bbbd5ed3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1727138712069450191,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-jmtpf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfde4b18-d29f-40fb-ba3c-b1eda7029248,},Annotations:map[string]string{io.kubernetes.container.hash: 116030f5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19ec17cd16e13f45e1e3a28b136abe73c0e535d66a2642eec3d31f3a83be5cb2,PodSandboxId:40e2cdd05791523d33ca5af73304c3e9b02f3329e96feb389e5fba27548e65e1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727138705341126646,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: e401801d-729b-45ca-94a1-89467ad83c17,},Annotations:map[string]string{io.kubernetes.container.hash: 2bb88c06,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86ae0dcd9a0c7cd79513924c29b8ca00b19874929f9a16452f00bbf45e0897ed,PodSandboxId:7407b94ca16dc5f308c77bb1e725e841b5a2f031c8f7db7bddc4a4a8ee305206,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1727138704956458593,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x4jgx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38
108400-0645-407d-a9b3-9713c82117a4,},Annotations:map[string]string{io.kubernetes.container.hash: 27c81436,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d14b0d094ce7bd5996c08f17e9e9dc493a96536bdf656f7da70cad2afb7b9a38,PodSandboxId:053ece549b18c6e35089d9408b454c98c5d79b55e59e54937b3a44008124f496,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1727138699713301255,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-660563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea4b116c5fe86f4c6d11de80f4d1355e,},Anno
tations:map[string]string{io.kubernetes.container.hash: a8305ca,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2526744c81ca5d932e24c38bc80e5b0d72756b617b2bb7fbd8b862af7254ca8,PodSandboxId:7637f3d009ed7252dc5a34d45d2812136d8709d270aab5924b51842f879abf32,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1727138699713134557,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-660563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7172301868ebb5cce2e3475f
1b852375,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b0f648be725c2e68138a7663c15f932013d1e93ab15a3287abad6c5b8b6b16b,PodSandboxId:248c602c84767f127ec9de7c312ee3a2834eee4684701bfa8eb0f4e7e397d2f4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1727138699704441033,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-660563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94d6817e640ccf7583bcd26a58b515cb,},
Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98927572a8cff31046bd2e08beaaa3ea37e21a336771d468491762f4d13d724a,PodSandboxId:ddd78c57f8491cf78f95a9d0d7f2c2f242687c8de91d75dd199ffa4cbf82be9e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1727138699658995704,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-660563,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce721f004156cb85612573767c2ce555,},Annotations
:map[string]string{io.kubernetes.container.hash: a57902b3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9d688d95-fd09-41ab-b788-f71bdf086623 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a70e12d3e41d7       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   7 seconds ago       Running             coredns                   1                   0421e915b2f6a       coredns-6d4b75cb6d-jmtpf
	19ec17cd16e13       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Running             storage-provisioner       1                   40e2cdd057915       storage-provisioner
	86ae0dcd9a0c7       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   14 seconds ago      Running             kube-proxy                1                   7407b94ca16dc       kube-proxy-x4jgx
	d14b0d094ce7b       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   19 seconds ago      Running             etcd                      1                   053ece549b18c       etcd-test-preload-660563
	f2526744c81ca       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   19 seconds ago      Running             kube-controller-manager   1                   7637f3d009ed7       kube-controller-manager-test-preload-660563
	3b0f648be725c       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   19 seconds ago      Running             kube-scheduler            1                   248c602c84767       kube-scheduler-test-preload-660563
	98927572a8cff       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   19 seconds ago      Running             kube-apiserver            1                   ddd78c57f8491       kube-apiserver-test-preload-660563
	
	
	==> coredns [a70e12d3e41d7fb284c62755fc01e751baf1087ef0ebc79aa243f0430a8b954d] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:60148 - 60761 "HINFO IN 4256119118535767904.6376376023160490994. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011772056s
	
	
	==> describe nodes <==
	Name:               test-preload-660563
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-660563
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c
	                    minikube.k8s.io/name=test-preload-660563
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_24T00_43_29_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 00:43:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-660563
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 00:45:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 00:45:13 +0000   Tue, 24 Sep 2024 00:43:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 00:45:13 +0000   Tue, 24 Sep 2024 00:43:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 00:45:13 +0000   Tue, 24 Sep 2024 00:43:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 00:45:13 +0000   Tue, 24 Sep 2024 00:45:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.238
	  Hostname:    test-preload-660563
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 49c6a2792d8b47438606d973fd163172
	  System UUID:                49c6a279-2d8b-4743-8606-d973fd163172
	  Boot ID:                    3e7a4a08-728e-4a91-b05c-894a8be7638e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-jmtpf                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     98s
	  kube-system                 etcd-test-preload-660563                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         112s
	  kube-system                 kube-apiserver-test-preload-660563             250m (12%)    0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-controller-manager-test-preload-660563    200m (10%)    0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-proxy-x4jgx                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-scheduler-test-preload-660563             100m (5%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 14s                  kube-proxy       
	  Normal  Starting                 95s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  118s (x5 over 118s)  kubelet          Node test-preload-660563 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s (x5 over 118s)  kubelet          Node test-preload-660563 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     118s (x5 over 118s)  kubelet          Node test-preload-660563 status is now: NodeHasSufficientPID
	  Normal  Starting                 111s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  110s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  110s                 kubelet          Node test-preload-660563 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    110s                 kubelet          Node test-preload-660563 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     110s                 kubelet          Node test-preload-660563 status is now: NodeHasSufficientPID
	  Normal  NodeReady                100s                 kubelet          Node test-preload-660563 status is now: NodeReady
	  Normal  RegisteredNode           98s                  node-controller  Node test-preload-660563 event: Registered Node test-preload-660563 in Controller
	  Normal  Starting                 21s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20s (x8 over 20s)    kubelet          Node test-preload-660563 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20s (x8 over 20s)    kubelet          Node test-preload-660563 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20s (x7 over 20s)    kubelet          Node test-preload-660563 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3s                   node-controller  Node test-preload-660563 event: Registered Node test-preload-660563 in Controller
	
	
	==> dmesg <==
	[Sep24 00:44] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051263] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039178] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.791297] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.982543] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.583883] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.731922] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.063209] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056967] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.174631] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.115219] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.280540] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[ +12.979904] systemd-fstab-generator[990]: Ignoring "noauto" option for root device
	[  +0.053343] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.569052] systemd-fstab-generator[1118]: Ignoring "noauto" option for root device
	[Sep24 00:45] kauditd_printk_skb: 105 callbacks suppressed
	[  +2.294474] systemd-fstab-generator[1751]: Ignoring "noauto" option for root device
	[  +5.542601] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [d14b0d094ce7bd5996c08f17e9e9dc493a96536bdf656f7da70cad2afb7b9a38] <==
	{"level":"info","ts":"2024-09-24T00:45:00.134Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"fff3906243738b90","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-09-24T00:45:00.140Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-09-24T00:45:00.141Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fff3906243738b90 switched to configuration voters=(18443243650725153680)"}
	{"level":"info","ts":"2024-09-24T00:45:00.141Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-24T00:45:00.141Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3658928c14b8a733","local-member-id":"fff3906243738b90","added-peer-id":"fff3906243738b90","added-peer-peer-urls":["https://192.168.39.238:2380"]}
	{"level":"info","ts":"2024-09-24T00:45:00.141Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3658928c14b8a733","local-member-id":"fff3906243738b90","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T00:45:00.141Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"fff3906243738b90","initial-advertise-peer-urls":["https://192.168.39.238:2380"],"listen-peer-urls":["https://192.168.39.238:2380"],"advertise-client-urls":["https://192.168.39.238:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.238:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-24T00:45:00.142Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-24T00:45:00.142Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.238:2380"}
	{"level":"info","ts":"2024-09-24T00:45:00.144Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.238:2380"}
	{"level":"info","ts":"2024-09-24T00:45:00.145Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T00:45:01.092Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fff3906243738b90 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-24T00:45:01.092Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fff3906243738b90 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-24T00:45:01.092Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fff3906243738b90 received MsgPreVoteResp from fff3906243738b90 at term 2"}
	{"level":"info","ts":"2024-09-24T00:45:01.092Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fff3906243738b90 became candidate at term 3"}
	{"level":"info","ts":"2024-09-24T00:45:01.092Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fff3906243738b90 received MsgVoteResp from fff3906243738b90 at term 3"}
	{"level":"info","ts":"2024-09-24T00:45:01.092Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fff3906243738b90 became leader at term 3"}
	{"level":"info","ts":"2024-09-24T00:45:01.092Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fff3906243738b90 elected leader fff3906243738b90 at term 3"}
	{"level":"info","ts":"2024-09-24T00:45:01.103Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"fff3906243738b90","local-member-attributes":"{Name:test-preload-660563 ClientURLs:[https://192.168.39.238:2379]}","request-path":"/0/members/fff3906243738b90/attributes","cluster-id":"3658928c14b8a733","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-24T00:45:01.103Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-24T00:45:01.105Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-24T00:45:01.106Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.238:2379"}
	{"level":"info","ts":"2024-09-24T00:45:01.108Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-24T00:45:01.138Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-24T00:45:01.138Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 00:45:19 up 0 min,  0 users,  load average: 1.76, 0.44, 0.15
	Linux test-preload-660563 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [98927572a8cff31046bd2e08beaaa3ea37e21a336771d468491762f4d13d724a] <==
	I0924 00:45:03.631725       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0924 00:45:03.631816       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0924 00:45:03.645573       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0924 00:45:03.645638       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0924 00:45:03.645704       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0924 00:45:03.658489       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0924 00:45:03.798925       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	E0924 00:45:03.807204       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0924 00:45:03.823130       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0924 00:45:03.844655       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0924 00:45:03.847419       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0924 00:45:03.847575       1 cache.go:39] Caches are synced for autoregister controller
	I0924 00:45:03.848726       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0924 00:45:03.848774       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0924 00:45:03.849773       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0924 00:45:04.331129       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0924 00:45:04.633869       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0924 00:45:05.244179       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0924 00:45:05.258167       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0924 00:45:05.313737       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0924 00:45:05.334651       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0924 00:45:05.366564       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0924 00:45:05.378076       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0924 00:45:16.177806       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0924 00:45:16.236571       1 controller.go:611] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [f2526744c81ca5d932e24c38bc80e5b0d72756b617b2bb7fbd8b862af7254ca8] <==
	I0924 00:45:16.161862       1 shared_informer.go:262] Caches are synced for expand
	I0924 00:45:16.165196       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0924 00:45:16.173536       1 shared_informer.go:262] Caches are synced for HPA
	I0924 00:45:16.186188       1 shared_informer.go:262] Caches are synced for resource quota
	I0924 00:45:16.202952       1 shared_informer.go:262] Caches are synced for persistent volume
	I0924 00:45:16.208736       1 shared_informer.go:262] Caches are synced for attach detach
	I0924 00:45:16.209032       1 shared_informer.go:262] Caches are synced for disruption
	I0924 00:45:16.209069       1 disruption.go:371] Sending events to api server.
	I0924 00:45:16.215308       1 shared_informer.go:262] Caches are synced for ephemeral
	I0924 00:45:16.215357       1 shared_informer.go:262] Caches are synced for endpoint
	I0924 00:45:16.217639       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0924 00:45:16.219229       1 shared_informer.go:262] Caches are synced for deployment
	I0924 00:45:16.223366       1 shared_informer.go:262] Caches are synced for daemon sets
	I0924 00:45:16.228720       1 shared_informer.go:262] Caches are synced for taint
	I0924 00:45:16.228942       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0924 00:45:16.229224       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-660563. Assuming now as a timestamp.
	I0924 00:45:16.229281       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0924 00:45:16.229915       1 shared_informer.go:262] Caches are synced for job
	I0924 00:45:16.230060       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0924 00:45:16.233207       1 shared_informer.go:262] Caches are synced for GC
	I0924 00:45:16.234268       1 event.go:294] "Event occurred" object="test-preload-660563" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-660563 event: Registered Node test-preload-660563 in Controller"
	I0924 00:45:16.262800       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0924 00:45:16.730531       1 shared_informer.go:262] Caches are synced for garbage collector
	I0924 00:45:16.745308       1 shared_informer.go:262] Caches are synced for garbage collector
	I0924 00:45:16.745364       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [86ae0dcd9a0c7cd79513924c29b8ca00b19874929f9a16452f00bbf45e0897ed] <==
	I0924 00:45:05.200309       1 node.go:163] Successfully retrieved node IP: 192.168.39.238
	I0924 00:45:05.200384       1 server_others.go:138] "Detected node IP" address="192.168.39.238"
	I0924 00:45:05.200441       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0924 00:45:05.309958       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0924 00:45:05.309986       1 server_others.go:206] "Using iptables Proxier"
	I0924 00:45:05.310286       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0924 00:45:05.314068       1 server.go:661] "Version info" version="v1.24.4"
	I0924 00:45:05.319645       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 00:45:05.323849       1 config.go:317] "Starting service config controller"
	I0924 00:45:05.324269       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0924 00:45:05.324314       1 config.go:226] "Starting endpoint slice config controller"
	I0924 00:45:05.324320       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0924 00:45:05.325949       1 config.go:444] "Starting node config controller"
	I0924 00:45:05.325960       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0924 00:45:05.425366       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0924 00:45:05.425406       1 shared_informer.go:262] Caches are synced for service config
	I0924 00:45:05.426148       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [3b0f648be725c2e68138a7663c15f932013d1e93ab15a3287abad6c5b8b6b16b] <==
	I0924 00:45:00.889664       1 serving.go:348] Generated self-signed cert in-memory
	W0924 00:45:03.690852       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0924 00:45:03.690949       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0924 00:45:03.690979       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0924 00:45:03.691003       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0924 00:45:03.758357       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0924 00:45:03.758458       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 00:45:03.772270       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0924 00:45:03.772714       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0924 00:45:03.772617       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0924 00:45:03.772690       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0924 00:45:03.873186       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 24 00:45:03 test-preload-660563 kubelet[1125]: I0924 00:45:03.947292    1125 apiserver.go:52] "Watching apiserver"
	Sep 24 00:45:03 test-preload-660563 kubelet[1125]: I0924 00:45:03.950623    1125 topology_manager.go:200] "Topology Admit Handler"
	Sep 24 00:45:03 test-preload-660563 kubelet[1125]: I0924 00:45:03.950823    1125 topology_manager.go:200] "Topology Admit Handler"
	Sep 24 00:45:03 test-preload-660563 kubelet[1125]: I0924 00:45:03.950962    1125 topology_manager.go:200] "Topology Admit Handler"
	Sep 24 00:45:03 test-preload-660563 kubelet[1125]: E0924 00:45:03.951715    1125 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-jmtpf" podUID=cfde4b18-d29f-40fb-ba3c-b1eda7029248
	Sep 24 00:45:04 test-preload-660563 kubelet[1125]: I0924 00:45:04.009863    1125 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e401801d-729b-45ca-94a1-89467ad83c17-tmp\") pod \"storage-provisioner\" (UID: \"e401801d-729b-45ca-94a1-89467ad83c17\") " pod="kube-system/storage-provisioner"
	Sep 24 00:45:04 test-preload-660563 kubelet[1125]: I0924 00:45:04.010026    1125 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/38108400-0645-407d-a9b3-9713c82117a4-xtables-lock\") pod \"kube-proxy-x4jgx\" (UID: \"38108400-0645-407d-a9b3-9713c82117a4\") " pod="kube-system/kube-proxy-x4jgx"
	Sep 24 00:45:04 test-preload-660563 kubelet[1125]: I0924 00:45:04.010058    1125 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cfde4b18-d29f-40fb-ba3c-b1eda7029248-config-volume\") pod \"coredns-6d4b75cb6d-jmtpf\" (UID: \"cfde4b18-d29f-40fb-ba3c-b1eda7029248\") " pod="kube-system/coredns-6d4b75cb6d-jmtpf"
	Sep 24 00:45:04 test-preload-660563 kubelet[1125]: I0924 00:45:04.010082    1125 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgcn2\" (UniqueName: \"kubernetes.io/projected/e401801d-729b-45ca-94a1-89467ad83c17-kube-api-access-lgcn2\") pod \"storage-provisioner\" (UID: \"e401801d-729b-45ca-94a1-89467ad83c17\") " pod="kube-system/storage-provisioner"
	Sep 24 00:45:04 test-preload-660563 kubelet[1125]: I0924 00:45:04.010145    1125 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/38108400-0645-407d-a9b3-9713c82117a4-kube-proxy\") pod \"kube-proxy-x4jgx\" (UID: \"38108400-0645-407d-a9b3-9713c82117a4\") " pod="kube-system/kube-proxy-x4jgx"
	Sep 24 00:45:04 test-preload-660563 kubelet[1125]: I0924 00:45:04.010297    1125 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/38108400-0645-407d-a9b3-9713c82117a4-lib-modules\") pod \"kube-proxy-x4jgx\" (UID: \"38108400-0645-407d-a9b3-9713c82117a4\") " pod="kube-system/kube-proxy-x4jgx"
	Sep 24 00:45:04 test-preload-660563 kubelet[1125]: I0924 00:45:04.010346    1125 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n94mt\" (UniqueName: \"kubernetes.io/projected/38108400-0645-407d-a9b3-9713c82117a4-kube-api-access-n94mt\") pod \"kube-proxy-x4jgx\" (UID: \"38108400-0645-407d-a9b3-9713c82117a4\") " pod="kube-system/kube-proxy-x4jgx"
	Sep 24 00:45:04 test-preload-660563 kubelet[1125]: I0924 00:45:04.010391    1125 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twqmp\" (UniqueName: \"kubernetes.io/projected/cfde4b18-d29f-40fb-ba3c-b1eda7029248-kube-api-access-twqmp\") pod \"coredns-6d4b75cb6d-jmtpf\" (UID: \"cfde4b18-d29f-40fb-ba3c-b1eda7029248\") " pod="kube-system/coredns-6d4b75cb6d-jmtpf"
	Sep 24 00:45:04 test-preload-660563 kubelet[1125]: I0924 00:45:04.010414    1125 reconciler.go:159] "Reconciler: start to sync state"
	Sep 24 00:45:04 test-preload-660563 kubelet[1125]: E0924 00:45:04.019706    1125 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Sep 24 00:45:04 test-preload-660563 kubelet[1125]: E0924 00:45:04.114563    1125 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 24 00:45:04 test-preload-660563 kubelet[1125]: E0924 00:45:04.114735    1125 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/cfde4b18-d29f-40fb-ba3c-b1eda7029248-config-volume podName:cfde4b18-d29f-40fb-ba3c-b1eda7029248 nodeName:}" failed. No retries permitted until 2024-09-24 00:45:04.614668849 +0000 UTC m=+5.784654710 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/cfde4b18-d29f-40fb-ba3c-b1eda7029248-config-volume") pod "coredns-6d4b75cb6d-jmtpf" (UID: "cfde4b18-d29f-40fb-ba3c-b1eda7029248") : object "kube-system"/"coredns" not registered
	Sep 24 00:45:04 test-preload-660563 kubelet[1125]: E0924 00:45:04.618551    1125 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 24 00:45:04 test-preload-660563 kubelet[1125]: E0924 00:45:04.618644    1125 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/cfde4b18-d29f-40fb-ba3c-b1eda7029248-config-volume podName:cfde4b18-d29f-40fb-ba3c-b1eda7029248 nodeName:}" failed. No retries permitted until 2024-09-24 00:45:05.618627862 +0000 UTC m=+6.788613717 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/cfde4b18-d29f-40fb-ba3c-b1eda7029248-config-volume") pod "coredns-6d4b75cb6d-jmtpf" (UID: "cfde4b18-d29f-40fb-ba3c-b1eda7029248") : object "kube-system"/"coredns" not registered
	Sep 24 00:45:05 test-preload-660563 kubelet[1125]: E0924 00:45:05.626375    1125 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 24 00:45:05 test-preload-660563 kubelet[1125]: E0924 00:45:05.626461    1125 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/cfde4b18-d29f-40fb-ba3c-b1eda7029248-config-volume podName:cfde4b18-d29f-40fb-ba3c-b1eda7029248 nodeName:}" failed. No retries permitted until 2024-09-24 00:45:07.626445123 +0000 UTC m=+8.796430978 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/cfde4b18-d29f-40fb-ba3c-b1eda7029248-config-volume") pod "coredns-6d4b75cb6d-jmtpf" (UID: "cfde4b18-d29f-40fb-ba3c-b1eda7029248") : object "kube-system"/"coredns" not registered
	Sep 24 00:45:06 test-preload-660563 kubelet[1125]: E0924 00:45:06.064056    1125 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-jmtpf" podUID=cfde4b18-d29f-40fb-ba3c-b1eda7029248
	Sep 24 00:45:07 test-preload-660563 kubelet[1125]: E0924 00:45:07.642157    1125 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 24 00:45:07 test-preload-660563 kubelet[1125]: E0924 00:45:07.642242    1125 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/cfde4b18-d29f-40fb-ba3c-b1eda7029248-config-volume podName:cfde4b18-d29f-40fb-ba3c-b1eda7029248 nodeName:}" failed. No retries permitted until 2024-09-24 00:45:11.642225145 +0000 UTC m=+12.812210987 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/cfde4b18-d29f-40fb-ba3c-b1eda7029248-config-volume") pod "coredns-6d4b75cb6d-jmtpf" (UID: "cfde4b18-d29f-40fb-ba3c-b1eda7029248") : object "kube-system"/"coredns" not registered
	Sep 24 00:45:08 test-preload-660563 kubelet[1125]: E0924 00:45:08.064214    1125 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-jmtpf" podUID=cfde4b18-d29f-40fb-ba3c-b1eda7029248
	
	
	==> storage-provisioner [19ec17cd16e13f45e1e3a28b136abe73c0e535d66a2642eec3d31f3a83be5cb2] <==
	I0924 00:45:05.455215       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-660563 -n test-preload-660563
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-660563 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-660563" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-660563
--- FAIL: TestPreload (184.21s)

TestKubernetesUpgrade (392.15s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-619300 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0924 00:48:38.361560   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/functional-666615/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-619300 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m46.770175906s)

-- stdout --
	* [kubernetes-upgrade-619300] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19696
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19696-7623/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7623/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-619300" primary control-plane node in "kubernetes-upgrade-619300" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0924 00:48:36.914414   51216 out.go:345] Setting OutFile to fd 1 ...
	I0924 00:48:36.914863   51216 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:48:36.914877   51216 out.go:358] Setting ErrFile to fd 2...
	I0924 00:48:36.914884   51216 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:48:36.915192   51216 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7623/.minikube/bin
	I0924 00:48:36.916136   51216 out.go:352] Setting JSON to false
	I0924 00:48:36.917495   51216 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5461,"bootTime":1727133456,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0924 00:48:36.917656   51216 start.go:139] virtualization: kvm guest
	I0924 00:48:36.920153   51216 out.go:177] * [kubernetes-upgrade-619300] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0924 00:48:36.921790   51216 out.go:177]   - MINIKUBE_LOCATION=19696
	I0924 00:48:36.921806   51216 notify.go:220] Checking for updates...
	I0924 00:48:36.925096   51216 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 00:48:36.926541   51216 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 00:48:36.928236   51216 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7623/.minikube
	I0924 00:48:36.929762   51216 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0924 00:48:36.931258   51216 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 00:48:36.933194   51216 config.go:182] Loaded profile config "NoKubernetes-198857": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 00:48:36.933353   51216 config.go:182] Loaded profile config "pause-587180": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 00:48:36.933478   51216 config.go:182] Loaded profile config "running-upgrade-216884": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0924 00:48:36.933647   51216 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 00:48:36.981973   51216 out.go:177] * Using the kvm2 driver based on user configuration
	I0924 00:48:36.983167   51216 start.go:297] selected driver: kvm2
	I0924 00:48:36.983184   51216 start.go:901] validating driver "kvm2" against <nil>
	I0924 00:48:36.983199   51216 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 00:48:36.983968   51216 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 00:48:36.984061   51216 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19696-7623/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0924 00:48:37.003280   51216 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0924 00:48:37.003349   51216 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0924 00:48:37.003625   51216 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0924 00:48:37.003660   51216 cni.go:84] Creating CNI manager for ""
	I0924 00:48:37.003726   51216 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 00:48:37.003739   51216 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0924 00:48:37.003806   51216 start.go:340] cluster config:
	{Name:kubernetes-upgrade-619300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-619300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 00:48:37.003924   51216 iso.go:125] acquiring lock: {Name:mk8a983e49e920eac32e0f79c34143c3d478115e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 00:48:37.007016   51216 out.go:177] * Starting "kubernetes-upgrade-619300" primary control-plane node in "kubernetes-upgrade-619300" cluster
	I0924 00:48:37.008672   51216 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0924 00:48:37.008734   51216 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0924 00:48:37.008749   51216 cache.go:56] Caching tarball of preloaded images
	I0924 00:48:37.008839   51216 preload.go:172] Found /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0924 00:48:37.008853   51216 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0924 00:48:37.008961   51216 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kubernetes-upgrade-619300/config.json ...
	I0924 00:48:37.008986   51216 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kubernetes-upgrade-619300/config.json: {Name:mkd32e8c3556e2e720088ff6d6987c20c38844bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:48:37.009168   51216 start.go:360] acquireMachinesLock for kubernetes-upgrade-619300: {Name:mkdd0eb053efc5a47fd01c8410d7f603dccb8d0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 00:48:52.504890   51216 start.go:364] duration metric: took 15.495692499s to acquireMachinesLock for "kubernetes-upgrade-619300"
	I0924 00:48:52.505003   51216 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-619300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-619300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 00:48:52.505121   51216 start.go:125] createHost starting for "" (driver="kvm2")
	I0924 00:48:52.507186   51216 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0924 00:48:52.507377   51216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:48:52.507452   51216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:48:52.524240   51216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43863
	I0924 00:48:52.524767   51216 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:48:52.525374   51216 main.go:141] libmachine: Using API Version  1
	I0924 00:48:52.525401   51216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:48:52.525746   51216 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:48:52.525975   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetMachineName
	I0924 00:48:52.526123   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .DriverName
	I0924 00:48:52.526316   51216 start.go:159] libmachine.API.Create for "kubernetes-upgrade-619300" (driver="kvm2")
	I0924 00:48:52.526351   51216 client.go:168] LocalClient.Create starting
	I0924 00:48:52.526389   51216 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem
	I0924 00:48:52.526441   51216 main.go:141] libmachine: Decoding PEM data...
	I0924 00:48:52.526466   51216 main.go:141] libmachine: Parsing certificate...
	I0924 00:48:52.526539   51216 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem
	I0924 00:48:52.526567   51216 main.go:141] libmachine: Decoding PEM data...
	I0924 00:48:52.526587   51216 main.go:141] libmachine: Parsing certificate...
	I0924 00:48:52.526610   51216 main.go:141] libmachine: Running pre-create checks...
	I0924 00:48:52.526624   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .PreCreateCheck
	I0924 00:48:52.527035   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetConfigRaw
	I0924 00:48:52.527552   51216 main.go:141] libmachine: Creating machine...
	I0924 00:48:52.527571   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .Create
	I0924 00:48:52.527712   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Creating KVM machine...
	I0924 00:48:52.529087   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | found existing default KVM network
	I0924 00:48:52.531158   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | I0924 00:48:52.530959   51477 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00026a1a0}
	I0924 00:48:52.531189   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | created network xml: 
	I0924 00:48:52.531203   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | <network>
	I0924 00:48:52.531213   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG |   <name>mk-kubernetes-upgrade-619300</name>
	I0924 00:48:52.531228   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG |   <dns enable='no'/>
	I0924 00:48:52.531235   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG |   
	I0924 00:48:52.531245   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0924 00:48:52.531255   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG |     <dhcp>
	I0924 00:48:52.531265   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0924 00:48:52.531275   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG |     </dhcp>
	I0924 00:48:52.531308   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG |   </ip>
	I0924 00:48:52.531331   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG |   
	I0924 00:48:52.531365   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | </network>
	I0924 00:48:52.531396   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | 
	I0924 00:48:52.537145   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | trying to create private KVM network mk-kubernetes-upgrade-619300 192.168.39.0/24...
	I0924 00:48:52.619719   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | private KVM network mk-kubernetes-upgrade-619300 192.168.39.0/24 created
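
The driver log above shows the private libvirt network definition created for this cluster (name mk-kubernetes-upgrade-619300, subnet 192.168.39.0/24 with a DHCP range). As a minimal sketch only, assuming nothing about minikube's real implementation, the same XML can be produced with Go's text/template; the struct and field names below are illustrative:

package main

import (
	"os"
	"text/template"
)

// networkTmpl mirrors the <network> XML printed in the log above.
const networkTmpl = `<network>
  <name>mk-{{.Name}}</name>
  <dns enable='no'/>
  <ip address='{{.Gateway}}' netmask='{{.Netmask}}'>
    <dhcp>
      <range start='{{.ClientMin}}' end='{{.ClientMax}}'/>
    </dhcp>
  </ip>
</network>
`

type network struct {
	Name, Gateway, Netmask, ClientMin, ClientMax string
}

func main() {
	tmpl := template.Must(template.New("net").Parse(networkTmpl))
	// Values taken from the free subnet reported in the log (192.168.39.0/24).
	n := network{
		Name:      "kubernetes-upgrade-619300",
		Gateway:   "192.168.39.1",
		Netmask:   "255.255.255.0",
		ClientMin: "192.168.39.2",
		ClientMax: "192.168.39.253",
	}
	if err := tmpl.Execute(os.Stdout, n); err != nil {
		panic(err)
	}
}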
	I0924 00:48:52.619772   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | I0924 00:48:52.619669   51477 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19696-7623/.minikube
	I0924 00:48:52.619798   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Setting up store path in /home/jenkins/minikube-integration/19696-7623/.minikube/machines/kubernetes-upgrade-619300 ...
	I0924 00:48:52.619815   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Building disk image from file:///home/jenkins/minikube-integration/19696-7623/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0924 00:48:52.619842   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Downloading /home/jenkins/minikube-integration/19696-7623/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19696-7623/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0924 00:48:52.858785   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | I0924 00:48:52.858676   51477 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/kubernetes-upgrade-619300/id_rsa...
	I0924 00:48:52.901920   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | I0924 00:48:52.901775   51477 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/kubernetes-upgrade-619300/kubernetes-upgrade-619300.rawdisk...
	I0924 00:48:52.901977   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | Writing magic tar header
	I0924 00:48:52.901992   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | Writing SSH key tar header
	I0924 00:48:52.902004   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | I0924 00:48:52.901935   51477 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19696-7623/.minikube/machines/kubernetes-upgrade-619300 ...
	I0924 00:48:52.902108   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/kubernetes-upgrade-619300
	I0924 00:48:52.902153   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623/.minikube/machines
	I0924 00:48:52.902172   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623/.minikube/machines/kubernetes-upgrade-619300 (perms=drwx------)
	I0924 00:48:52.902191   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623/.minikube/machines (perms=drwxr-xr-x)
	I0924 00:48:52.902201   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623/.minikube (perms=drwxr-xr-x)
	I0924 00:48:52.902216   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623 (perms=drwxrwxr-x)
	I0924 00:48:52.902229   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0924 00:48:52.902242   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623/.minikube
	I0924 00:48:52.902255   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0924 00:48:52.902268   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623
	I0924 00:48:52.902283   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0924 00:48:52.902292   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | Checking permissions on dir: /home/jenkins
	I0924 00:48:52.902301   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | Checking permissions on dir: /home
	I0924 00:48:52.902312   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | Skipping /home - not owner
	I0924 00:48:52.902339   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Creating domain...
	I0924 00:48:52.903451   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) define libvirt domain using xml: 
	I0924 00:48:52.903462   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) <domain type='kvm'>
	I0924 00:48:52.903493   51216 main.go:141] libmachine: (kubernetes-upgrade-619300)   <name>kubernetes-upgrade-619300</name>
	I0924 00:48:52.903517   51216 main.go:141] libmachine: (kubernetes-upgrade-619300)   <memory unit='MiB'>2200</memory>
	I0924 00:48:52.903530   51216 main.go:141] libmachine: (kubernetes-upgrade-619300)   <vcpu>2</vcpu>
	I0924 00:48:52.903544   51216 main.go:141] libmachine: (kubernetes-upgrade-619300)   <features>
	I0924 00:48:52.903566   51216 main.go:141] libmachine: (kubernetes-upgrade-619300)     <acpi/>
	I0924 00:48:52.903575   51216 main.go:141] libmachine: (kubernetes-upgrade-619300)     <apic/>
	I0924 00:48:52.903581   51216 main.go:141] libmachine: (kubernetes-upgrade-619300)     <pae/>
	I0924 00:48:52.903609   51216 main.go:141] libmachine: (kubernetes-upgrade-619300)     
	I0924 00:48:52.903622   51216 main.go:141] libmachine: (kubernetes-upgrade-619300)   </features>
	I0924 00:48:52.903633   51216 main.go:141] libmachine: (kubernetes-upgrade-619300)   <cpu mode='host-passthrough'>
	I0924 00:48:52.903641   51216 main.go:141] libmachine: (kubernetes-upgrade-619300)   
	I0924 00:48:52.903650   51216 main.go:141] libmachine: (kubernetes-upgrade-619300)   </cpu>
	I0924 00:48:52.903659   51216 main.go:141] libmachine: (kubernetes-upgrade-619300)   <os>
	I0924 00:48:52.903669   51216 main.go:141] libmachine: (kubernetes-upgrade-619300)     <type>hvm</type>
	I0924 00:48:52.903676   51216 main.go:141] libmachine: (kubernetes-upgrade-619300)     <boot dev='cdrom'/>
	I0924 00:48:52.903689   51216 main.go:141] libmachine: (kubernetes-upgrade-619300)     <boot dev='hd'/>
	I0924 00:48:52.903700   51216 main.go:141] libmachine: (kubernetes-upgrade-619300)     <bootmenu enable='no'/>
	I0924 00:48:52.903706   51216 main.go:141] libmachine: (kubernetes-upgrade-619300)   </os>
	I0924 00:48:52.903715   51216 main.go:141] libmachine: (kubernetes-upgrade-619300)   <devices>
	I0924 00:48:52.903726   51216 main.go:141] libmachine: (kubernetes-upgrade-619300)     <disk type='file' device='cdrom'>
	I0924 00:48:52.903744   51216 main.go:141] libmachine: (kubernetes-upgrade-619300)       <source file='/home/jenkins/minikube-integration/19696-7623/.minikube/machines/kubernetes-upgrade-619300/boot2docker.iso'/>
	I0924 00:48:52.903763   51216 main.go:141] libmachine: (kubernetes-upgrade-619300)       <target dev='hdc' bus='scsi'/>
	I0924 00:48:52.903771   51216 main.go:141] libmachine: (kubernetes-upgrade-619300)       <readonly/>
	I0924 00:48:52.903775   51216 main.go:141] libmachine: (kubernetes-upgrade-619300)     </disk>
	I0924 00:48:52.903787   51216 main.go:141] libmachine: (kubernetes-upgrade-619300)     <disk type='file' device='disk'>
	I0924 00:48:52.903799   51216 main.go:141] libmachine: (kubernetes-upgrade-619300)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0924 00:48:52.903820   51216 main.go:141] libmachine: (kubernetes-upgrade-619300)       <source file='/home/jenkins/minikube-integration/19696-7623/.minikube/machines/kubernetes-upgrade-619300/kubernetes-upgrade-619300.rawdisk'/>
	I0924 00:48:52.903834   51216 main.go:141] libmachine: (kubernetes-upgrade-619300)       <target dev='hda' bus='virtio'/>
	I0924 00:48:52.903864   51216 main.go:141] libmachine: (kubernetes-upgrade-619300)     </disk>
	I0924 00:48:52.903889   51216 main.go:141] libmachine: (kubernetes-upgrade-619300)     <interface type='network'>
	I0924 00:48:52.903901   51216 main.go:141] libmachine: (kubernetes-upgrade-619300)       <source network='mk-kubernetes-upgrade-619300'/>
	I0924 00:48:52.903915   51216 main.go:141] libmachine: (kubernetes-upgrade-619300)       <model type='virtio'/>
	I0924 00:48:52.903927   51216 main.go:141] libmachine: (kubernetes-upgrade-619300)     </interface>
	I0924 00:48:52.903935   51216 main.go:141] libmachine: (kubernetes-upgrade-619300)     <interface type='network'>
	I0924 00:48:52.903946   51216 main.go:141] libmachine: (kubernetes-upgrade-619300)       <source network='default'/>
	I0924 00:48:52.903953   51216 main.go:141] libmachine: (kubernetes-upgrade-619300)       <model type='virtio'/>
	I0924 00:48:52.903963   51216 main.go:141] libmachine: (kubernetes-upgrade-619300)     </interface>
	I0924 00:48:52.903970   51216 main.go:141] libmachine: (kubernetes-upgrade-619300)     <serial type='pty'>
	I0924 00:48:52.903982   51216 main.go:141] libmachine: (kubernetes-upgrade-619300)       <target port='0'/>
	I0924 00:48:52.903992   51216 main.go:141] libmachine: (kubernetes-upgrade-619300)     </serial>
	I0924 00:48:52.904001   51216 main.go:141] libmachine: (kubernetes-upgrade-619300)     <console type='pty'>
	I0924 00:48:52.904012   51216 main.go:141] libmachine: (kubernetes-upgrade-619300)       <target type='serial' port='0'/>
	I0924 00:48:52.904037   51216 main.go:141] libmachine: (kubernetes-upgrade-619300)     </console>
	I0924 00:48:52.904047   51216 main.go:141] libmachine: (kubernetes-upgrade-619300)     <rng model='virtio'>
	I0924 00:48:52.904073   51216 main.go:141] libmachine: (kubernetes-upgrade-619300)       <backend model='random'>/dev/random</backend>
	I0924 00:48:52.904094   51216 main.go:141] libmachine: (kubernetes-upgrade-619300)     </rng>
	I0924 00:48:52.904108   51216 main.go:141] libmachine: (kubernetes-upgrade-619300)     
	I0924 00:48:52.904117   51216 main.go:141] libmachine: (kubernetes-upgrade-619300)     
	I0924 00:48:52.904126   51216 main.go:141] libmachine: (kubernetes-upgrade-619300)   </devices>
	I0924 00:48:52.904136   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) </domain>
	I0924 00:48:52.904146   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) 
	I0924 00:48:52.908409   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has defined MAC address 52:54:00:a7:b1:03 in network default
	I0924 00:48:52.909061   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Ensuring networks are active...
	I0924 00:48:52.909088   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has defined MAC address 52:54:00:b6:81:fa in network mk-kubernetes-upgrade-619300
	I0924 00:48:52.909948   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Ensuring network default is active
	I0924 00:48:52.910264   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Ensuring network mk-kubernetes-upgrade-619300 is active
	I0924 00:48:52.910725   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Getting domain xml...
	I0924 00:48:52.911402   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Creating domain...
	I0924 00:48:54.201385   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Waiting to get IP...
	I0924 00:48:54.202379   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has defined MAC address 52:54:00:b6:81:fa in network mk-kubernetes-upgrade-619300
	I0924 00:48:54.202964   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | unable to find current IP address of domain kubernetes-upgrade-619300 in network mk-kubernetes-upgrade-619300
	I0924 00:48:54.202993   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | I0924 00:48:54.202911   51477 retry.go:31] will retry after 249.971028ms: waiting for machine to come up
	I0924 00:48:54.454738   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has defined MAC address 52:54:00:b6:81:fa in network mk-kubernetes-upgrade-619300
	I0924 00:48:54.455364   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | unable to find current IP address of domain kubernetes-upgrade-619300 in network mk-kubernetes-upgrade-619300
	I0924 00:48:54.455387   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | I0924 00:48:54.455281   51477 retry.go:31] will retry after 388.838359ms: waiting for machine to come up
	I0924 00:48:54.846024   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has defined MAC address 52:54:00:b6:81:fa in network mk-kubernetes-upgrade-619300
	I0924 00:48:54.846428   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | unable to find current IP address of domain kubernetes-upgrade-619300 in network mk-kubernetes-upgrade-619300
	I0924 00:48:54.846448   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | I0924 00:48:54.846384   51477 retry.go:31] will retry after 300.350277ms: waiting for machine to come up
	I0924 00:48:55.147823   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has defined MAC address 52:54:00:b6:81:fa in network mk-kubernetes-upgrade-619300
	I0924 00:48:55.148374   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | unable to find current IP address of domain kubernetes-upgrade-619300 in network mk-kubernetes-upgrade-619300
	I0924 00:48:55.148402   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | I0924 00:48:55.148318   51477 retry.go:31] will retry after 423.03418ms: waiting for machine to come up
	I0924 00:48:55.572789   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has defined MAC address 52:54:00:b6:81:fa in network mk-kubernetes-upgrade-619300
	I0924 00:48:55.573342   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | unable to find current IP address of domain kubernetes-upgrade-619300 in network mk-kubernetes-upgrade-619300
	I0924 00:48:55.573373   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | I0924 00:48:55.573286   51477 retry.go:31] will retry after 588.084418ms: waiting for machine to come up
	I0924 00:48:56.163152   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has defined MAC address 52:54:00:b6:81:fa in network mk-kubernetes-upgrade-619300
	I0924 00:48:56.163719   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | unable to find current IP address of domain kubernetes-upgrade-619300 in network mk-kubernetes-upgrade-619300
	I0924 00:48:56.163758   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | I0924 00:48:56.163685   51477 retry.go:31] will retry after 697.538589ms: waiting for machine to come up
	I0924 00:48:56.863238   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has defined MAC address 52:54:00:b6:81:fa in network mk-kubernetes-upgrade-619300
	I0924 00:48:56.863792   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | unable to find current IP address of domain kubernetes-upgrade-619300 in network mk-kubernetes-upgrade-619300
	I0924 00:48:56.863840   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | I0924 00:48:56.863699   51477 retry.go:31] will retry after 846.972275ms: waiting for machine to come up
	I0924 00:48:57.712267   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has defined MAC address 52:54:00:b6:81:fa in network mk-kubernetes-upgrade-619300
	I0924 00:48:57.712823   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | unable to find current IP address of domain kubernetes-upgrade-619300 in network mk-kubernetes-upgrade-619300
	I0924 00:48:57.712850   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | I0924 00:48:57.712778   51477 retry.go:31] will retry after 1.058303343s: waiting for machine to come up
	I0924 00:48:58.773215   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has defined MAC address 52:54:00:b6:81:fa in network mk-kubernetes-upgrade-619300
	I0924 00:48:58.773688   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | unable to find current IP address of domain kubernetes-upgrade-619300 in network mk-kubernetes-upgrade-619300
	I0924 00:48:58.773711   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | I0924 00:48:58.773641   51477 retry.go:31] will retry after 1.440935057s: waiting for machine to come up
	I0924 00:49:00.216433   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has defined MAC address 52:54:00:b6:81:fa in network mk-kubernetes-upgrade-619300
	I0924 00:49:00.216912   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | unable to find current IP address of domain kubernetes-upgrade-619300 in network mk-kubernetes-upgrade-619300
	I0924 00:49:00.216937   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | I0924 00:49:00.216863   51477 retry.go:31] will retry after 1.657090143s: waiting for machine to come up
	I0924 00:49:01.875333   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has defined MAC address 52:54:00:b6:81:fa in network mk-kubernetes-upgrade-619300
	I0924 00:49:01.875866   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | unable to find current IP address of domain kubernetes-upgrade-619300 in network mk-kubernetes-upgrade-619300
	I0924 00:49:01.875897   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | I0924 00:49:01.875807   51477 retry.go:31] will retry after 2.405837648s: waiting for machine to come up
	I0924 00:49:04.283613   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has defined MAC address 52:54:00:b6:81:fa in network mk-kubernetes-upgrade-619300
	I0924 00:49:04.284119   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | unable to find current IP address of domain kubernetes-upgrade-619300 in network mk-kubernetes-upgrade-619300
	I0924 00:49:04.284156   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | I0924 00:49:04.284089   51477 retry.go:31] will retry after 3.480825016s: waiting for machine to come up
	I0924 00:49:07.766911   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has defined MAC address 52:54:00:b6:81:fa in network mk-kubernetes-upgrade-619300
	I0924 00:49:07.767485   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | unable to find current IP address of domain kubernetes-upgrade-619300 in network mk-kubernetes-upgrade-619300
	I0924 00:49:07.767514   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | I0924 00:49:07.767435   51477 retry.go:31] will retry after 3.522169862s: waiting for machine to come up
	I0924 00:49:11.293431   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has defined MAC address 52:54:00:b6:81:fa in network mk-kubernetes-upgrade-619300
	I0924 00:49:11.293982   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | unable to find current IP address of domain kubernetes-upgrade-619300 in network mk-kubernetes-upgrade-619300
	I0924 00:49:11.294001   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | I0924 00:49:11.293936   51477 retry.go:31] will retry after 4.630780135s: waiting for machine to come up
	I0924 00:49:15.929377   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has defined MAC address 52:54:00:b6:81:fa in network mk-kubernetes-upgrade-619300
	I0924 00:49:15.929944   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Found IP for machine: 192.168.39.119
	I0924 00:49:15.929972   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has current primary IP address 192.168.39.119 and MAC address 52:54:00:b6:81:fa in network mk-kubernetes-upgrade-619300
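
The block of "will retry after ..." lines above is the driver polling the libvirt network's DHCP leases for the new domain's IP; judging by the intervals, the delay grows and carries some jitter on each attempt until the lease for 192.168.39.119 appears. A minimal sketch of that retry shape, not minikube's retry package, with a stand-in lookup function and made-up bounds:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a placeholder for querying the network's DHCP leases.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP retries lookupIP with a growing, jittered delay, similar in shape
// to the intervals printed in the log above.
func waitForIP(maxAttempts int) (string, error) {
	delay := 250 * time.Millisecond
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("attempt %d: will retry after %s: waiting for machine to come up\n", attempt, sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return "", fmt.Errorf("machine did not get an IP after %d attempts", maxAttempts)
}

func main() {
	if _, err := waitForIP(15); err != nil {
		fmt.Println(err)
	}
}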
	I0924 00:49:15.929993   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Reserving static IP address...
	I0924 00:49:15.930339   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-619300", mac: "52:54:00:b6:81:fa", ip: "192.168.39.119"} in network mk-kubernetes-upgrade-619300
	I0924 00:49:16.012441   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | Getting to WaitForSSH function...
	I0924 00:49:16.012476   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Reserved static IP address: 192.168.39.119
	I0924 00:49:16.012491   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Waiting for SSH to be available...
	I0924 00:49:16.015204   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has defined MAC address 52:54:00:b6:81:fa in network mk-kubernetes-upgrade-619300
	I0924 00:49:16.015799   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:81:fa", ip: ""} in network mk-kubernetes-upgrade-619300: {Iface:virbr1 ExpiryTime:2024-09-24 01:49:07 +0000 UTC Type:0 Mac:52:54:00:b6:81:fa Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b6:81:fa}
	I0924 00:49:16.015841   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has defined IP address 192.168.39.119 and MAC address 52:54:00:b6:81:fa in network mk-kubernetes-upgrade-619300
	I0924 00:49:16.015993   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | Using SSH client type: external
	I0924 00:49:16.016021   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | Using SSH private key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/kubernetes-upgrade-619300/id_rsa (-rw-------)
	I0924 00:49:16.016060   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.119 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19696-7623/.minikube/machines/kubernetes-upgrade-619300/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 00:49:16.016074   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | About to run SSH command:
	I0924 00:49:16.016118   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | exit 0
	I0924 00:49:16.140249   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | SSH cmd err, output: <nil>: 
	I0924 00:49:16.140653   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) KVM machine creation complete!
	I0924 00:49:16.140982   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetConfigRaw
	I0924 00:49:16.141666   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .DriverName
	I0924 00:49:16.141868   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .DriverName
	I0924 00:49:16.142058   51216 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0924 00:49:16.142090   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetState
	I0924 00:49:16.143655   51216 main.go:141] libmachine: Detecting operating system of created instance...
	I0924 00:49:16.143671   51216 main.go:141] libmachine: Waiting for SSH to be available...
	I0924 00:49:16.143678   51216 main.go:141] libmachine: Getting to WaitForSSH function...
	I0924 00:49:16.143687   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetSSHHostname
	I0924 00:49:16.146181   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has defined MAC address 52:54:00:b6:81:fa in network mk-kubernetes-upgrade-619300
	I0924 00:49:16.146594   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:81:fa", ip: ""} in network mk-kubernetes-upgrade-619300: {Iface:virbr1 ExpiryTime:2024-09-24 01:49:07 +0000 UTC Type:0 Mac:52:54:00:b6:81:fa Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:kubernetes-upgrade-619300 Clientid:01:52:54:00:b6:81:fa}
	I0924 00:49:16.146633   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has defined IP address 192.168.39.119 and MAC address 52:54:00:b6:81:fa in network mk-kubernetes-upgrade-619300
	I0924 00:49:16.146728   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetSSHPort
	I0924 00:49:16.146893   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetSSHKeyPath
	I0924 00:49:16.147039   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetSSHKeyPath
	I0924 00:49:16.147164   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetSSHUsername
	I0924 00:49:16.147427   51216 main.go:141] libmachine: Using SSH client type: native
	I0924 00:49:16.147689   51216 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.119 22 <nil> <nil>}
	I0924 00:49:16.147705   51216 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0924 00:49:16.251501   51216 main.go:141] libmachine: SSH cmd err, output: <nil>: 
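
Both WaitForSSH probes above boil down to running the no-op command "exit 0" over SSH as user docker against 192.168.39.119:22 with the generated id_rsa key, and treating a clean exit as "machine is reachable". A self-contained sketch of that probe, assuming the golang.org/x/crypto/ssh package rather than minikube's own helpers (host, user and key path below are simply the values reported in the log):

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// sshReady dials the machine and runs "exit 0", the same no-op probe used above.
func sshReady(addr, user, keyPath string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no in the log
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	return session.Run("exit 0")
}

func main() {
	err := sshReady("192.168.39.119:22", "docker",
		"/home/jenkins/minikube-integration/19696-7623/.minikube/machines/kubernetes-upgrade-619300/id_rsa")
	fmt.Println("ssh ready:", err == nil)
}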
	I0924 00:49:16.251526   51216 main.go:141] libmachine: Detecting the provisioner...
	I0924 00:49:16.251538   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetSSHHostname
	I0924 00:49:16.254528   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has defined MAC address 52:54:00:b6:81:fa in network mk-kubernetes-upgrade-619300
	I0924 00:49:16.254995   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:81:fa", ip: ""} in network mk-kubernetes-upgrade-619300: {Iface:virbr1 ExpiryTime:2024-09-24 01:49:07 +0000 UTC Type:0 Mac:52:54:00:b6:81:fa Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:kubernetes-upgrade-619300 Clientid:01:52:54:00:b6:81:fa}
	I0924 00:49:16.255025   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has defined IP address 192.168.39.119 and MAC address 52:54:00:b6:81:fa in network mk-kubernetes-upgrade-619300
	I0924 00:49:16.255188   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetSSHPort
	I0924 00:49:16.255381   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetSSHKeyPath
	I0924 00:49:16.255595   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetSSHKeyPath
	I0924 00:49:16.255747   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetSSHUsername
	I0924 00:49:16.255902   51216 main.go:141] libmachine: Using SSH client type: native
	I0924 00:49:16.256134   51216 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.119 22 <nil> <nil>}
	I0924 00:49:16.256155   51216 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0924 00:49:16.357151   51216 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0924 00:49:16.357264   51216 main.go:141] libmachine: found compatible host: buildroot
	I0924 00:49:16.357280   51216 main.go:141] libmachine: Provisioning with buildroot...
	I0924 00:49:16.357290   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetMachineName
	I0924 00:49:16.357601   51216 buildroot.go:166] provisioning hostname "kubernetes-upgrade-619300"
	I0924 00:49:16.357628   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetMachineName
	I0924 00:49:16.357791   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetSSHHostname
	I0924 00:49:16.360687   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has defined MAC address 52:54:00:b6:81:fa in network mk-kubernetes-upgrade-619300
	I0924 00:49:16.361068   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:81:fa", ip: ""} in network mk-kubernetes-upgrade-619300: {Iface:virbr1 ExpiryTime:2024-09-24 01:49:07 +0000 UTC Type:0 Mac:52:54:00:b6:81:fa Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:kubernetes-upgrade-619300 Clientid:01:52:54:00:b6:81:fa}
	I0924 00:49:16.361110   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has defined IP address 192.168.39.119 and MAC address 52:54:00:b6:81:fa in network mk-kubernetes-upgrade-619300
	I0924 00:49:16.361334   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetSSHPort
	I0924 00:49:16.361537   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetSSHKeyPath
	I0924 00:49:16.361746   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetSSHKeyPath
	I0924 00:49:16.361908   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetSSHUsername
	I0924 00:49:16.362101   51216 main.go:141] libmachine: Using SSH client type: native
	I0924 00:49:16.362287   51216 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.119 22 <nil> <nil>}
	I0924 00:49:16.362305   51216 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-619300 && echo "kubernetes-upgrade-619300" | sudo tee /etc/hostname
	I0924 00:49:16.479235   51216 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-619300
	
	I0924 00:49:16.479272   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetSSHHostname
	I0924 00:49:16.482508   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has defined MAC address 52:54:00:b6:81:fa in network mk-kubernetes-upgrade-619300
	I0924 00:49:16.482921   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:81:fa", ip: ""} in network mk-kubernetes-upgrade-619300: {Iface:virbr1 ExpiryTime:2024-09-24 01:49:07 +0000 UTC Type:0 Mac:52:54:00:b6:81:fa Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:kubernetes-upgrade-619300 Clientid:01:52:54:00:b6:81:fa}
	I0924 00:49:16.482953   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has defined IP address 192.168.39.119 and MAC address 52:54:00:b6:81:fa in network mk-kubernetes-upgrade-619300
	I0924 00:49:16.483159   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetSSHPort
	I0924 00:49:16.483362   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetSSHKeyPath
	I0924 00:49:16.483521   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetSSHKeyPath
	I0924 00:49:16.483705   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetSSHUsername
	I0924 00:49:16.483881   51216 main.go:141] libmachine: Using SSH client type: native
	I0924 00:49:16.484042   51216 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.119 22 <nil> <nil>}
	I0924 00:49:16.484058   51216 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-619300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-619300/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-619300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 00:49:16.589426   51216 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 00:49:16.589468   51216 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19696-7623/.minikube CaCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19696-7623/.minikube}
	I0924 00:49:16.589494   51216 buildroot.go:174] setting up certificates
	I0924 00:49:16.589520   51216 provision.go:84] configureAuth start
	I0924 00:49:16.589538   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetMachineName
	I0924 00:49:16.589858   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetIP
	I0924 00:49:16.592812   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has defined MAC address 52:54:00:b6:81:fa in network mk-kubernetes-upgrade-619300
	I0924 00:49:16.593125   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:81:fa", ip: ""} in network mk-kubernetes-upgrade-619300: {Iface:virbr1 ExpiryTime:2024-09-24 01:49:07 +0000 UTC Type:0 Mac:52:54:00:b6:81:fa Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:kubernetes-upgrade-619300 Clientid:01:52:54:00:b6:81:fa}
	I0924 00:49:16.593167   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has defined IP address 192.168.39.119 and MAC address 52:54:00:b6:81:fa in network mk-kubernetes-upgrade-619300
	I0924 00:49:16.593333   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetSSHHostname
	I0924 00:49:16.595614   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has defined MAC address 52:54:00:b6:81:fa in network mk-kubernetes-upgrade-619300
	I0924 00:49:16.595955   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:81:fa", ip: ""} in network mk-kubernetes-upgrade-619300: {Iface:virbr1 ExpiryTime:2024-09-24 01:49:07 +0000 UTC Type:0 Mac:52:54:00:b6:81:fa Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:kubernetes-upgrade-619300 Clientid:01:52:54:00:b6:81:fa}
	I0924 00:49:16.596004   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has defined IP address 192.168.39.119 and MAC address 52:54:00:b6:81:fa in network mk-kubernetes-upgrade-619300
	I0924 00:49:16.596149   51216 provision.go:143] copyHostCerts
	I0924 00:49:16.596205   51216 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem, removing ...
	I0924 00:49:16.596221   51216 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0924 00:49:16.596272   51216 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem (1082 bytes)
	I0924 00:49:16.596424   51216 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem, removing ...
	I0924 00:49:16.596436   51216 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0924 00:49:16.596483   51216 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem (1123 bytes)
	I0924 00:49:16.596585   51216 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem, removing ...
	I0924 00:49:16.596598   51216 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0924 00:49:16.596633   51216 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem (1679 bytes)
	I0924 00:49:16.596699   51216 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-619300 san=[127.0.0.1 192.168.39.119 kubernetes-upgrade-619300 localhost minikube]
	I0924 00:49:17.035935   51216 provision.go:177] copyRemoteCerts
	I0924 00:49:17.035999   51216 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 00:49:17.036025   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetSSHHostname
	I0924 00:49:17.039325   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has defined MAC address 52:54:00:b6:81:fa in network mk-kubernetes-upgrade-619300
	I0924 00:49:17.039792   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:81:fa", ip: ""} in network mk-kubernetes-upgrade-619300: {Iface:virbr1 ExpiryTime:2024-09-24 01:49:07 +0000 UTC Type:0 Mac:52:54:00:b6:81:fa Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:kubernetes-upgrade-619300 Clientid:01:52:54:00:b6:81:fa}
	I0924 00:49:17.039826   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has defined IP address 192.168.39.119 and MAC address 52:54:00:b6:81:fa in network mk-kubernetes-upgrade-619300
	I0924 00:49:17.040016   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetSSHPort
	I0924 00:49:17.040242   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetSSHKeyPath
	I0924 00:49:17.040423   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetSSHUsername
	I0924 00:49:17.040561   51216 sshutil.go:53] new ssh client: &{IP:192.168.39.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/kubernetes-upgrade-619300/id_rsa Username:docker}
	I0924 00:49:17.118751   51216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0924 00:49:17.143742   51216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0924 00:49:17.170760   51216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0924 00:49:17.198337   51216 provision.go:87] duration metric: took 608.797918ms to configureAuth
	I0924 00:49:17.198371   51216 buildroot.go:189] setting minikube options for container-runtime
	I0924 00:49:17.198581   51216 config.go:182] Loaded profile config "kubernetes-upgrade-619300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0924 00:49:17.198657   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetSSHHostname
	I0924 00:49:17.201616   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has defined MAC address 52:54:00:b6:81:fa in network mk-kubernetes-upgrade-619300
	I0924 00:49:17.202048   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:81:fa", ip: ""} in network mk-kubernetes-upgrade-619300: {Iface:virbr1 ExpiryTime:2024-09-24 01:49:07 +0000 UTC Type:0 Mac:52:54:00:b6:81:fa Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:kubernetes-upgrade-619300 Clientid:01:52:54:00:b6:81:fa}
	I0924 00:49:17.202071   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has defined IP address 192.168.39.119 and MAC address 52:54:00:b6:81:fa in network mk-kubernetes-upgrade-619300
	I0924 00:49:17.202354   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetSSHPort
	I0924 00:49:17.202579   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetSSHKeyPath
	I0924 00:49:17.202776   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetSSHKeyPath
	I0924 00:49:17.202939   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetSSHUsername
	I0924 00:49:17.203141   51216 main.go:141] libmachine: Using SSH client type: native
	I0924 00:49:17.203353   51216 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.119 22 <nil> <nil>}
	I0924 00:49:17.203379   51216 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 00:49:17.456272   51216 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
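The step above drops a one-line sysconfig fragment onto the guest and restarts CRI-O so the runtime treats the in-cluster service CIDR as an insecure registry. A minimal manual equivalent, assuming root SSH access to the node and using the value shown in the log:
	# Sketch: the sysconfig fragment minikube writes over SSH, then a CRI-O restart to pick it up.
	sudo mkdir -p /etc/sysconfig
	printf "%s\n" "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube
	sudo systemctl restart crio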
	I0924 00:49:17.456303   51216 main.go:141] libmachine: Checking connection to Docker...
	I0924 00:49:17.456316   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetURL
	I0924 00:49:17.457730   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | Using libvirt version 6000000
	I0924 00:49:17.460623   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has defined MAC address 52:54:00:b6:81:fa in network mk-kubernetes-upgrade-619300
	I0924 00:49:17.460985   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:81:fa", ip: ""} in network mk-kubernetes-upgrade-619300: {Iface:virbr1 ExpiryTime:2024-09-24 01:49:07 +0000 UTC Type:0 Mac:52:54:00:b6:81:fa Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:kubernetes-upgrade-619300 Clientid:01:52:54:00:b6:81:fa}
	I0924 00:49:17.461007   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has defined IP address 192.168.39.119 and MAC address 52:54:00:b6:81:fa in network mk-kubernetes-upgrade-619300
	I0924 00:49:17.461145   51216 main.go:141] libmachine: Docker is up and running!
	I0924 00:49:17.461161   51216 main.go:141] libmachine: Reticulating splines...
	I0924 00:49:17.461169   51216 client.go:171] duration metric: took 24.934807906s to LocalClient.Create
	I0924 00:49:17.461193   51216 start.go:167] duration metric: took 24.934880086s to libmachine.API.Create "kubernetes-upgrade-619300"
	I0924 00:49:17.461205   51216 start.go:293] postStartSetup for "kubernetes-upgrade-619300" (driver="kvm2")
	I0924 00:49:17.461221   51216 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 00:49:17.461249   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .DriverName
	I0924 00:49:17.461508   51216 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 00:49:17.461529   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetSSHHostname
	I0924 00:49:17.463866   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has defined MAC address 52:54:00:b6:81:fa in network mk-kubernetes-upgrade-619300
	I0924 00:49:17.464235   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:81:fa", ip: ""} in network mk-kubernetes-upgrade-619300: {Iface:virbr1 ExpiryTime:2024-09-24 01:49:07 +0000 UTC Type:0 Mac:52:54:00:b6:81:fa Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:kubernetes-upgrade-619300 Clientid:01:52:54:00:b6:81:fa}
	I0924 00:49:17.464262   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has defined IP address 192.168.39.119 and MAC address 52:54:00:b6:81:fa in network mk-kubernetes-upgrade-619300
	I0924 00:49:17.464403   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetSSHPort
	I0924 00:49:17.464554   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetSSHKeyPath
	I0924 00:49:17.464734   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetSSHUsername
	I0924 00:49:17.464919   51216 sshutil.go:53] new ssh client: &{IP:192.168.39.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/kubernetes-upgrade-619300/id_rsa Username:docker}
	I0924 00:49:17.547207   51216 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 00:49:17.551438   51216 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 00:49:17.551472   51216 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/addons for local assets ...
	I0924 00:49:17.551547   51216 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/files for local assets ...
	I0924 00:49:17.551685   51216 filesync.go:149] local asset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> 147932.pem in /etc/ssl/certs
	I0924 00:49:17.551833   51216 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 00:49:17.562845   51216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /etc/ssl/certs/147932.pem (1708 bytes)
	I0924 00:49:17.595140   51216 start.go:296] duration metric: took 133.915556ms for postStartSetup
	I0924 00:49:17.595205   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetConfigRaw
	I0924 00:49:17.596075   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetIP
	I0924 00:49:17.600106   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has defined MAC address 52:54:00:b6:81:fa in network mk-kubernetes-upgrade-619300
	I0924 00:49:17.600746   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:81:fa", ip: ""} in network mk-kubernetes-upgrade-619300: {Iface:virbr1 ExpiryTime:2024-09-24 01:49:07 +0000 UTC Type:0 Mac:52:54:00:b6:81:fa Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:kubernetes-upgrade-619300 Clientid:01:52:54:00:b6:81:fa}
	I0924 00:49:17.600789   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has defined IP address 192.168.39.119 and MAC address 52:54:00:b6:81:fa in network mk-kubernetes-upgrade-619300
	I0924 00:49:17.601209   51216 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kubernetes-upgrade-619300/config.json ...
	I0924 00:49:17.601446   51216 start.go:128] duration metric: took 25.096312288s to createHost
	I0924 00:49:17.601486   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetSSHHostname
	I0924 00:49:17.604495   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has defined MAC address 52:54:00:b6:81:fa in network mk-kubernetes-upgrade-619300
	I0924 00:49:17.604795   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:81:fa", ip: ""} in network mk-kubernetes-upgrade-619300: {Iface:virbr1 ExpiryTime:2024-09-24 01:49:07 +0000 UTC Type:0 Mac:52:54:00:b6:81:fa Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:kubernetes-upgrade-619300 Clientid:01:52:54:00:b6:81:fa}
	I0924 00:49:17.604828   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has defined IP address 192.168.39.119 and MAC address 52:54:00:b6:81:fa in network mk-kubernetes-upgrade-619300
	I0924 00:49:17.604984   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetSSHPort
	I0924 00:49:17.605192   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetSSHKeyPath
	I0924 00:49:17.605390   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetSSHKeyPath
	I0924 00:49:17.605518   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetSSHUsername
	I0924 00:49:17.605706   51216 main.go:141] libmachine: Using SSH client type: native
	I0924 00:49:17.605910   51216 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.119 22 <nil> <nil>}
	I0924 00:49:17.605932   51216 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 00:49:17.709996   51216 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727138957.670486433
	
	I0924 00:49:17.710022   51216 fix.go:216] guest clock: 1727138957.670486433
	I0924 00:49:17.710031   51216 fix.go:229] Guest: 2024-09-24 00:49:17.670486433 +0000 UTC Remote: 2024-09-24 00:49:17.601458492 +0000 UTC m=+40.738798337 (delta=69.027941ms)
	I0924 00:49:17.710093   51216 fix.go:200] guest clock delta is within tolerance: 69.027941ms
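The clock check runs `date +%s.%N` on the guest over SSH and compares it with the host clock; here the ~69ms delta is within minikube's tolerance. A sketch of the same comparison done by hand (key path, user and IP taken from the ssh client lines above):
	# Sketch: reproduce the guest/host clock comparison manually.
	guest=$(ssh -i /home/jenkins/minikube-integration/19696-7623/.minikube/machines/kubernetes-upgrade-619300/id_rsa \
	        docker@192.168.39.119 'date +%s.%N')
	host=$(date +%s.%N)
	awk -v g="$guest" -v h="$host" 'BEGIN { printf "delta: %+.6fs\n", h - g }'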
	I0924 00:49:17.710105   51216 start.go:83] releasing machines lock for "kubernetes-upgrade-619300", held for 25.205156016s
	I0924 00:49:17.710144   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .DriverName
	I0924 00:49:17.710395   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetIP
	I0924 00:49:17.713701   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has defined MAC address 52:54:00:b6:81:fa in network mk-kubernetes-upgrade-619300
	I0924 00:49:17.714092   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:81:fa", ip: ""} in network mk-kubernetes-upgrade-619300: {Iface:virbr1 ExpiryTime:2024-09-24 01:49:07 +0000 UTC Type:0 Mac:52:54:00:b6:81:fa Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:kubernetes-upgrade-619300 Clientid:01:52:54:00:b6:81:fa}
	I0924 00:49:17.714125   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has defined IP address 192.168.39.119 and MAC address 52:54:00:b6:81:fa in network mk-kubernetes-upgrade-619300
	I0924 00:49:17.714341   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .DriverName
	I0924 00:49:17.714927   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .DriverName
	I0924 00:49:17.715128   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .DriverName
	I0924 00:49:17.715245   51216 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 00:49:17.715302   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetSSHHostname
	I0924 00:49:17.715406   51216 ssh_runner.go:195] Run: cat /version.json
	I0924 00:49:17.715429   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetSSHHostname
	I0924 00:49:17.718830   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has defined MAC address 52:54:00:b6:81:fa in network mk-kubernetes-upgrade-619300
	I0924 00:49:17.718933   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has defined MAC address 52:54:00:b6:81:fa in network mk-kubernetes-upgrade-619300
	I0924 00:49:17.719385   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:81:fa", ip: ""} in network mk-kubernetes-upgrade-619300: {Iface:virbr1 ExpiryTime:2024-09-24 01:49:07 +0000 UTC Type:0 Mac:52:54:00:b6:81:fa Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:kubernetes-upgrade-619300 Clientid:01:52:54:00:b6:81:fa}
	I0924 00:49:17.719417   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:81:fa", ip: ""} in network mk-kubernetes-upgrade-619300: {Iface:virbr1 ExpiryTime:2024-09-24 01:49:07 +0000 UTC Type:0 Mac:52:54:00:b6:81:fa Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:kubernetes-upgrade-619300 Clientid:01:52:54:00:b6:81:fa}
	I0924 00:49:17.719446   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has defined IP address 192.168.39.119 and MAC address 52:54:00:b6:81:fa in network mk-kubernetes-upgrade-619300
	I0924 00:49:17.719468   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has defined IP address 192.168.39.119 and MAC address 52:54:00:b6:81:fa in network mk-kubernetes-upgrade-619300
	I0924 00:49:17.719762   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetSSHPort
	I0924 00:49:17.719925   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetSSHKeyPath
	I0924 00:49:17.719928   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetSSHPort
	I0924 00:49:17.720145   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetSSHKeyPath
	I0924 00:49:17.720148   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetSSHUsername
	I0924 00:49:17.720316   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetSSHUsername
	I0924 00:49:17.720352   51216 sshutil.go:53] new ssh client: &{IP:192.168.39.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/kubernetes-upgrade-619300/id_rsa Username:docker}
	I0924 00:49:17.720470   51216 sshutil.go:53] new ssh client: &{IP:192.168.39.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/kubernetes-upgrade-619300/id_rsa Username:docker}
	I0924 00:49:17.838887   51216 ssh_runner.go:195] Run: systemctl --version
	I0924 00:49:17.845351   51216 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 00:49:18.017686   51216 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 00:49:18.025252   51216 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 00:49:18.025319   51216 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 00:49:18.041814   51216 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 00:49:18.041837   51216 start.go:495] detecting cgroup driver to use...
	I0924 00:49:18.041897   51216 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 00:49:18.059124   51216 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 00:49:18.073887   51216 docker.go:217] disabling cri-docker service (if available) ...
	I0924 00:49:18.073951   51216 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 00:49:18.088807   51216 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 00:49:18.104136   51216 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 00:49:18.227394   51216 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 00:49:18.397622   51216 docker.go:233] disabling docker service ...
	I0924 00:49:18.397699   51216 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 00:49:18.412152   51216 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 00:49:18.425806   51216 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 00:49:18.572893   51216 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 00:49:18.732270   51216 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 00:49:18.747535   51216 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 00:49:18.770267   51216 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0924 00:49:18.770444   51216 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:49:18.781500   51216 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 00:49:18.781601   51216 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:49:18.795987   51216 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:49:18.807052   51216 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
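Two pieces of runtime configuration happen here: /etc/crictl.yaml (written just above) points crictl at CRI-O's socket, and the three sed edits set the pause image, cgroup manager and conmon cgroup in CRI-O's drop-in config. A small sketch to confirm the resulting values on the node (expected output taken from the log):
	# Sketch: check the crictl endpoint and the three CRI-O values edited above.
	cat /etc/crictl.yaml
	# runtime-endpoint: unix:///var/run/crio/crio.sock
	grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# expected (per the log):
	#   pause_image = "registry.k8s.io/pause:3.2"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"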
	I0924 00:49:18.818497   51216 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 00:49:18.832407   51216 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 00:49:18.844639   51216 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 00:49:18.844703   51216 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 00:49:18.860237   51216 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 00:49:18.870479   51216 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 00:49:18.993759   51216 ssh_runner.go:195] Run: sudo systemctl restart crio
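When the bridge-netfilter sysctl is missing (as the status-255 error above shows), minikube falls back to loading br_netfilter, enables IPv4 forwarding, reloads systemd and restarts CRI-O. The same sequence by hand would look roughly like:
	# Sketch: manual equivalent of the netfilter/forwarding preparation above.
	sudo sysctl net.bridge.bridge-nf-call-iptables || sudo modprobe br_netfilter
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	sudo systemctl daemon-reload
	sudo systemctl restart crio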
	I0924 00:49:19.094082   51216 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 00:49:19.094165   51216 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 00:49:19.099164   51216 start.go:563] Will wait 60s for crictl version
	I0924 00:49:19.099218   51216 ssh_runner.go:195] Run: which crictl
	I0924 00:49:19.103640   51216 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 00:49:19.145238   51216 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 00:49:19.145331   51216 ssh_runner.go:195] Run: crio --version
	I0924 00:49:19.175502   51216 ssh_runner.go:195] Run: crio --version
	I0924 00:49:19.216888   51216 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0924 00:49:19.218389   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetIP
	I0924 00:49:19.222277   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has defined MAC address 52:54:00:b6:81:fa in network mk-kubernetes-upgrade-619300
	I0924 00:49:19.222721   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:81:fa", ip: ""} in network mk-kubernetes-upgrade-619300: {Iface:virbr1 ExpiryTime:2024-09-24 01:49:07 +0000 UTC Type:0 Mac:52:54:00:b6:81:fa Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:kubernetes-upgrade-619300 Clientid:01:52:54:00:b6:81:fa}
	I0924 00:49:19.222761   51216 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has defined IP address 192.168.39.119 and MAC address 52:54:00:b6:81:fa in network mk-kubernetes-upgrade-619300
	I0924 00:49:19.223004   51216 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0924 00:49:19.227221   51216 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 00:49:19.241866   51216 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-619300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-619300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.119 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 00:49:19.241990   51216 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0924 00:49:19.242033   51216 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 00:49:19.275111   51216 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0924 00:49:19.275175   51216 ssh_runner.go:195] Run: which lz4
	I0924 00:49:19.279151   51216 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 00:49:19.283606   51216 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 00:49:19.283648   51216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0924 00:49:20.896650   51216 crio.go:462] duration metric: took 1.617538305s to copy over tarball
	I0924 00:49:20.896722   51216 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0924 00:49:23.534681   51216 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.637927191s)
	I0924 00:49:23.534716   51216 crio.go:469] duration metric: took 2.638038196s to extract the tarball
	I0924 00:49:23.534726   51216 ssh_runner.go:146] rm: /preloaded.tar.lz4
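The preload path copies the cached lz4 tarball to /preloaded.tar.lz4 on the guest, unpacks it into /var with the xattr-preserving tar invocation shown above, then removes the tarball. The in-guest commands behind that step, as a sketch:
	# Sketch: the in-guest commands behind the preload step above (run over SSH as the docker user).
	stat -c "%s %y" /preloaded.tar.lz4                  # present once the tarball has been copied over
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm -f /preloaded.tar.lz4
	sudo crictl images --output json                    # minikube re-checks the image list right after this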
	I0924 00:49:23.579432   51216 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 00:49:23.629256   51216 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0924 00:49:23.629283   51216 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0924 00:49:23.629346   51216 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 00:49:23.629375   51216 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0924 00:49:23.629404   51216 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0924 00:49:23.629420   51216 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0924 00:49:23.629438   51216 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0924 00:49:23.629452   51216 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 00:49:23.629413   51216 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0924 00:49:23.629501   51216 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0924 00:49:23.630874   51216 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0924 00:49:23.630882   51216 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0924 00:49:23.630889   51216 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0924 00:49:23.630894   51216 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0924 00:49:23.630902   51216 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 00:49:23.630931   51216 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0924 00:49:23.630936   51216 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0924 00:49:23.630983   51216 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 00:49:23.876912   51216 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0924 00:49:23.917454   51216 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0924 00:49:23.917501   51216 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0924 00:49:23.917551   51216 ssh_runner.go:195] Run: which crictl
	I0924 00:49:23.919795   51216 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0924 00:49:23.921643   51216 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0924 00:49:23.925692   51216 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0924 00:49:23.936516   51216 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0924 00:49:23.940608   51216 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0924 00:49:23.948513   51216 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0924 00:49:23.950879   51216 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 00:49:24.038094   51216 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0924 00:49:24.038141   51216 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0924 00:49:24.038185   51216 ssh_runner.go:195] Run: which crictl
	I0924 00:49:24.038190   51216 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0924 00:49:24.042159   51216 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0924 00:49:24.042193   51216 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0924 00:49:24.042238   51216 ssh_runner.go:195] Run: which crictl
	I0924 00:49:24.133829   51216 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0924 00:49:24.133861   51216 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0924 00:49:24.133888   51216 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0924 00:49:24.133895   51216 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0924 00:49:24.133937   51216 ssh_runner.go:195] Run: which crictl
	I0924 00:49:24.133938   51216 ssh_runner.go:195] Run: which crictl
	I0924 00:49:24.148953   51216 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0924 00:49:24.149001   51216 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0924 00:49:24.149064   51216 ssh_runner.go:195] Run: which crictl
	I0924 00:49:24.150131   51216 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0924 00:49:24.150162   51216 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 00:49:24.150205   51216 ssh_runner.go:195] Run: which crictl
	I0924 00:49:24.150220   51216 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0924 00:49:24.150292   51216 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0924 00:49:24.150341   51216 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0924 00:49:24.150524   51216 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0924 00:49:24.150577   51216 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0924 00:49:24.265107   51216 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 00:49:24.265115   51216 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0924 00:49:24.265201   51216 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0924 00:49:24.265204   51216 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0924 00:49:24.265288   51216 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0924 00:49:24.265318   51216 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0924 00:49:24.265373   51216 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0924 00:49:24.379677   51216 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0924 00:49:24.379797   51216 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 00:49:24.404380   51216 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0924 00:49:24.404418   51216 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0924 00:49:24.404471   51216 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0924 00:49:24.404496   51216 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0924 00:49:24.459283   51216 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 00:49:24.459397   51216 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0924 00:49:24.549638   51216 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0924 00:49:24.549744   51216 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0924 00:49:24.549770   51216 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0924 00:49:24.549787   51216 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0924 00:49:24.549849   51216 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0924 00:49:24.584714   51216 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0924 00:49:24.872251   51216 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 00:49:25.012728   51216 cache_images.go:92] duration metric: took 1.383428164s to LoadCachedImages
	W0924 00:49:25.012888   51216 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0924 00:49:25.012917   51216 kubeadm.go:934] updating node { 192.168.39.119 8443 v1.20.0 crio true true} ...
	I0924 00:49:25.013051   51216 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-619300 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.119
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-619300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 00:49:25.013169   51216 ssh_runner.go:195] Run: crio config
	I0924 00:49:25.061431   51216 cni.go:84] Creating CNI manager for ""
	I0924 00:49:25.061456   51216 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 00:49:25.061464   51216 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 00:49:25.061483   51216 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.119 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-619300 NodeName:kubernetes-upgrade-619300 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.119"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.119 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0924 00:49:25.061623   51216 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.119
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-619300"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.119
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.119"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 00:49:25.061682   51216 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0924 00:49:25.072805   51216 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 00:49:25.072876   51216 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 00:49:25.083202   51216 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0924 00:49:25.100391   51216 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 00:49:25.117765   51216 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0924 00:49:25.134398   51216 ssh_runner.go:195] Run: grep 192.168.39.119	control-plane.minikube.internal$ /etc/hosts
	I0924 00:49:25.138213   51216 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.119	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
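Both minikube host aliases are injected by rewriting /etc/hosts in place (host.minikube.internal earlier, control-plane.minikube.internal here). A one-liner to verify the result on the node:
	# Sketch: verify the two host aliases injected into /etc/hosts.
	grep -E 'host\.minikube\.internal|control-plane\.minikube\.internal' /etc/hosts
	# expected:
	#   192.168.39.1     host.minikube.internal
	#   192.168.39.119   control-plane.minikube.internal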
	I0924 00:49:25.151216   51216 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 00:49:25.293237   51216 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 00:49:25.311278   51216 certs.go:68] Setting up /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kubernetes-upgrade-619300 for IP: 192.168.39.119
	I0924 00:49:25.311304   51216 certs.go:194] generating shared ca certs ...
	I0924 00:49:25.311325   51216 certs.go:226] acquiring lock for ca certs: {Name:mk91de42c259e96c17f08dd58c70c00821bd0735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:49:25.311496   51216 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key
	I0924 00:49:25.311552   51216 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key
	I0924 00:49:25.311566   51216 certs.go:256] generating profile certs ...
	I0924 00:49:25.311657   51216 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kubernetes-upgrade-619300/client.key
	I0924 00:49:25.311674   51216 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kubernetes-upgrade-619300/client.crt with IP's: []
	I0924 00:49:25.773908   51216 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kubernetes-upgrade-619300/client.crt ...
	I0924 00:49:25.773943   51216 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kubernetes-upgrade-619300/client.crt: {Name:mk9499adda4974f68209fa59f76a25b2ce5648d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:49:25.774142   51216 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kubernetes-upgrade-619300/client.key ...
	I0924 00:49:25.774163   51216 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kubernetes-upgrade-619300/client.key: {Name:mk42d41eacbea4d200f4c3672041928142160606 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:49:25.774269   51216 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kubernetes-upgrade-619300/apiserver.key.6de233c6
	I0924 00:49:25.774291   51216 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kubernetes-upgrade-619300/apiserver.crt.6de233c6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.119]
	I0924 00:49:26.147751   51216 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kubernetes-upgrade-619300/apiserver.crt.6de233c6 ...
	I0924 00:49:26.147784   51216 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kubernetes-upgrade-619300/apiserver.crt.6de233c6: {Name:mk501395ac5738f5c3d03cca44e2a75e5bbb71a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:49:26.147957   51216 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kubernetes-upgrade-619300/apiserver.key.6de233c6 ...
	I0924 00:49:26.147977   51216 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kubernetes-upgrade-619300/apiserver.key.6de233c6: {Name:mk110f3641a3de2999e25e521a14cb128f3ec273 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:49:26.148078   51216 certs.go:381] copying /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kubernetes-upgrade-619300/apiserver.crt.6de233c6 -> /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kubernetes-upgrade-619300/apiserver.crt
	I0924 00:49:26.148177   51216 certs.go:385] copying /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kubernetes-upgrade-619300/apiserver.key.6de233c6 -> /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kubernetes-upgrade-619300/apiserver.key
	I0924 00:49:26.148264   51216 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kubernetes-upgrade-619300/proxy-client.key
	I0924 00:49:26.148286   51216 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kubernetes-upgrade-619300/proxy-client.crt with IP's: []
	I0924 00:49:26.319812   51216 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kubernetes-upgrade-619300/proxy-client.crt ...
	I0924 00:49:26.319844   51216 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kubernetes-upgrade-619300/proxy-client.crt: {Name:mk7425ff4766217e11e5f2ce17710f4c34db3dc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:49:26.320029   51216 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kubernetes-upgrade-619300/proxy-client.key ...
	I0924 00:49:26.320048   51216 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kubernetes-upgrade-619300/proxy-client.key: {Name:mkf589e7e7e07fbd3814abd6a300fb4213dcac0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:49:26.320240   51216 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem (1338 bytes)
	W0924 00:49:26.320300   51216 certs.go:480] ignoring /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793_empty.pem, impossibly tiny 0 bytes
	I0924 00:49:26.320316   51216 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem (1675 bytes)
	I0924 00:49:26.320390   51216 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem (1082 bytes)
	I0924 00:49:26.320428   51216 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem (1123 bytes)
	I0924 00:49:26.320461   51216 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem (1679 bytes)
	I0924 00:49:26.320518   51216 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem (1708 bytes)
	I0924 00:49:26.321093   51216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 00:49:26.355035   51216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 00:49:26.394313   51216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 00:49:26.419616   51216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0924 00:49:26.451402   51216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kubernetes-upgrade-619300/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0924 00:49:26.476547   51216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kubernetes-upgrade-619300/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 00:49:26.504931   51216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kubernetes-upgrade-619300/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 00:49:26.530460   51216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kubernetes-upgrade-619300/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0924 00:49:26.556287   51216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 00:49:26.581942   51216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem --> /usr/share/ca-certificates/14793.pem (1338 bytes)
	I0924 00:49:26.607873   51216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /usr/share/ca-certificates/147932.pem (1708 bytes)
	I0924 00:49:26.634428   51216 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 00:49:26.652324   51216 ssh_runner.go:195] Run: openssl version
	I0924 00:49:26.658157   51216 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 00:49:26.669287   51216 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 00:49:26.674406   51216 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0924 00:49:26.674487   51216 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 00:49:26.680861   51216 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 00:49:26.693455   51216 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14793.pem && ln -fs /usr/share/ca-certificates/14793.pem /etc/ssl/certs/14793.pem"
	I0924 00:49:26.705138   51216 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14793.pem
	I0924 00:49:26.710093   51216 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 23:55 /usr/share/ca-certificates/14793.pem
	I0924 00:49:26.710166   51216 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14793.pem
	I0924 00:49:26.716586   51216 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14793.pem /etc/ssl/certs/51391683.0"
	I0924 00:49:26.727957   51216 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147932.pem && ln -fs /usr/share/ca-certificates/147932.pem /etc/ssl/certs/147932.pem"
	I0924 00:49:26.740178   51216 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147932.pem
	I0924 00:49:26.745093   51216 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 23:55 /usr/share/ca-certificates/147932.pem
	I0924 00:49:26.745159   51216 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147932.pem
	I0924 00:49:26.751949   51216 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147932.pem /etc/ssl/certs/3ec20f2e.0"
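Each CA is linked into /etc/ssl/certs both under its own name and under its OpenSSL subject hash (b5213941.0, 51391683.0 and 3ec20f2e.0 above) so the system trust store resolves it. A sketch of how one of those hash links is derived, using the same openssl invocation the log runs:
	# Sketch: derive a hash-named trust link for the minikube CA.
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	ls -l /etc/ssl/certs/b5213941.0 /etc/ssl/certs/51391683.0 /etc/ssl/certs/3ec20f2e.0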
	I0924 00:49:26.762904   51216 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 00:49:26.768209   51216 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0924 00:49:26.768284   51216 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-619300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-619300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.119 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 00:49:26.768389   51216 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 00:49:26.768481   51216 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 00:49:26.811568   51216 cri.go:89] found id: ""
	I0924 00:49:26.811640   51216 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 00:49:26.822830   51216 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 00:49:26.834559   51216 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 00:49:26.845758   51216 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 00:49:26.845783   51216 kubeadm.go:157] found existing configuration files:
	
	I0924 00:49:26.845837   51216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 00:49:26.856198   51216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 00:49:26.856300   51216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 00:49:26.866712   51216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 00:49:26.877307   51216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 00:49:26.877379   51216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 00:49:26.887965   51216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 00:49:26.897278   51216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 00:49:26.897346   51216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 00:49:26.908159   51216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 00:49:26.918075   51216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 00:49:26.918135   51216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 00:49:26.928498   51216 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 00:49:27.229333   51216 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 00:51:25.250687   51216 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0924 00:51:25.250766   51216 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0924 00:51:25.252231   51216 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0924 00:51:25.252288   51216 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 00:51:25.252407   51216 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 00:51:25.252559   51216 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 00:51:25.252696   51216 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0924 00:51:25.252765   51216 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 00:51:25.255006   51216 out.go:235]   - Generating certificates and keys ...
	I0924 00:51:25.255070   51216 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 00:51:25.255121   51216 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 00:51:25.255178   51216 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0924 00:51:25.255228   51216 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0924 00:51:25.255277   51216 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0924 00:51:25.255349   51216 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0924 00:51:25.255416   51216 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0924 00:51:25.255583   51216 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-619300 localhost] and IPs [192.168.39.119 127.0.0.1 ::1]
	I0924 00:51:25.255671   51216 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0924 00:51:25.255858   51216 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-619300 localhost] and IPs [192.168.39.119 127.0.0.1 ::1]
	I0924 00:51:25.255930   51216 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0924 00:51:25.256025   51216 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0924 00:51:25.256068   51216 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0924 00:51:25.256115   51216 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 00:51:25.256158   51216 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 00:51:25.256202   51216 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 00:51:25.256295   51216 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 00:51:25.256407   51216 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 00:51:25.256560   51216 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 00:51:25.256697   51216 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 00:51:25.256754   51216 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 00:51:25.256816   51216 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 00:51:25.258528   51216 out.go:235]   - Booting up control plane ...
	I0924 00:51:25.258639   51216 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 00:51:25.258740   51216 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 00:51:25.258801   51216 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 00:51:25.258906   51216 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 00:51:25.259039   51216 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0924 00:51:25.259095   51216 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0924 00:51:25.259173   51216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 00:51:25.259379   51216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 00:51:25.259444   51216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 00:51:25.259600   51216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 00:51:25.259678   51216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 00:51:25.259897   51216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 00:51:25.260002   51216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 00:51:25.260225   51216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 00:51:25.260350   51216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 00:51:25.260527   51216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 00:51:25.260536   51216 kubeadm.go:310] 
	I0924 00:51:25.260570   51216 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0924 00:51:25.260611   51216 kubeadm.go:310] 		timed out waiting for the condition
	I0924 00:51:25.260621   51216 kubeadm.go:310] 
	I0924 00:51:25.260649   51216 kubeadm.go:310] 	This error is likely caused by:
	I0924 00:51:25.260678   51216 kubeadm.go:310] 		- The kubelet is not running
	I0924 00:51:25.260768   51216 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0924 00:51:25.260775   51216 kubeadm.go:310] 
	I0924 00:51:25.260866   51216 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0924 00:51:25.260895   51216 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0924 00:51:25.260942   51216 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0924 00:51:25.260956   51216 kubeadm.go:310] 
	I0924 00:51:25.261098   51216 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0924 00:51:25.261224   51216 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0924 00:51:25.261239   51216 kubeadm.go:310] 
	I0924 00:51:25.261378   51216 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0924 00:51:25.261474   51216 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0924 00:51:25.261570   51216 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0924 00:51:25.261680   51216 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0924 00:51:25.261694   51216 kubeadm.go:310] 
	W0924 00:51:25.261866   51216 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-619300 localhost] and IPs [192.168.39.119 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-619300 localhost] and IPs [192.168.39.119 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-619300 localhost] and IPs [192.168.39.119 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-619300 localhost] and IPs [192.168.39.119 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0924 00:51:25.261902   51216 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0924 00:51:26.468641   51216 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.206712008s)
	I0924 00:51:26.468731   51216 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 00:51:26.482968   51216 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 00:51:26.492666   51216 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 00:51:26.492688   51216 kubeadm.go:157] found existing configuration files:
	
	I0924 00:51:26.492730   51216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 00:51:26.502173   51216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 00:51:26.502228   51216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 00:51:26.511641   51216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 00:51:26.520723   51216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 00:51:26.520778   51216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 00:51:26.530811   51216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 00:51:26.539753   51216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 00:51:26.539817   51216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 00:51:26.549505   51216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 00:51:26.559539   51216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 00:51:26.559597   51216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 00:51:26.568937   51216 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 00:51:26.639396   51216 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0924 00:51:26.639516   51216 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 00:51:26.782358   51216 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 00:51:26.782510   51216 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 00:51:26.782647   51216 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0924 00:51:26.969708   51216 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 00:51:26.971560   51216 out.go:235]   - Generating certificates and keys ...
	I0924 00:51:26.971650   51216 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 00:51:26.971724   51216 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 00:51:26.971797   51216 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 00:51:26.971912   51216 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 00:51:26.972042   51216 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 00:51:26.972149   51216 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 00:51:26.972244   51216 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 00:51:26.972359   51216 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 00:51:26.972467   51216 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 00:51:26.972585   51216 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 00:51:26.972645   51216 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 00:51:26.972741   51216 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 00:51:27.163518   51216 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 00:51:27.358496   51216 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 00:51:27.486603   51216 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 00:51:27.785357   51216 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 00:51:27.800959   51216 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 00:51:27.805164   51216 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 00:51:27.805221   51216 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 00:51:27.939499   51216 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 00:51:27.941496   51216 out.go:235]   - Booting up control plane ...
	I0924 00:51:27.941596   51216 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 00:51:27.945447   51216 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 00:51:27.945566   51216 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 00:51:27.946422   51216 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 00:51:27.948730   51216 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0924 00:52:07.950394   51216 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0924 00:52:07.950534   51216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 00:52:07.950835   51216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 00:52:12.952815   51216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 00:52:12.953078   51216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 00:52:22.953752   51216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 00:52:22.953953   51216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 00:52:42.955226   51216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 00:52:42.955455   51216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 00:53:22.955080   51216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 00:53:22.955358   51216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 00:53:22.955389   51216 kubeadm.go:310] 
	I0924 00:53:22.955444   51216 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0924 00:53:22.955509   51216 kubeadm.go:310] 		timed out waiting for the condition
	I0924 00:53:22.955526   51216 kubeadm.go:310] 
	I0924 00:53:22.955567   51216 kubeadm.go:310] 	This error is likely caused by:
	I0924 00:53:22.955626   51216 kubeadm.go:310] 		- The kubelet is not running
	I0924 00:53:22.955760   51216 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0924 00:53:22.955773   51216 kubeadm.go:310] 
	I0924 00:53:22.955929   51216 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0924 00:53:22.955977   51216 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0924 00:53:22.956025   51216 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0924 00:53:22.956035   51216 kubeadm.go:310] 
	I0924 00:53:22.956189   51216 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0924 00:53:22.956345   51216 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0924 00:53:22.956357   51216 kubeadm.go:310] 
	I0924 00:53:22.956517   51216 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0924 00:53:22.956647   51216 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0924 00:53:22.956718   51216 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0924 00:53:22.956783   51216 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0924 00:53:22.956791   51216 kubeadm.go:310] 
	I0924 00:53:22.958742   51216 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 00:53:22.958883   51216 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0924 00:53:22.958985   51216 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0924 00:53:22.959024   51216 kubeadm.go:394] duration metric: took 3m56.190744008s to StartCluster
	I0924 00:53:22.959068   51216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 00:53:22.959124   51216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 00:53:23.013280   51216 cri.go:89] found id: ""
	I0924 00:53:23.013308   51216 logs.go:276] 0 containers: []
	W0924 00:53:23.013319   51216 logs.go:278] No container was found matching "kube-apiserver"
	I0924 00:53:23.013327   51216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 00:53:23.013381   51216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 00:53:23.051361   51216 cri.go:89] found id: ""
	I0924 00:53:23.051397   51216 logs.go:276] 0 containers: []
	W0924 00:53:23.051410   51216 logs.go:278] No container was found matching "etcd"
	I0924 00:53:23.051418   51216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 00:53:23.051486   51216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 00:53:23.089855   51216 cri.go:89] found id: ""
	I0924 00:53:23.089886   51216 logs.go:276] 0 containers: []
	W0924 00:53:23.089898   51216 logs.go:278] No container was found matching "coredns"
	I0924 00:53:23.089906   51216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 00:53:23.089971   51216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 00:53:23.131042   51216 cri.go:89] found id: ""
	I0924 00:53:23.131070   51216 logs.go:276] 0 containers: []
	W0924 00:53:23.131080   51216 logs.go:278] No container was found matching "kube-scheduler"
	I0924 00:53:23.131089   51216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 00:53:23.131152   51216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 00:53:23.170675   51216 cri.go:89] found id: ""
	I0924 00:53:23.170707   51216 logs.go:276] 0 containers: []
	W0924 00:53:23.170718   51216 logs.go:278] No container was found matching "kube-proxy"
	I0924 00:53:23.170725   51216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 00:53:23.170784   51216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 00:53:23.212842   51216 cri.go:89] found id: ""
	I0924 00:53:23.212863   51216 logs.go:276] 0 containers: []
	W0924 00:53:23.212874   51216 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 00:53:23.212882   51216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 00:53:23.212935   51216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 00:53:23.251488   51216 cri.go:89] found id: ""
	I0924 00:53:23.251520   51216 logs.go:276] 0 containers: []
	W0924 00:53:23.251531   51216 logs.go:278] No container was found matching "kindnet"
	I0924 00:53:23.251542   51216 logs.go:123] Gathering logs for kubelet ...
	I0924 00:53:23.251556   51216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 00:53:23.313369   51216 logs.go:123] Gathering logs for dmesg ...
	I0924 00:53:23.313406   51216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 00:53:23.327567   51216 logs.go:123] Gathering logs for describe nodes ...
	I0924 00:53:23.327597   51216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 00:53:23.466848   51216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 00:53:23.466883   51216 logs.go:123] Gathering logs for CRI-O ...
	I0924 00:53:23.466898   51216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 00:53:23.575794   51216 logs.go:123] Gathering logs for container status ...
	I0924 00:53:23.575831   51216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0924 00:53:23.616299   51216 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0924 00:53:23.616393   51216 out.go:270] * 
	* 
	W0924 00:53:23.616486   51216 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0924 00:53:23.616512   51216 out.go:270] * 
	* 
	W0924 00:53:23.617449   51216 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 00:53:23.620673   51216 out.go:201] 
	W0924 00:53:23.621934   51216 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0924 00:53:23.621977   51216 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0924 00:53:23.622010   51216 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0924 00:53:23.623238   51216 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-619300 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
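The failure above ends with a suggestion to check 'journalctl -xeu kubelet' and to pass --extra-config=kubelet.cgroup-driver=systemd. A minimal retry sketch, reusing the same profile, driver, and memory flags as the failing start; the extra-config value is taken from the suggestion printed in the log and is not a verified fix:

	out/minikube-linux-amd64 start -p kubernetes-upgrade-619300 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd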
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-619300
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-619300: (1.632807832s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-619300 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-619300 status --format={{.Host}}: exit status 7 (66.267761ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-619300 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-619300 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (42.722623491s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-619300 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-619300 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-619300 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (84.659328ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-619300] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19696
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19696-7623/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7623/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-619300
	    minikube start -p kubernetes-upgrade-619300 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6193002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-619300 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
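The K8S_DOWNGRADE_UNSUPPORTED message above lists three ways forward. A minimal sketch of option 1 (recreate the cluster at v1.20.0), preceded by the version check the test itself runs; the profile name is the one used in this run:

	kubectl --context kubernetes-upgrade-619300 version --output=json
	minikube delete -p kubernetes-upgrade-619300
	minikube start -p kubernetes-upgrade-619300 --kubernetes-version=v1.20.0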
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-619300 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-619300 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (56.652380848s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-09-24 00:55:04.89245346 +0000 UTC m=+4646.323535909
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-619300 -n kubernetes-upgrade-619300
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-619300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-619300 logs -n 25: (2.128493879s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-447054 sudo                 | cilium-447054             | jenkins | v1.34.0 | 24 Sep 24 00:52 UTC |                     |
	|         | systemctl status crio --all           |                           |         |         |                     |                     |
	|         | --full --no-pager                     |                           |         |         |                     |                     |
	| ssh     | -p cilium-447054 sudo                 | cilium-447054             | jenkins | v1.34.0 | 24 Sep 24 00:52 UTC |                     |
	|         | systemctl cat crio --no-pager         |                           |         |         |                     |                     |
	| ssh     | -p cilium-447054 sudo find            | cilium-447054             | jenkins | v1.34.0 | 24 Sep 24 00:52 UTC |                     |
	|         | /etc/crio -type f -exec sh -c         |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                  |                           |         |         |                     |                     |
	| ssh     | -p cilium-447054 sudo crio            | cilium-447054             | jenkins | v1.34.0 | 24 Sep 24 00:52 UTC |                     |
	|         | config                                |                           |         |         |                     |                     |
	| delete  | -p cilium-447054                      | cilium-447054             | jenkins | v1.34.0 | 24 Sep 24 00:52 UTC | 24 Sep 24 00:52 UTC |
	| delete  | -p force-systemd-env-762606           | force-systemd-env-762606  | jenkins | v1.34.0 | 24 Sep 24 00:52 UTC | 24 Sep 24 00:52 UTC |
	| start   | -p cert-expiration-811247             | cert-expiration-811247    | jenkins | v1.34.0 | 24 Sep 24 00:52 UTC | 24 Sep 24 00:53 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-198857                | NoKubernetes-198857       | jenkins | v1.34.0 | 24 Sep 24 00:52 UTC | 24 Sep 24 00:52 UTC |
	| start   | -p force-systemd-flag-912275          | force-systemd-flag-912275 | jenkins | v1.34.0 | 24 Sep 24 00:52 UTC | 24 Sep 24 00:53 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p stopped-upgrade-075175             | minikube                  | jenkins | v1.26.0 | 24 Sep 24 00:52 UTC | 24 Sep 24 00:54 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-619300          | kubernetes-upgrade-619300 | jenkins | v1.34.0 | 24 Sep 24 00:53 UTC | 24 Sep 24 00:53 UTC |
	| ssh     | force-systemd-flag-912275 ssh cat     | force-systemd-flag-912275 | jenkins | v1.34.0 | 24 Sep 24 00:53 UTC | 24 Sep 24 00:53 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-912275          | force-systemd-flag-912275 | jenkins | v1.34.0 | 24 Sep 24 00:53 UTC | 24 Sep 24 00:53 UTC |
	| start   | -p kubernetes-upgrade-619300          | kubernetes-upgrade-619300 | jenkins | v1.34.0 | 24 Sep 24 00:53 UTC | 24 Sep 24 00:54 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p cert-options-393867                | cert-options-393867       | jenkins | v1.34.0 | 24 Sep 24 00:53 UTC | 24 Sep 24 00:54 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-075175 stop           | minikube                  | jenkins | v1.26.0 | 24 Sep 24 00:54 UTC | 24 Sep 24 00:54 UTC |
	| start   | -p stopped-upgrade-075175             | stopped-upgrade-075175    | jenkins | v1.34.0 | 24 Sep 24 00:54 UTC | 24 Sep 24 00:54 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-619300          | kubernetes-upgrade-619300 | jenkins | v1.34.0 | 24 Sep 24 00:54 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-619300          | kubernetes-upgrade-619300 | jenkins | v1.34.0 | 24 Sep 24 00:54 UTC | 24 Sep 24 00:55 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-393867 ssh               | cert-options-393867       | jenkins | v1.34.0 | 24 Sep 24 00:54 UTC | 24 Sep 24 00:54 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-393867 -- sudo        | cert-options-393867       | jenkins | v1.34.0 | 24 Sep 24 00:54 UTC | 24 Sep 24 00:54 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-393867                | cert-options-393867       | jenkins | v1.34.0 | 24 Sep 24 00:54 UTC | 24 Sep 24 00:54 UTC |
	| start   | -p old-k8s-version-171598             | old-k8s-version-171598    | jenkins | v1.34.0 | 24 Sep 24 00:54 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --kvm-network=default                 |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system         |                           |         |         |                     |                     |
	|         | --disable-driver-mounts               |                           |         |         |                     |                     |
	|         | --keep-context=false                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-075175             | stopped-upgrade-075175    | jenkins | v1.34.0 | 24 Sep 24 00:54 UTC | 24 Sep 24 00:55 UTC |
	| start   | -p no-preload-674057                  | no-preload-674057         | jenkins | v1.34.0 | 24 Sep 24 00:55 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2         |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 00:55:00
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 00:55:00.309763   58626 out.go:345] Setting OutFile to fd 1 ...
	I0924 00:55:00.309907   58626 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:55:00.309920   58626 out.go:358] Setting ErrFile to fd 2...
	I0924 00:55:00.309929   58626 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:55:00.310196   58626 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7623/.minikube/bin
	I0924 00:55:00.310940   58626 out.go:352] Setting JSON to false
	I0924 00:55:00.312159   58626 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5844,"bootTime":1727133456,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0924 00:55:00.312298   58626 start.go:139] virtualization: kvm guest
	I0924 00:55:00.315169   58626 out.go:177] * [no-preload-674057] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0924 00:55:00.316402   58626 notify.go:220] Checking for updates...
	I0924 00:55:00.316427   58626 out.go:177]   - MINIKUBE_LOCATION=19696
	I0924 00:55:00.317730   58626 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 00:55:00.319458   58626 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 00:55:00.321044   58626 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7623/.minikube
	I0924 00:55:00.322526   58626 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0924 00:55:00.323812   58626 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 00:55:00.325859   58626 config.go:182] Loaded profile config "cert-expiration-811247": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 00:55:00.326015   58626 config.go:182] Loaded profile config "kubernetes-upgrade-619300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 00:55:00.326185   58626 config.go:182] Loaded profile config "old-k8s-version-171598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0924 00:55:00.326298   58626 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 00:55:00.363326   58626 out.go:177] * Using the kvm2 driver based on user configuration
	I0924 00:55:00.364875   58626 start.go:297] selected driver: kvm2
	I0924 00:55:00.364898   58626 start.go:901] validating driver "kvm2" against <nil>
	I0924 00:55:00.364923   58626 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 00:55:00.366025   58626 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 00:55:00.366116   58626 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19696-7623/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0924 00:55:00.382040   58626 install.go:137] /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0924 00:55:00.382099   58626 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0924 00:55:00.382433   58626 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 00:55:00.382479   58626 cni.go:84] Creating CNI manager for ""
	I0924 00:55:00.382551   58626 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 00:55:00.382562   58626 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0924 00:55:00.382627   58626 start.go:340] cluster config:
	{Name:no-preload-674057 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-674057 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSoc
k: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 00:55:00.382779   58626 iso.go:125] acquiring lock: {Name:mk8a983e49e920eac32e0f79c34143c3d478115e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 00:55:00.384961   58626 out.go:177] * Starting "no-preload-674057" primary control-plane node in "no-preload-674057" cluster
	I0924 00:54:57.997755   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:54:57.998281   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 00:54:57.998304   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 00:54:57.998237   58277 retry.go:31] will retry after 3.751754598s: waiting for machine to come up
	I0924 00:54:58.436241   57735 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 00:54:58.936617   57735 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 00:54:58.996894   57735 api_server.go:72] duration metric: took 1.061018241s to wait for apiserver process to appear ...
	I0924 00:54:58.996939   57735 api_server.go:88] waiting for apiserver healthz status ...
	I0924 00:54:58.996963   57735 api_server.go:253] Checking apiserver healthz at https://192.168.39.119:8443/healthz ...
	I0924 00:55:01.511162   57735 api_server.go:279] https://192.168.39.119:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 00:55:01.511190   57735 api_server.go:103] status: https://192.168.39.119:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 00:55:01.511204   57735 api_server.go:253] Checking apiserver healthz at https://192.168.39.119:8443/healthz ...
	I0924 00:55:01.547944   57735 api_server.go:279] https://192.168.39.119:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 00:55:01.547978   57735 api_server.go:103] status: https://192.168.39.119:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 00:55:01.997423   57735 api_server.go:253] Checking apiserver healthz at https://192.168.39.119:8443/healthz ...
	I0924 00:55:02.006400   57735 api_server.go:279] https://192.168.39.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 00:55:02.006430   57735 api_server.go:103] status: https://192.168.39.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 00:55:02.497984   57735 api_server.go:253] Checking apiserver healthz at https://192.168.39.119:8443/healthz ...
	I0924 00:55:02.503723   57735 api_server.go:279] https://192.168.39.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 00:55:02.503759   57735 api_server.go:103] status: https://192.168.39.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 00:55:02.997247   57735 api_server.go:253] Checking apiserver healthz at https://192.168.39.119:8443/healthz ...
	I0924 00:55:03.015442   57735 api_server.go:279] https://192.168.39.119:8443/healthz returned 200:
	ok
	I0924 00:55:03.031808   57735 api_server.go:141] control plane version: v1.31.1
	I0924 00:55:03.031839   57735 api_server.go:131] duration metric: took 4.034892696s to wait for apiserver health ...
	I0924 00:55:03.031849   57735 cni.go:84] Creating CNI manager for ""
	I0924 00:55:03.031858   57735 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 00:55:03.033707   57735 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 00:55:03.035376   57735 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 00:55:03.048684   57735 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0924 00:55:03.071398   57735 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 00:55:03.071490   57735 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0924 00:55:03.071524   57735 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0924 00:55:03.085061   57735 system_pods.go:59] 8 kube-system pods found
	I0924 00:55:03.085100   57735 system_pods.go:61] "coredns-7c65d6cfc9-phml9" [5ddd2b13-15ef-402b-93fa-ba865ef417fc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0924 00:55:03.085111   57735 system_pods.go:61] "coredns-7c65d6cfc9-z7sdk" [d4340845-205d-4d4f-af46-5bf33dd4363c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0924 00:55:03.085121   57735 system_pods.go:61] "etcd-kubernetes-upgrade-619300" [88097a50-69a4-4b42-93de-142476417046] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0924 00:55:03.085132   57735 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-619300" [ae3ab570-0600-4e0f-8518-14cec3bce514] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0924 00:55:03.085145   57735 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-619300" [a7de29cb-7452-46a7-94fb-110fa40172a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0924 00:55:03.085152   57735 system_pods.go:61] "kube-proxy-hjktr" [2ac02a34-1f6b-4743-8b33-645bd8cf8cb7] Running
	I0924 00:55:03.085169   57735 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-619300" [be652158-35a8-46e4-8c95-24064b8bdf20] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0924 00:55:03.085177   57735 system_pods.go:61] "storage-provisioner" [fa0b89ed-02d7-45c0-a9a3-277014540615] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0924 00:55:03.085187   57735 system_pods.go:74] duration metric: took 13.768966ms to wait for pod list to return data ...
	I0924 00:55:03.085196   57735 node_conditions.go:102] verifying NodePressure condition ...
	I0924 00:55:03.090821   57735 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 00:55:03.090847   57735 node_conditions.go:123] node cpu capacity is 2
	I0924 00:55:03.090856   57735 node_conditions.go:105] duration metric: took 5.656375ms to run NodePressure ...
	I0924 00:55:03.090872   57735 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 00:55:00.386252   58626 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 00:55:00.386418   58626 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/config.json ...
	I0924 00:55:00.386462   58626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/config.json: {Name:mk92392d5d3fdfb2e304626466461228cd7fae3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:55:00.386510   58626 cache.go:107] acquiring lock: {Name:mkb398bbb35b35a63c08a16ae97b207147267655 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 00:55:00.386502   58626 cache.go:107] acquiring lock: {Name:mk804d14c4f4815044ef6df304d8838f76674d1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 00:55:00.386529   58626 cache.go:107] acquiring lock: {Name:mkacc0ec5d2dacc8fb5853f2a49c236a0734b52c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 00:55:00.386633   58626 cache.go:115] /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0924 00:55:00.386650   58626 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 164.026µs
	I0924 00:55:00.386664   58626 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0924 00:55:00.386662   58626 start.go:360] acquireMachinesLock for no-preload-674057: {Name:mkdd0eb053efc5a47fd01c8410d7f603dccb8d0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 00:55:00.386661   58626 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0924 00:55:00.386669   58626 cache.go:107] acquiring lock: {Name:mk58a42ad8bd6f992b606af37c1332f1eec825ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 00:55:00.386703   58626 cache.go:107] acquiring lock: {Name:mk61cbf1b2c663e874f6ee37b6c4306ad4106f7c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 00:55:00.386710   58626 cache.go:107] acquiring lock: {Name:mk69ae0f7e0d4af266653d4a68395623eabfd44e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 00:55:00.386761   58626 cache.go:107] acquiring lock: {Name:mk65ea0ea570575ddb3bd649e5844d84c924364b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 00:55:00.386683   58626 cache.go:107] acquiring lock: {Name:mk1058c56bd1c65d8494b9131217471355b45f97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 00:55:00.386798   58626 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0924 00:55:00.386828   58626 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0924 00:55:00.386692   58626 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0924 00:55:00.386796   58626 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0924 00:55:00.386972   58626 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0924 00:55:00.387111   58626 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 00:55:00.388158   58626 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0924 00:55:00.388181   58626 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0924 00:55:00.388264   58626 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0924 00:55:00.388405   58626 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 00:55:00.388469   58626 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0924 00:55:00.388564   58626 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0924 00:55:00.388621   58626 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0924 00:55:00.635648   58626 cache.go:162] opening:  /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0924 00:55:00.646685   58626 cache.go:162] opening:  /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10
	I0924 00:55:00.667508   58626 cache.go:162] opening:  /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0924 00:55:00.671358   58626 cache.go:162] opening:  /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0924 00:55:00.673368   58626 cache.go:162] opening:  /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0924 00:55:00.694502   58626 cache.go:162] opening:  /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0924 00:55:00.708680   58626 cache.go:162] opening:  /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0924 00:55:00.777959   58626 cache.go:157] /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0924 00:55:00.777992   58626 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 391.26872ms
	I0924 00:55:00.778011   58626 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0924 00:55:01.193692   58626 cache.go:157] /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0924 00:55:01.193717   58626 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1" took 807.206933ms
	I0924 00:55:01.193727   58626 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0924 00:55:02.651033   58626 cache.go:157] /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0924 00:55:02.651064   58626 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1" took 2.264355686s
	I0924 00:55:02.651075   58626 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0924 00:55:02.959096   58626 cache.go:157] /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0924 00:55:02.959178   58626 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1" took 2.57249646s
	I0924 00:55:02.959206   58626 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0924 00:55:03.036410   58626 cache.go:157] /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0924 00:55:03.036438   58626 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1" took 2.649733957s
	I0924 00:55:03.036452   58626 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0924 00:55:03.053774   58626 cache.go:157] /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0924 00:55:03.053798   58626 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 2.667151751s
	I0924 00:55:03.053809   58626 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0924 00:55:03.259271   58626 cache.go:157] /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 exists
	I0924 00:55:03.259311   58626 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0" took 2.872803917s
	I0924 00:55:03.259329   58626 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0924 00:55:03.259353   58626 cache.go:87] Successfully saved all images to host disk.
	I0924 00:55:03.293243   58626 start.go:364] duration metric: took 2.906558299s to acquireMachinesLock for "no-preload-674057"
	I0924 00:55:03.293370   58626 start.go:93] Provisioning new machine with config: &{Name:no-preload-674057 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-674057 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 00:55:03.293543   58626 start.go:125] createHost starting for "" (driver="kvm2")
	I0924 00:55:03.420576   57735 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 00:55:03.433143   57735 ops.go:34] apiserver oom_adj: -16
	I0924 00:55:03.433162   57735 kubeadm.go:597] duration metric: took 18.271719052s to restartPrimaryControlPlane
	I0924 00:55:03.433172   57735 kubeadm.go:394] duration metric: took 18.59255562s to StartCluster
	I0924 00:55:03.433191   57735 settings.go:142] acquiring lock: {Name:mk2498060bfcc1414033a486e76a223de88ed325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:55:03.433265   57735 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 00:55:03.434495   57735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/kubeconfig: {Name:mk3b29105ccc2e600682c419fcfd45ce5d22b5fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:55:03.434800   57735 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.119 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 00:55:03.434901   57735 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0924 00:55:03.435004   57735 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-619300"
	I0924 00:55:03.435027   57735 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-619300"
	W0924 00:55:03.435038   57735 addons.go:243] addon storage-provisioner should already be in state true
	I0924 00:55:03.435041   57735 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-619300"
	I0924 00:55:03.435066   57735 config.go:182] Loaded profile config "kubernetes-upgrade-619300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 00:55:03.435075   57735 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-619300"
	I0924 00:55:03.435072   57735 host.go:66] Checking if "kubernetes-upgrade-619300" exists ...
	I0924 00:55:03.435520   57735 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 00:55:03.435571   57735 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:55:03.435625   57735 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 00:55:03.435668   57735 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:55:03.437020   57735 out.go:177] * Verifying Kubernetes components...
	I0924 00:55:03.438655   57735 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 00:55:03.453443   57735 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43181
	I0924 00:55:03.453720   57735 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35497
	I0924 00:55:03.454280   57735 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:55:03.454302   57735 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:55:03.454795   57735 main.go:141] libmachine: Using API Version  1
	I0924 00:55:03.454803   57735 main.go:141] libmachine: Using API Version  1
	I0924 00:55:03.454816   57735 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:55:03.454818   57735 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:55:03.455259   57735 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:55:03.455473   57735 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetState
	I0924 00:55:03.455668   57735 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:55:03.456178   57735 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 00:55:03.456220   57735 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:55:03.458218   57735 kapi.go:59] client config for kubernetes-upgrade-619300: &rest.Config{Host:"https://192.168.39.119:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kubernetes-upgrade-619300/client.crt", KeyFile:"/home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kubernetes-upgrade-619300/client.key", CAFile:"/home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0924 00:55:03.458442   57735 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-619300"
	W0924 00:55:03.458455   57735 addons.go:243] addon default-storageclass should already be in state true
	I0924 00:55:03.458476   57735 host.go:66] Checking if "kubernetes-upgrade-619300" exists ...
	I0924 00:55:03.458701   57735 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 00:55:03.458727   57735 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:55:03.474232   57735 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34405
	I0924 00:55:03.474909   57735 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:55:03.475553   57735 main.go:141] libmachine: Using API Version  1
	I0924 00:55:03.475571   57735 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:55:03.476170   57735 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:55:03.476802   57735 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 00:55:03.476837   57735 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:55:03.478526   57735 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40835
	I0924 00:55:03.479056   57735 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:55:03.479645   57735 main.go:141] libmachine: Using API Version  1
	I0924 00:55:03.479662   57735 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:55:03.480321   57735 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:55:03.480568   57735 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetState
	I0924 00:55:03.482532   57735 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .DriverName
	I0924 00:55:03.484450   57735 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 00:55:03.486309   57735 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 00:55:03.486322   57735 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 00:55:03.486337   57735 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetSSHHostname
	I0924 00:55:03.490438   57735 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has defined MAC address 52:54:00:b6:81:fa in network mk-kubernetes-upgrade-619300
	I0924 00:55:03.490934   57735 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:81:fa", ip: ""} in network mk-kubernetes-upgrade-619300: {Iface:virbr1 ExpiryTime:2024-09-24 01:49:07 +0000 UTC Type:0 Mac:52:54:00:b6:81:fa Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:kubernetes-upgrade-619300 Clientid:01:52:54:00:b6:81:fa}
	I0924 00:55:03.490955   57735 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has defined IP address 192.168.39.119 and MAC address 52:54:00:b6:81:fa in network mk-kubernetes-upgrade-619300
	I0924 00:55:03.491150   57735 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetSSHPort
	I0924 00:55:03.491313   57735 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetSSHKeyPath
	I0924 00:55:03.491428   57735 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetSSHUsername
	I0924 00:55:03.491617   57735 sshutil.go:53] new ssh client: &{IP:192.168.39.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/kubernetes-upgrade-619300/id_rsa Username:docker}
	I0924 00:55:03.500165   57735 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41641
	I0924 00:55:03.500751   57735 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:55:03.501373   57735 main.go:141] libmachine: Using API Version  1
	I0924 00:55:03.501389   57735 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:55:03.501815   57735 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:55:03.502048   57735 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetState
	I0924 00:55:03.504168   57735 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .DriverName
	I0924 00:55:03.504484   57735 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 00:55:03.504503   57735 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 00:55:03.504523   57735 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetSSHHostname
	I0924 00:55:03.508007   57735 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has defined MAC address 52:54:00:b6:81:fa in network mk-kubernetes-upgrade-619300
	I0924 00:55:03.508548   57735 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:81:fa", ip: ""} in network mk-kubernetes-upgrade-619300: {Iface:virbr1 ExpiryTime:2024-09-24 01:49:07 +0000 UTC Type:0 Mac:52:54:00:b6:81:fa Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:kubernetes-upgrade-619300 Clientid:01:52:54:00:b6:81:fa}
	I0924 00:55:03.508573   57735 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | domain kubernetes-upgrade-619300 has defined IP address 192.168.39.119 and MAC address 52:54:00:b6:81:fa in network mk-kubernetes-upgrade-619300
	I0924 00:55:03.508783   57735 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetSSHPort
	I0924 00:55:03.508934   57735 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetSSHKeyPath
	I0924 00:55:03.509047   57735 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .GetSSHUsername
	I0924 00:55:03.509167   57735 sshutil.go:53] new ssh client: &{IP:192.168.39.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/kubernetes-upgrade-619300/id_rsa Username:docker}
	I0924 00:55:03.667258   57735 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 00:55:03.689541   57735 api_server.go:52] waiting for apiserver process to appear ...
	I0924 00:55:03.689629   57735 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 00:55:03.706273   57735 api_server.go:72] duration metric: took 271.436361ms to wait for apiserver process to appear ...
	I0924 00:55:03.706305   57735 api_server.go:88] waiting for apiserver healthz status ...
	I0924 00:55:03.706330   57735 api_server.go:253] Checking apiserver healthz at https://192.168.39.119:8443/healthz ...
	I0924 00:55:03.713096   57735 api_server.go:279] https://192.168.39.119:8443/healthz returned 200:
	ok
	I0924 00:55:03.715402   57735 api_server.go:141] control plane version: v1.31.1
	I0924 00:55:03.715425   57735 api_server.go:131] duration metric: took 9.112121ms to wait for apiserver health ...
	I0924 00:55:03.715436   57735 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 00:55:03.725311   57735 system_pods.go:59] 8 kube-system pods found
	I0924 00:55:03.725349   57735 system_pods.go:61] "coredns-7c65d6cfc9-phml9" [5ddd2b13-15ef-402b-93fa-ba865ef417fc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0924 00:55:03.725360   57735 system_pods.go:61] "coredns-7c65d6cfc9-z7sdk" [d4340845-205d-4d4f-af46-5bf33dd4363c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0924 00:55:03.725372   57735 system_pods.go:61] "etcd-kubernetes-upgrade-619300" [88097a50-69a4-4b42-93de-142476417046] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0924 00:55:03.725381   57735 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-619300" [ae3ab570-0600-4e0f-8518-14cec3bce514] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0924 00:55:03.725391   57735 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-619300" [a7de29cb-7452-46a7-94fb-110fa40172a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0924 00:55:03.725397   57735 system_pods.go:61] "kube-proxy-hjktr" [2ac02a34-1f6b-4743-8b33-645bd8cf8cb7] Running
	I0924 00:55:03.725405   57735 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-619300" [be652158-35a8-46e4-8c95-24064b8bdf20] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0924 00:55:03.725416   57735 system_pods.go:61] "storage-provisioner" [fa0b89ed-02d7-45c0-a9a3-277014540615] Running
	I0924 00:55:03.725427   57735 system_pods.go:74] duration metric: took 9.983464ms to wait for pod list to return data ...
	I0924 00:55:03.725439   57735 kubeadm.go:582] duration metric: took 290.606649ms to wait for: map[apiserver:true system_pods:true]
	I0924 00:55:03.725454   57735 node_conditions.go:102] verifying NodePressure condition ...
	I0924 00:55:03.730492   57735 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 00:55:03.730541   57735 node_conditions.go:123] node cpu capacity is 2
	I0924 00:55:03.730554   57735 node_conditions.go:105] duration metric: took 5.094051ms to run NodePressure ...
	I0924 00:55:03.730567   57735 start.go:241] waiting for startup goroutines ...
	I0924 00:55:03.876082   57735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 00:55:03.882319   57735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 00:55:04.788974   57735 main.go:141] libmachine: Making call to close driver server
	I0924 00:55:04.789000   57735 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .Close
	I0924 00:55:04.789096   57735 main.go:141] libmachine: Making call to close driver server
	I0924 00:55:04.789137   57735 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .Close
	I0924 00:55:04.789312   57735 main.go:141] libmachine: Successfully made call to close driver server
	I0924 00:55:04.789328   57735 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 00:55:04.789337   57735 main.go:141] libmachine: Making call to close driver server
	I0924 00:55:04.789345   57735 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .Close
	I0924 00:55:04.789435   57735 main.go:141] libmachine: Successfully made call to close driver server
	I0924 00:55:04.789474   57735 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 00:55:04.789539   57735 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | Closing plugin on server side
	I0924 00:55:04.789609   57735 main.go:141] libmachine: Successfully made call to close driver server
	I0924 00:55:04.789621   57735 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 00:55:04.789621   57735 main.go:141] libmachine: Making call to close driver server
	I0924 00:55:04.789634   57735 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .Close
	I0924 00:55:04.791458   57735 main.go:141] libmachine: (kubernetes-upgrade-619300) DBG | Closing plugin on server side
	I0924 00:55:04.791472   57735 main.go:141] libmachine: Successfully made call to close driver server
	I0924 00:55:04.791487   57735 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 00:55:04.802408   57735 main.go:141] libmachine: Making call to close driver server
	I0924 00:55:04.802436   57735 main.go:141] libmachine: (kubernetes-upgrade-619300) Calling .Close
	I0924 00:55:04.802707   57735 main.go:141] libmachine: Successfully made call to close driver server
	I0924 00:55:04.802726   57735 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 00:55:04.805035   57735 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0924 00:55:04.806476   57735 addons.go:510] duration metric: took 1.371577001s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0924 00:55:04.806537   57735 start.go:246] waiting for cluster config update ...
	I0924 00:55:04.806561   57735 start.go:255] writing updated cluster config ...
	I0924 00:55:04.806870   57735 ssh_runner.go:195] Run: rm -f paused
	I0924 00:55:04.871939   57735 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0924 00:55:04.874169   57735 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-619300" cluster and "default" namespace by default
	I0924 00:55:01.752198   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:01.752881   58197 main.go:141] libmachine: (old-k8s-version-171598) Found IP for machine: 192.168.83.3
	I0924 00:55:01.752902   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has current primary IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:01.752908   58197 main.go:141] libmachine: (old-k8s-version-171598) Reserving static IP address...
	I0924 00:55:01.753279   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-171598", mac: "52:54:00:20:3c:a7", ip: "192.168.83.3"} in network mk-old-k8s-version-171598
	I0924 00:55:01.840663   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | Getting to WaitForSSH function...
	I0924 00:55:01.840687   58197 main.go:141] libmachine: (old-k8s-version-171598) Reserved static IP address: 192.168.83.3
	I0924 00:55:01.840699   58197 main.go:141] libmachine: (old-k8s-version-171598) Waiting for SSH to be available...
	I0924 00:55:01.844238   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:01.844709   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 01:54:56 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:minikube Clientid:01:52:54:00:20:3c:a7}
	I0924 00:55:01.844740   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:01.844883   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | Using SSH client type: external
	I0924 00:55:01.844907   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | Using SSH private key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa (-rw-------)
	I0924 00:55:01.844940   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 00:55:01.844965   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | About to run SSH command:
	I0924 00:55:01.844993   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | exit 0
	I0924 00:55:01.968726   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | SSH cmd err, output: <nil>: 
	I0924 00:55:01.968961   58197 main.go:141] libmachine: (old-k8s-version-171598) KVM machine creation complete!
	I0924 00:55:01.969312   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetConfigRaw
	I0924 00:55:01.969827   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 00:55:01.970085   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 00:55:01.970255   58197 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0924 00:55:01.970268   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetState
	I0924 00:55:01.971910   58197 main.go:141] libmachine: Detecting operating system of created instance...
	I0924 00:55:01.971926   58197 main.go:141] libmachine: Waiting for SSH to be available...
	I0924 00:55:01.971950   58197 main.go:141] libmachine: Getting to WaitForSSH function...
	I0924 00:55:01.971959   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 00:55:01.974276   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:01.974699   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 01:54:56 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 00:55:01.974732   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:01.974853   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 00:55:01.975061   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 00:55:01.975230   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 00:55:01.975518   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 00:55:01.975694   58197 main.go:141] libmachine: Using SSH client type: native
	I0924 00:55:01.975872   58197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.83.3 22 <nil> <nil>}
	I0924 00:55:01.975885   58197 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0924 00:55:02.079653   58197 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 00:55:02.079686   58197 main.go:141] libmachine: Detecting the provisioner...
	I0924 00:55:02.079694   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 00:55:02.083048   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:02.083491   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 01:54:56 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 00:55:02.083528   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:02.083737   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 00:55:02.083980   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 00:55:02.084159   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 00:55:02.084367   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 00:55:02.084570   58197 main.go:141] libmachine: Using SSH client type: native
	I0924 00:55:02.084797   58197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.83.3 22 <nil> <nil>}
	I0924 00:55:02.084813   58197 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0924 00:55:02.189230   58197 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0924 00:55:02.189322   58197 main.go:141] libmachine: found compatible host: buildroot
	I0924 00:55:02.189330   58197 main.go:141] libmachine: Provisioning with buildroot...
	I0924 00:55:02.189338   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetMachineName
	I0924 00:55:02.189638   58197 buildroot.go:166] provisioning hostname "old-k8s-version-171598"
	I0924 00:55:02.189666   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetMachineName
	I0924 00:55:02.189901   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 00:55:02.192761   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:02.193213   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 01:54:56 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 00:55:02.193244   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:02.193388   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 00:55:02.193645   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 00:55:02.193838   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 00:55:02.194063   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 00:55:02.194270   58197 main.go:141] libmachine: Using SSH client type: native
	I0924 00:55:02.194480   58197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.83.3 22 <nil> <nil>}
	I0924 00:55:02.194498   58197 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-171598 && echo "old-k8s-version-171598" | sudo tee /etc/hostname
	I0924 00:55:02.314997   58197 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-171598
	
	I0924 00:55:02.315032   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 00:55:02.318138   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:02.318555   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 01:54:56 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 00:55:02.318599   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:02.318822   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 00:55:02.319087   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 00:55:02.319296   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 00:55:02.319455   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 00:55:02.319665   58197 main.go:141] libmachine: Using SSH client type: native
	I0924 00:55:02.319874   58197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.83.3 22 <nil> <nil>}
	I0924 00:55:02.319891   58197 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-171598' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-171598/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-171598' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 00:55:02.429171   58197 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 00:55:02.429200   58197 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19696-7623/.minikube CaCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19696-7623/.minikube}
	I0924 00:55:02.429247   58197 buildroot.go:174] setting up certificates
	I0924 00:55:02.429262   58197 provision.go:84] configureAuth start
	I0924 00:55:02.429277   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetMachineName
	I0924 00:55:02.429579   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetIP
	I0924 00:55:02.432395   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:02.432823   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 01:54:56 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 00:55:02.432850   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:02.433101   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 00:55:02.435504   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:02.435869   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 01:54:56 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 00:55:02.435890   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:02.436017   58197 provision.go:143] copyHostCerts
	I0924 00:55:02.436086   58197 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem, removing ...
	I0924 00:55:02.436105   58197 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0924 00:55:02.436164   58197 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem (1082 bytes)
	I0924 00:55:02.436305   58197 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem, removing ...
	I0924 00:55:02.436318   58197 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0924 00:55:02.436367   58197 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem (1123 bytes)
	I0924 00:55:02.436475   58197 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem, removing ...
	I0924 00:55:02.436485   58197 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0924 00:55:02.436511   58197 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem (1679 bytes)
	I0924 00:55:02.436610   58197 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-171598 san=[127.0.0.1 192.168.83.3 localhost minikube old-k8s-version-171598]
	I0924 00:55:02.528742   58197 provision.go:177] copyRemoteCerts
	I0924 00:55:02.528805   58197 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 00:55:02.528907   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 00:55:02.532302   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:02.532792   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 01:54:56 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 00:55:02.532823   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:02.533033   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 00:55:02.533224   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 00:55:02.533448   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 00:55:02.533632   58197 sshutil.go:53] new ssh client: &{IP:192.168.83.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa Username:docker}
	I0924 00:55:02.620500   58197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0924 00:55:02.655188   58197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0924 00:55:02.685047   58197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0924 00:55:02.715935   58197 provision.go:87] duration metric: took 286.660949ms to configureAuth
	I0924 00:55:02.715966   58197 buildroot.go:189] setting minikube options for container-runtime
	I0924 00:55:02.716128   58197 config.go:182] Loaded profile config "old-k8s-version-171598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0924 00:55:02.716205   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 00:55:02.719908   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:02.720247   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 01:54:56 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 00:55:02.720274   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:02.720543   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 00:55:02.720723   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 00:55:02.720836   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 00:55:02.721001   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 00:55:02.721165   58197 main.go:141] libmachine: Using SSH client type: native
	I0924 00:55:02.721376   58197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.83.3 22 <nil> <nil>}
	I0924 00:55:02.721403   58197 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 00:55:03.018082   58197 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 00:55:03.018107   58197 main.go:141] libmachine: Checking connection to Docker...
	I0924 00:55:03.018115   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetURL
	I0924 00:55:03.020121   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | Using libvirt version 6000000
	I0924 00:55:03.023905   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:03.024403   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 01:54:56 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 00:55:03.024437   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:03.024783   58197 main.go:141] libmachine: Docker is up and running!
	I0924 00:55:03.024800   58197 main.go:141] libmachine: Reticulating splines...
	I0924 00:55:03.024807   58197 client.go:171] duration metric: took 22.439787669s to LocalClient.Create
	I0924 00:55:03.024826   58197 start.go:167] duration metric: took 22.439855123s to libmachine.API.Create "old-k8s-version-171598"
	I0924 00:55:03.024834   58197 start.go:293] postStartSetup for "old-k8s-version-171598" (driver="kvm2")
	I0924 00:55:03.024843   58197 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 00:55:03.024857   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 00:55:03.025082   58197 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 00:55:03.025114   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 00:55:03.027876   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:03.028199   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 01:54:56 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 00:55:03.028229   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:03.028443   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 00:55:03.028598   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 00:55:03.028727   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 00:55:03.028874   58197 sshutil.go:53] new ssh client: &{IP:192.168.83.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa Username:docker}
	I0924 00:55:03.120687   58197 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 00:55:03.124786   58197 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 00:55:03.124817   58197 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/addons for local assets ...
	I0924 00:55:03.124903   58197 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/files for local assets ...
	I0924 00:55:03.125015   58197 filesync.go:149] local asset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> 147932.pem in /etc/ssl/certs
	I0924 00:55:03.125258   58197 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 00:55:03.135695   58197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /etc/ssl/certs/147932.pem (1708 bytes)
	I0924 00:55:03.165018   58197 start.go:296] duration metric: took 140.171374ms for postStartSetup
	I0924 00:55:03.165070   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetConfigRaw
	I0924 00:55:03.165759   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetIP
	I0924 00:55:03.169024   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:03.169673   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 01:54:56 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 00:55:03.169702   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:03.170098   58197 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/config.json ...
	I0924 00:55:03.170347   58197 start.go:128] duration metric: took 22.904687506s to createHost
	I0924 00:55:03.170378   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 00:55:03.173735   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:03.174051   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 01:54:56 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 00:55:03.174080   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:03.174389   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 00:55:03.174585   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 00:55:03.174884   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 00:55:03.175094   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 00:55:03.175291   58197 main.go:141] libmachine: Using SSH client type: native
	I0924 00:55:03.175483   58197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.83.3 22 <nil> <nil>}
	I0924 00:55:03.175496   58197 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 00:55:03.293095   58197 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727139303.249080601
	
	I0924 00:55:03.293129   58197 fix.go:216] guest clock: 1727139303.249080601
	I0924 00:55:03.293136   58197 fix.go:229] Guest: 2024-09-24 00:55:03.249080601 +0000 UTC Remote: 2024-09-24 00:55:03.170363849 +0000 UTC m=+27.555541393 (delta=78.716752ms)
	I0924 00:55:03.293158   58197 fix.go:200] guest clock delta is within tolerance: 78.716752ms
	I0924 00:55:03.293164   58197 start.go:83] releasing machines lock for "old-k8s-version-171598", held for 23.027668037s
	I0924 00:55:03.293198   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 00:55:03.293446   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetIP
	I0924 00:55:03.296716   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:03.297182   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 01:54:56 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 00:55:03.297214   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:03.297416   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 00:55:03.297967   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 00:55:03.298174   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 00:55:03.298287   58197 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 00:55:03.298323   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 00:55:03.298447   58197 ssh_runner.go:195] Run: cat /version.json
	I0924 00:55:03.298475   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 00:55:03.301444   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:03.301611   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:03.301843   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 01:54:56 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 00:55:03.301867   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:03.302017   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 00:55:03.302108   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 01:54:56 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 00:55:03.302128   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:03.302382   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 00:55:03.302396   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 00:55:03.302586   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 00:55:03.302585   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 00:55:03.302796   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 00:55:03.302794   58197 sshutil.go:53] new ssh client: &{IP:192.168.83.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa Username:docker}
	I0924 00:55:03.302952   58197 sshutil.go:53] new ssh client: &{IP:192.168.83.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa Username:docker}
	I0924 00:55:03.386649   58197 ssh_runner.go:195] Run: systemctl --version
	I0924 00:55:03.424758   58197 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 00:55:03.614118   58197 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 00:55:03.621530   58197 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 00:55:03.621626   58197 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 00:55:03.644213   58197 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
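	(The two lines above show the bridge/podman CNI configs under /etc/cni/net.d being renamed with a ".mk_disabled" suffix so they cannot conflict with the CNI that gets configured later. The following is only an illustrative Go sketch of that rename pass, not minikube's actual code, and it assumes enough privileges to rename files in /etc/cni/net.d.)

```go
// Illustrative sketch: disable bridge/podman CNI configs by renaming them
// with a ".mk_disabled" suffix, mirroring the `find ... -exec mv` above.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	entries, err := os.ReadDir("/etc/cni/net.d")
	if err != nil {
		fmt.Println("read /etc/cni/net.d:", err)
		return
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join("/etc/cni/net.d", name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				fmt.Println("disable", src, ":", err)
			} else {
				fmt.Println("disabled", src)
			}
		}
	}
}
```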
	I0924 00:55:03.644235   58197 start.go:495] detecting cgroup driver to use...
	I0924 00:55:03.644313   58197 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 00:55:03.668161   58197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 00:55:03.694828   58197 docker.go:217] disabling cri-docker service (if available) ...
	I0924 00:55:03.694904   58197 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 00:55:03.713736   58197 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 00:55:03.730803   58197 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 00:55:03.899461   58197 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 00:55:04.063545   58197 docker.go:233] disabling docker service ...
	I0924 00:55:04.063639   58197 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 00:55:04.083164   58197 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 00:55:04.097906   58197 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 00:55:04.269668   58197 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 00:55:04.402316   58197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
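	(The block above stops, disables, and masks the cri-docker and docker units so CRI-O is the only runtime left active. Below is a rough Go sketch of that sequence using os/exec; it is a hypothetical helper — minikube actually issues these commands over SSH via ssh_runner — and it assumes passwordless sudo on the node.)

```go
// Hypothetical helper, not minikube's code: stop, disable, and mask the
// cri-docker and docker systemd units, mirroring the systemctl calls above.
// Failures are reported but not fatal, since stopping an inactive unit is
// expected to fail harmlessly.
package main

import (
	"fmt"
	"os/exec"
)

func systemctl(args ...string) {
	cmd := exec.Command("sudo", append([]string{"systemctl"}, args...)...)
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("systemctl %v: %v (%s)\n", args, err, out)
	}
}

func main() {
	// cri-docker first, then docker itself.
	systemctl("stop", "-f", "cri-docker.socket")
	systemctl("stop", "-f", "cri-docker.service")
	systemctl("disable", "cri-docker.socket")
	systemctl("mask", "cri-docker.service")

	systemctl("stop", "-f", "docker.socket")
	systemctl("stop", "-f", "docker.service")
	systemctl("disable", "docker.socket")
	systemctl("mask", "docker.service")
}
```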
	I0924 00:55:04.418705   58197 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 00:55:04.440370   58197 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0924 00:55:04.440437   58197 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:55:04.451665   58197 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 00:55:04.451754   58197 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:55:04.462760   58197 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:55:04.475972   58197 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:55:04.490841   58197 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
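	(The lines above point crictl at the CRI-O socket via /etc/crictl.yaml and patch /etc/crio/crio.conf.d/02-crio.conf so the pause image and cgroup driver match what this profile expects. The Go sketch below performs equivalent edits; it is an illustrative reimplementation — the log does the same with tee/sed over SSH — and it assumes root-equivalent access to both files.)

```go
// Illustrative sketch: write crictl.yaml and rewrite the pause_image,
// cgroup_manager, and conmon_cgroup settings in 02-crio.conf, with the
// same end state as the tee/sed commands in the log above.
package main

import (
	"fmt"
	"os"
	"regexp"
)

const crictlYAML = "runtime-endpoint: unix:///var/run/crio/crio.sock\n"

func main() {
	if err := os.WriteFile("/etc/crictl.yaml", []byte(crictlYAML), 0644); err != nil {
		fmt.Println("write crictl.yaml:", err)
		return
	}

	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		fmt.Println("read crio conf:", err)
		return
	}
	s := string(data)
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.2"`)
	// Drop any existing conmon_cgroup line, then re-add it right after
	// cgroup_manager (the log does this with two separate sed calls).
	s = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*$\n?`).ReplaceAllString(s, "")
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(s, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	if err := os.WriteFile(conf, []byte(s), 0644); err != nil {
		fmt.Println("write crio conf:", err)
	}
}
```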
	I0924 00:55:04.504910   58197 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 00:55:04.517644   58197 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 00:55:04.517717   58197 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 00:55:04.531792   58197 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
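	(The sysctl check above fails with "cannot stat" because the br_netfilter module is not loaded yet, so the log falls back to modprobe br_netfilter and then enables IPv4 forwarding. A minimal Go sketch of that check-then-fallback, assuming root privileges, might look like this:)

```go
// Sketch of the check-and-fallback above: if the bridge netfilter sysctl is
// missing, load br_netfilter, then enable IPv4 forwarding. Assumes root.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const bridgeSysctl = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(bridgeSysctl); err != nil {
		// Same situation as the "cannot stat" error in the log: the
		// module providing this sysctl is not loaded yet.
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe br_netfilter: %v (%s)\n", err, out)
			return
		}
	}
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
		fmt.Println("enable ip_forward:", err)
	}
}
```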
	I0924 00:55:04.543238   58197 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 00:55:04.662287   58197 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 00:55:04.779583   58197 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 00:55:04.779661   58197 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 00:55:04.785323   58197 start.go:563] Will wait 60s for crictl version
	I0924 00:55:04.785424   58197 ssh_runner.go:195] Run: which crictl
	I0924 00:55:04.790385   58197 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 00:55:04.841899   58197 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 00:55:04.841986   58197 ssh_runner.go:195] Run: crio --version
	I0924 00:55:04.877618   58197 ssh_runner.go:195] Run: crio --version
	I0924 00:55:04.919522   58197 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
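	(After restarting CRI-O, the log above waits up to 60s for /var/run/crio/crio.sock to appear and then for `crictl version` to answer before continuing to Kubernetes setup. The Go sketch below shows one way such a polling wait could be written; it is a hypothetical helper under those assumptions, not minikube's actual retry code.)

```go
// Hypothetical polling helper mirroring the two 60s waits in the log:
// first for /var/run/crio/crio.sock to exist, then for `crictl version`
// to succeed against it.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func waitFor(timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitFor(60*time.Second, func() error {
		_, err := os.Stat("/var/run/crio/crio.sock")
		return err
	}); err != nil {
		fmt.Println("crio socket:", err)
		return
	}
	if err := waitFor(60*time.Second, func() error {
		return exec.Command("sudo", "crictl", "version").Run()
	}); err != nil {
		fmt.Println("crictl version:", err)
	}
}
```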
	
	
	==> CRI-O <==
	Sep 24 00:55:05 kubernetes-upgrade-619300 crio[2239]: time="2024-09-24 00:55:05.869962194Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b36d961c-dad7-43a0-87d7-b057914158f6 name=/runtime.v1.RuntimeService/Version
	Sep 24 00:55:05 kubernetes-upgrade-619300 crio[2239]: time="2024-09-24 00:55:05.871402819Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=70b5dec4-bbc6-41c1-9eb1-ebec3188cb5a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:55:05 kubernetes-upgrade-619300 crio[2239]: time="2024-09-24 00:55:05.871920366Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727139305871888592,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=70b5dec4-bbc6-41c1-9eb1-ebec3188cb5a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:55:05 kubernetes-upgrade-619300 crio[2239]: time="2024-09-24 00:55:05.872797987Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=474e83f0-4746-404a-9ddc-9063609eaab4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:55:05 kubernetes-upgrade-619300 crio[2239]: time="2024-09-24 00:55:05.872876481Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=474e83f0-4746-404a-9ddc-9063609eaab4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:55:05 kubernetes-upgrade-619300 crio[2239]: time="2024-09-24 00:55:05.873612585Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5b829c0719b64df2efe2782cba82410661859b74a5c337f63b15bba4fa4822b3,PodSandboxId:cfae5b4a604c12c5aa8295c50d125547fe4774aee69e5f081c2a5e4df3b1caf4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727139302109407161,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-phml9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ddd2b13-15ef-402b-93fa-ba865ef417fc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b6a55a84fed6f52e113868969946a96e3332dbeccbf6f3475fa98736eba8c50,PodSandboxId:b2daa63675f0c99fb494ff798de3accf93adbd01d6b7f81ea820c5b65181eece,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727139302130968566,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-z7sdk,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: d4340845-205d-4d4f-af46-5bf33dd4363c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72b502d6812744ae5aef42bac1a46b405faea491a7d829daa4bbe91337594db4,PodSandboxId:2f8ba5cf22cc6d198f15c66b03a40f90ea88c10bc1438b8326518fddd76a05b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1727139302110715822,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa0b89ed-02d7-45c0-a9a3-277014540615,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b4f5794df9d93d91e946891a93f047e4916f593ad77bea82a26aafba4269d45,PodSandboxId:2d5954e982a3b263041fbcb15a29236385384e92a0b27855f60bc1524e8c6d00,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNI
NG,CreatedAt:1727139298343109280,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-619300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0818d95a42d5a80eabc83bb6906fb5cf,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd75287063fe5b71fe7b741c04ba163b42c31f388dd7a84d0b8d9100f5f903b1,PodSandboxId:294d4d3c2955c9dab0dc71c9889b589c5e8b22be9cbbe20bab51ee3dba5d7105,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,Cr
eatedAt:1727139298289141784,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-619300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af7e6ba043c8b52b790af0c5a5d294c2,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1994ff5cede57a4ea1ddb9805b4236c0b187d2b4b2da16111d7dce0a9f0224b5,PodSandboxId:b6d8a489ee78c77f28a21c1fe217330bdf555dde01a0ad9e0185f1ac29c8b729,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727139
298291418091,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-619300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 237f90e374c8186a158e9f906c27b043,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e76b35a60f5c06a99cf566e59bcbd8fd9a1e1648c80ecc3cebde44d6c21f24bd,PodSandboxId:d094668af20df2980d183f2b6a56c6e8da3e27666c72eb626982c3a905e1aa55,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727139298260528624,Lab
els:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-619300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b32f80b0280027e9a92d95963fe12ad9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b27a87a2dc17f3a539309884c853f99418ac4b629b4f73bf4a3f1ccc6b7d663,PodSandboxId:3d1b6fd9cef01aa7c229af2bc365284787de2bb0945b65f1e7f4bcfaadec8f9c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:17271392839742
28118,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hjktr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ac02a34-1f6b-4743-8b33-645bd8cf8cb7,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e685db9fc36a9f9ce0bda5750174f71bcc210ccb9e17a110d317971eff0d0e58,PodSandboxId:cfae5b4a604c12c5aa8295c50d125547fe4774aee69e5f081c2a5e4df3b1caf4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727139284955856215,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-phml9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ddd2b13-15ef-402b-93fa-ba865ef417fc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12472f694bd14c4a4fb6634ed6710090c7158df84a451925ea3c7fd759ed07f3,PodSandboxId:b2daa63675f0c99fb494ff798de3accf93adbd01d6b7f81ea820c5b65181eece,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727139285081423014,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-z7sdk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4340845-205d-4d4f-af46-5bf33dd4363c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41cceebe8b6802b592a7b58833d203d2964a377b6aef45541961bbcfe6bfa100,PodSandboxId:2f8ba5cf22cc6d198f15c66b03a40f90ea88c10bc1438b
8326518fddd76a05b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727139283871702506,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa0b89ed-02d7-45c0-a9a3-277014540615,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20777011a72e70ba919a4881774976eb696478952272908ab22a3d12f0113ca5,PodSandboxId:294d4d3c2955c9dab0dc71c9889b589c5e8b22be9cbbe20bab51ee3dba5
d7105,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727139283769655468,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-619300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af7e6ba043c8b52b790af0c5a5d294c2,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81cb509fe10f9d365c1b67f2d5fb6bc55cf044a470a5612e7b6dba1a06fdc30b,PodSandboxId:b6d8a489ee78c77f28a21c1fe217330bdf555dde01a0ad9e0185f1ac29c8b729,
Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727139283709254286,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-619300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 237f90e374c8186a158e9f906c27b043,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d22beba76fcc66acfcb4ce98155813232d1428822cf839666135473fbe7e3be5,PodSandboxId:d094668af20df2980d183f2b6a56c6e8da3e27666c72eb626982c3a905e1aa55,Metadata:&ContainerMetadata{Name:kub
e-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727139283569785644,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-619300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b32f80b0280027e9a92d95963fe12ad9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eab4602e62c3dd9824a9eb13d4f91b047b48f1853a1937f1281ada38d22ca917,PodSandboxId:2d5954e982a3b263041fbcb15a29236385384e92a0b27855f60bc1524e8c6d00,Metadata:&Conta
inerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727139283501062996,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-619300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0818d95a42d5a80eabc83bb6906fb5cf,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffacf363706158dc16ea31bac4c0f2e2e027c543bf6f6b311768c1231c96be14,PodSandboxId:b83d34e58cb8adbf4c6490af328fc71cf5cf624615da2aa1955b5e97c4634772,Metadata:&ContainerMe
tadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727139251601944185,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hjktr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ac02a34-1f6b-4743-8b33-645bd8cf8cb7,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=474e83f0-4746-404a-9ddc-9063609eaab4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:55:05 kubernetes-upgrade-619300 crio[2239]: time="2024-09-24 00:55:05.932206782Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=16178623-b26d-4793-b371-577e3d900a4c name=/runtime.v1.RuntimeService/Version
	Sep 24 00:55:05 kubernetes-upgrade-619300 crio[2239]: time="2024-09-24 00:55:05.933838065Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=16178623-b26d-4793-b371-577e3d900a4c name=/runtime.v1.RuntimeService/Version
	Sep 24 00:55:05 kubernetes-upgrade-619300 crio[2239]: time="2024-09-24 00:55:05.935733213Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ee4d7cab-d322-4bf0-8d12-aa0640a53bf0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:55:05 kubernetes-upgrade-619300 crio[2239]: time="2024-09-24 00:55:05.936295765Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727139305936262345,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ee4d7cab-d322-4bf0-8d12-aa0640a53bf0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:55:05 kubernetes-upgrade-619300 crio[2239]: time="2024-09-24 00:55:05.936996722Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c3e46ca8-b3b6-44ee-9b2e-929e9bca4a2d name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:55:05 kubernetes-upgrade-619300 crio[2239]: time="2024-09-24 00:55:05.937091960Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c3e46ca8-b3b6-44ee-9b2e-929e9bca4a2d name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:55:05 kubernetes-upgrade-619300 crio[2239]: time="2024-09-24 00:55:05.937936167Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5b829c0719b64df2efe2782cba82410661859b74a5c337f63b15bba4fa4822b3,PodSandboxId:cfae5b4a604c12c5aa8295c50d125547fe4774aee69e5f081c2a5e4df3b1caf4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727139302109407161,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-phml9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ddd2b13-15ef-402b-93fa-ba865ef417fc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b6a55a84fed6f52e113868969946a96e3332dbeccbf6f3475fa98736eba8c50,PodSandboxId:b2daa63675f0c99fb494ff798de3accf93adbd01d6b7f81ea820c5b65181eece,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727139302130968566,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-z7sdk,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: d4340845-205d-4d4f-af46-5bf33dd4363c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72b502d6812744ae5aef42bac1a46b405faea491a7d829daa4bbe91337594db4,PodSandboxId:2f8ba5cf22cc6d198f15c66b03a40f90ea88c10bc1438b8326518fddd76a05b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1727139302110715822,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa0b89ed-02d7-45c0-a9a3-277014540615,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b4f5794df9d93d91e946891a93f047e4916f593ad77bea82a26aafba4269d45,PodSandboxId:2d5954e982a3b263041fbcb15a29236385384e92a0b27855f60bc1524e8c6d00,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNI
NG,CreatedAt:1727139298343109280,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-619300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0818d95a42d5a80eabc83bb6906fb5cf,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd75287063fe5b71fe7b741c04ba163b42c31f388dd7a84d0b8d9100f5f903b1,PodSandboxId:294d4d3c2955c9dab0dc71c9889b589c5e8b22be9cbbe20bab51ee3dba5d7105,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,Cr
eatedAt:1727139298289141784,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-619300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af7e6ba043c8b52b790af0c5a5d294c2,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1994ff5cede57a4ea1ddb9805b4236c0b187d2b4b2da16111d7dce0a9f0224b5,PodSandboxId:b6d8a489ee78c77f28a21c1fe217330bdf555dde01a0ad9e0185f1ac29c8b729,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727139
298291418091,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-619300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 237f90e374c8186a158e9f906c27b043,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e76b35a60f5c06a99cf566e59bcbd8fd9a1e1648c80ecc3cebde44d6c21f24bd,PodSandboxId:d094668af20df2980d183f2b6a56c6e8da3e27666c72eb626982c3a905e1aa55,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727139298260528624,Lab
els:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-619300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b32f80b0280027e9a92d95963fe12ad9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b27a87a2dc17f3a539309884c853f99418ac4b629b4f73bf4a3f1ccc6b7d663,PodSandboxId:3d1b6fd9cef01aa7c229af2bc365284787de2bb0945b65f1e7f4bcfaadec8f9c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:17271392839742
28118,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hjktr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ac02a34-1f6b-4743-8b33-645bd8cf8cb7,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e685db9fc36a9f9ce0bda5750174f71bcc210ccb9e17a110d317971eff0d0e58,PodSandboxId:cfae5b4a604c12c5aa8295c50d125547fe4774aee69e5f081c2a5e4df3b1caf4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727139284955856215,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-phml9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ddd2b13-15ef-402b-93fa-ba865ef417fc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12472f694bd14c4a4fb6634ed6710090c7158df84a451925ea3c7fd759ed07f3,PodSandboxId:b2daa63675f0c99fb494ff798de3accf93adbd01d6b7f81ea820c5b65181eece,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727139285081423014,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-z7sdk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4340845-205d-4d4f-af46-5bf33dd4363c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41cceebe8b6802b592a7b58833d203d2964a377b6aef45541961bbcfe6bfa100,PodSandboxId:2f8ba5cf22cc6d198f15c66b03a40f90ea88c10bc1438b
8326518fddd76a05b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727139283871702506,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa0b89ed-02d7-45c0-a9a3-277014540615,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20777011a72e70ba919a4881774976eb696478952272908ab22a3d12f0113ca5,PodSandboxId:294d4d3c2955c9dab0dc71c9889b589c5e8b22be9cbbe20bab51ee3dba5
d7105,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727139283769655468,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-619300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af7e6ba043c8b52b790af0c5a5d294c2,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81cb509fe10f9d365c1b67f2d5fb6bc55cf044a470a5612e7b6dba1a06fdc30b,PodSandboxId:b6d8a489ee78c77f28a21c1fe217330bdf555dde01a0ad9e0185f1ac29c8b729,
Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727139283709254286,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-619300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 237f90e374c8186a158e9f906c27b043,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d22beba76fcc66acfcb4ce98155813232d1428822cf839666135473fbe7e3be5,PodSandboxId:d094668af20df2980d183f2b6a56c6e8da3e27666c72eb626982c3a905e1aa55,Metadata:&ContainerMetadata{Name:kub
e-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727139283569785644,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-619300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b32f80b0280027e9a92d95963fe12ad9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eab4602e62c3dd9824a9eb13d4f91b047b48f1853a1937f1281ada38d22ca917,PodSandboxId:2d5954e982a3b263041fbcb15a29236385384e92a0b27855f60bc1524e8c6d00,Metadata:&Conta
inerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727139283501062996,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-619300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0818d95a42d5a80eabc83bb6906fb5cf,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffacf363706158dc16ea31bac4c0f2e2e027c543bf6f6b311768c1231c96be14,PodSandboxId:b83d34e58cb8adbf4c6490af328fc71cf5cf624615da2aa1955b5e97c4634772,Metadata:&ContainerMe
tadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727139251601944185,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hjktr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ac02a34-1f6b-4743-8b33-645bd8cf8cb7,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c3e46ca8-b3b6-44ee-9b2e-929e9bca4a2d name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:55:06 kubernetes-upgrade-619300 crio[2239]: time="2024-09-24 00:55:06.006892214Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fc8e68de-f82f-48d3-9fa4-f67f739473d3 name=/runtime.v1.RuntimeService/Version
	Sep 24 00:55:06 kubernetes-upgrade-619300 crio[2239]: time="2024-09-24 00:55:06.007014998Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fc8e68de-f82f-48d3-9fa4-f67f739473d3 name=/runtime.v1.RuntimeService/Version
	Sep 24 00:55:06 kubernetes-upgrade-619300 crio[2239]: time="2024-09-24 00:55:06.009396983Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1942186f-2095-43c9-ae62-ab9f4d261a8c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:55:06 kubernetes-upgrade-619300 crio[2239]: time="2024-09-24 00:55:06.010390810Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727139306010311872,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1942186f-2095-43c9-ae62-ab9f4d261a8c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 00:55:06 kubernetes-upgrade-619300 crio[2239]: time="2024-09-24 00:55:06.011596660Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a6f3b31b-3353-4dc3-a8cd-d0e4fa669785 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:55:06 kubernetes-upgrade-619300 crio[2239]: time="2024-09-24 00:55:06.011868920Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a6f3b31b-3353-4dc3-a8cd-d0e4fa669785 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:55:06 kubernetes-upgrade-619300 crio[2239]: time="2024-09-24 00:55:06.012958368Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5b829c0719b64df2efe2782cba82410661859b74a5c337f63b15bba4fa4822b3,PodSandboxId:cfae5b4a604c12c5aa8295c50d125547fe4774aee69e5f081c2a5e4df3b1caf4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727139302109407161,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-phml9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ddd2b13-15ef-402b-93fa-ba865ef417fc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b6a55a84fed6f52e113868969946a96e3332dbeccbf6f3475fa98736eba8c50,PodSandboxId:b2daa63675f0c99fb494ff798de3accf93adbd01d6b7f81ea820c5b65181eece,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727139302130968566,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-z7sdk,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: d4340845-205d-4d4f-af46-5bf33dd4363c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72b502d6812744ae5aef42bac1a46b405faea491a7d829daa4bbe91337594db4,PodSandboxId:2f8ba5cf22cc6d198f15c66b03a40f90ea88c10bc1438b8326518fddd76a05b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1727139302110715822,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa0b89ed-02d7-45c0-a9a3-277014540615,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b4f5794df9d93d91e946891a93f047e4916f593ad77bea82a26aafba4269d45,PodSandboxId:2d5954e982a3b263041fbcb15a29236385384e92a0b27855f60bc1524e8c6d00,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNI
NG,CreatedAt:1727139298343109280,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-619300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0818d95a42d5a80eabc83bb6906fb5cf,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd75287063fe5b71fe7b741c04ba163b42c31f388dd7a84d0b8d9100f5f903b1,PodSandboxId:294d4d3c2955c9dab0dc71c9889b589c5e8b22be9cbbe20bab51ee3dba5d7105,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,Cr
eatedAt:1727139298289141784,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-619300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af7e6ba043c8b52b790af0c5a5d294c2,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1994ff5cede57a4ea1ddb9805b4236c0b187d2b4b2da16111d7dce0a9f0224b5,PodSandboxId:b6d8a489ee78c77f28a21c1fe217330bdf555dde01a0ad9e0185f1ac29c8b729,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727139
298291418091,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-619300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 237f90e374c8186a158e9f906c27b043,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e76b35a60f5c06a99cf566e59bcbd8fd9a1e1648c80ecc3cebde44d6c21f24bd,PodSandboxId:d094668af20df2980d183f2b6a56c6e8da3e27666c72eb626982c3a905e1aa55,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727139298260528624,Lab
els:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-619300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b32f80b0280027e9a92d95963fe12ad9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b27a87a2dc17f3a539309884c853f99418ac4b629b4f73bf4a3f1ccc6b7d663,PodSandboxId:3d1b6fd9cef01aa7c229af2bc365284787de2bb0945b65f1e7f4bcfaadec8f9c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:17271392839742
28118,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hjktr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ac02a34-1f6b-4743-8b33-645bd8cf8cb7,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e685db9fc36a9f9ce0bda5750174f71bcc210ccb9e17a110d317971eff0d0e58,PodSandboxId:cfae5b4a604c12c5aa8295c50d125547fe4774aee69e5f081c2a5e4df3b1caf4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727139284955856215,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-phml9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ddd2b13-15ef-402b-93fa-ba865ef417fc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12472f694bd14c4a4fb6634ed6710090c7158df84a451925ea3c7fd759ed07f3,PodSandboxId:b2daa63675f0c99fb494ff798de3accf93adbd01d6b7f81ea820c5b65181eece,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727139285081423014,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-z7sdk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4340845-205d-4d4f-af46-5bf33dd4363c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41cceebe8b6802b592a7b58833d203d2964a377b6aef45541961bbcfe6bfa100,PodSandboxId:2f8ba5cf22cc6d198f15c66b03a40f90ea88c10bc1438b
8326518fddd76a05b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727139283871702506,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa0b89ed-02d7-45c0-a9a3-277014540615,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20777011a72e70ba919a4881774976eb696478952272908ab22a3d12f0113ca5,PodSandboxId:294d4d3c2955c9dab0dc71c9889b589c5e8b22be9cbbe20bab51ee3dba5
d7105,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727139283769655468,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-619300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af7e6ba043c8b52b790af0c5a5d294c2,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81cb509fe10f9d365c1b67f2d5fb6bc55cf044a470a5612e7b6dba1a06fdc30b,PodSandboxId:b6d8a489ee78c77f28a21c1fe217330bdf555dde01a0ad9e0185f1ac29c8b729,
Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727139283709254286,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-619300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 237f90e374c8186a158e9f906c27b043,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d22beba76fcc66acfcb4ce98155813232d1428822cf839666135473fbe7e3be5,PodSandboxId:d094668af20df2980d183f2b6a56c6e8da3e27666c72eb626982c3a905e1aa55,Metadata:&ContainerMetadata{Name:kub
e-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727139283569785644,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-619300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b32f80b0280027e9a92d95963fe12ad9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eab4602e62c3dd9824a9eb13d4f91b047b48f1853a1937f1281ada38d22ca917,PodSandboxId:2d5954e982a3b263041fbcb15a29236385384e92a0b27855f60bc1524e8c6d00,Metadata:&Conta
inerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727139283501062996,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-619300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0818d95a42d5a80eabc83bb6906fb5cf,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffacf363706158dc16ea31bac4c0f2e2e027c543bf6f6b311768c1231c96be14,PodSandboxId:b83d34e58cb8adbf4c6490af328fc71cf5cf624615da2aa1955b5e97c4634772,Metadata:&ContainerMe
tadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727139251601944185,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hjktr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ac02a34-1f6b-4743-8b33-645bd8cf8cb7,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a6f3b31b-3353-4dc3-a8cd-d0e4fa669785 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:55:06 kubernetes-upgrade-619300 crio[2239]: time="2024-09-24 00:55:06.038414302Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=60781337-9080-4de0-9aaf-faa37b7cd63c name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 24 00:55:06 kubernetes-upgrade-619300 crio[2239]: time="2024-09-24 00:55:06.038772770Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:b2daa63675f0c99fb494ff798de3accf93adbd01d6b7f81ea820c5b65181eece,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-z7sdk,Uid:d4340845-205d-4d4f-af46-5bf33dd4363c,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727139283626360897,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-z7sdk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4340845-205d-4d4f-af46-5bf33dd4363c,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-24T00:54:10.908765954Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cfae5b4a604c12c5aa8295c50d125547fe4774aee69e5f081c2a5e4df3b1caf4,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-phml9,Uid:5ddd2b13-15ef-402b-93fa-ba865ef417fc,Namespac
e:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727139283482243200,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-phml9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ddd2b13-15ef-402b-93fa-ba865ef417fc,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-24T00:54:10.884960671Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3d1b6fd9cef01aa7c229af2bc365284787de2bb0945b65f1e7f4bcfaadec8f9c,Metadata:&PodSandboxMetadata{Name:kube-proxy-hjktr,Uid:2ac02a34-1f6b-4743-8b33-645bd8cf8cb7,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727139283255003321,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-hjktr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ac02a34-1f6b-4743-8b33-645bd8cf8cb7,k8s-app: kube-proxy,pod-template-generation: 1,},Annot
ations:map[string]string{kubernetes.io/config.seen: 2024-09-24T00:54:11.089140223Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2f8ba5cf22cc6d198f15c66b03a40f90ea88c10bc1438b8326518fddd76a05b6,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:fa0b89ed-02d7-45c0-a9a3-277014540615,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727139283189139518,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa0b89ed-02d7-45c0-a9a3-277014540615,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"conta
iners\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-24T00:54:10.527524724Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d094668af20df2980d183f2b6a56c6e8da3e27666c72eb626982c3a905e1aa55,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-kubernetes-upgrade-619300,Uid:b32f80b0280027e9a92d95963fe12ad9,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727139283183904046,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-619300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: b32f80b0280027e9a92d95963fe12ad9,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b32f80b0280027e9a92d95963fe12ad9,kubernetes.io/config.seen: 2024-09-24T00:53:59.801830499Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2d5954e982a3b263041fbcb15a29236385384e92a0b27855f60bc1524e8c6d00,Metadata:&PodSandboxMetadata{Name:kube-scheduler-kubernetes-upgrade-619300,Uid:0818d95a42d5a80eabc83bb6906fb5cf,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727139283176713014,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-619300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0818d95a42d5a80eabc83bb6906fb5cf,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0818d95a42d5a80eabc83bb6906fb5cf,kubernetes.io/config.seen: 2024-09-24T00:53:59.801831732Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2
94d4d3c2955c9dab0dc71c9889b589c5e8b22be9cbbe20bab51ee3dba5d7105,Metadata:&PodSandboxMetadata{Name:kube-apiserver-kubernetes-upgrade-619300,Uid:af7e6ba043c8b52b790af0c5a5d294c2,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727139283171071578,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-619300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af7e6ba043c8b52b790af0c5a5d294c2,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.119:8443,kubernetes.io/config.hash: af7e6ba043c8b52b790af0c5a5d294c2,kubernetes.io/config.seen: 2024-09-24T00:53:59.801825025Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b6d8a489ee78c77f28a21c1fe217330bdf555dde01a0ad9e0185f1ac29c8b729,Metadata:&PodSandboxMetadata{Name:etcd-kubernetes-upgrade-619300,Uid:237f90e374c8186a158e9f906c27b043,Namespace:kube-system,Atte
mpt:1,},State:SANDBOX_READY,CreatedAt:1727139283163895695,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-kubernetes-upgrade-619300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 237f90e374c8186a158e9f906c27b043,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.119:2379,kubernetes.io/config.hash: 237f90e374c8186a158e9f906c27b043,kubernetes.io/config.seen: 2024-09-24T00:53:59.888118430Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b83d34e58cb8adbf4c6490af328fc71cf5cf624615da2aa1955b5e97c4634772,Metadata:&PodSandboxMetadata{Name:kube-proxy-hjktr,Uid:2ac02a34-1f6b-4743-8b33-645bd8cf8cb7,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727139251395106964,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-hjktr,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 2ac02a34-1f6b-4743-8b33-645bd8cf8cb7,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-24T00:54:11.089140223Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=60781337-9080-4de0-9aaf-faa37b7cd63c name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 24 00:55:06 kubernetes-upgrade-619300 crio[2239]: time="2024-09-24 00:55:06.039658813Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c889922e-e9d3-4a79-9e55-d2113d075f7e name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:55:06 kubernetes-upgrade-619300 crio[2239]: time="2024-09-24 00:55:06.039736033Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c889922e-e9d3-4a79-9e55-d2113d075f7e name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 00:55:06 kubernetes-upgrade-619300 crio[2239]: time="2024-09-24 00:55:06.040095366Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5b829c0719b64df2efe2782cba82410661859b74a5c337f63b15bba4fa4822b3,PodSandboxId:cfae5b4a604c12c5aa8295c50d125547fe4774aee69e5f081c2a5e4df3b1caf4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727139302109407161,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-phml9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ddd2b13-15ef-402b-93fa-ba865ef417fc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b6a55a84fed6f52e113868969946a96e3332dbeccbf6f3475fa98736eba8c50,PodSandboxId:b2daa63675f0c99fb494ff798de3accf93adbd01d6b7f81ea820c5b65181eece,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727139302130968566,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-z7sdk,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: d4340845-205d-4d4f-af46-5bf33dd4363c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72b502d6812744ae5aef42bac1a46b405faea491a7d829daa4bbe91337594db4,PodSandboxId:2f8ba5cf22cc6d198f15c66b03a40f90ea88c10bc1438b8326518fddd76a05b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1727139302110715822,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa0b89ed-02d7-45c0-a9a3-277014540615,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b4f5794df9d93d91e946891a93f047e4916f593ad77bea82a26aafba4269d45,PodSandboxId:2d5954e982a3b263041fbcb15a29236385384e92a0b27855f60bc1524e8c6d00,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNI
NG,CreatedAt:1727139298343109280,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-619300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0818d95a42d5a80eabc83bb6906fb5cf,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd75287063fe5b71fe7b741c04ba163b42c31f388dd7a84d0b8d9100f5f903b1,PodSandboxId:294d4d3c2955c9dab0dc71c9889b589c5e8b22be9cbbe20bab51ee3dba5d7105,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,Cr
eatedAt:1727139298289141784,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-619300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af7e6ba043c8b52b790af0c5a5d294c2,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1994ff5cede57a4ea1ddb9805b4236c0b187d2b4b2da16111d7dce0a9f0224b5,PodSandboxId:b6d8a489ee78c77f28a21c1fe217330bdf555dde01a0ad9e0185f1ac29c8b729,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727139
298291418091,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-619300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 237f90e374c8186a158e9f906c27b043,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e76b35a60f5c06a99cf566e59bcbd8fd9a1e1648c80ecc3cebde44d6c21f24bd,PodSandboxId:d094668af20df2980d183f2b6a56c6e8da3e27666c72eb626982c3a905e1aa55,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727139298260528624,Lab
els:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-619300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b32f80b0280027e9a92d95963fe12ad9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b27a87a2dc17f3a539309884c853f99418ac4b629b4f73bf4a3f1ccc6b7d663,PodSandboxId:3d1b6fd9cef01aa7c229af2bc365284787de2bb0945b65f1e7f4bcfaadec8f9c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:17271392839742
28118,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hjktr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ac02a34-1f6b-4743-8b33-645bd8cf8cb7,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e685db9fc36a9f9ce0bda5750174f71bcc210ccb9e17a110d317971eff0d0e58,PodSandboxId:cfae5b4a604c12c5aa8295c50d125547fe4774aee69e5f081c2a5e4df3b1caf4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727139284955856215,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-phml9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ddd2b13-15ef-402b-93fa-ba865ef417fc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12472f694bd14c4a4fb6634ed6710090c7158df84a451925ea3c7fd759ed07f3,PodSandboxId:b2daa63675f0c99fb494ff798de3accf93adbd01d6b7f81ea820c5b65181eece,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727139285081423014,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-z7sdk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4340845-205d-4d4f-af46-5bf33dd4363c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41cceebe8b6802b592a7b58833d203d2964a377b6aef45541961bbcfe6bfa100,PodSandboxId:2f8ba5cf22cc6d198f15c66b03a40f90ea88c10bc1438b
8326518fddd76a05b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727139283871702506,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa0b89ed-02d7-45c0-a9a3-277014540615,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20777011a72e70ba919a4881774976eb696478952272908ab22a3d12f0113ca5,PodSandboxId:294d4d3c2955c9dab0dc71c9889b589c5e8b22be9cbbe20bab51ee3dba5
d7105,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727139283769655468,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-619300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af7e6ba043c8b52b790af0c5a5d294c2,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81cb509fe10f9d365c1b67f2d5fb6bc55cf044a470a5612e7b6dba1a06fdc30b,PodSandboxId:b6d8a489ee78c77f28a21c1fe217330bdf555dde01a0ad9e0185f1ac29c8b729,
Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727139283709254286,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-619300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 237f90e374c8186a158e9f906c27b043,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d22beba76fcc66acfcb4ce98155813232d1428822cf839666135473fbe7e3be5,PodSandboxId:d094668af20df2980d183f2b6a56c6e8da3e27666c72eb626982c3a905e1aa55,Metadata:&ContainerMetadata{Name:kub
e-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727139283569785644,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-619300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b32f80b0280027e9a92d95963fe12ad9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eab4602e62c3dd9824a9eb13d4f91b047b48f1853a1937f1281ada38d22ca917,PodSandboxId:2d5954e982a3b263041fbcb15a29236385384e92a0b27855f60bc1524e8c6d00,Metadata:&Conta
inerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727139283501062996,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-619300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0818d95a42d5a80eabc83bb6906fb5cf,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffacf363706158dc16ea31bac4c0f2e2e027c543bf6f6b311768c1231c96be14,PodSandboxId:b83d34e58cb8adbf4c6490af328fc71cf5cf624615da2aa1955b5e97c4634772,Metadata:&ContainerMe
tadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727139251601944185,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hjktr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ac02a34-1f6b-4743-8b33-645bd8cf8cb7,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c889922e-e9d3-4a79-9e55-d2113d075f7e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3b6a55a84fed6       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   3 seconds ago       Running             coredns                   2                   b2daa63675f0c       coredns-7c65d6cfc9-z7sdk
	72b502d681274       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   4 seconds ago       Running             storage-provisioner       2                   2f8ba5cf22cc6       storage-provisioner
	5b829c0719b64       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   4 seconds ago       Running             coredns                   2                   cfae5b4a604c1       coredns-7c65d6cfc9-phml9
	5b4f5794df9d9       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   7 seconds ago       Running             kube-scheduler            2                   2d5954e982a3b       kube-scheduler-kubernetes-upgrade-619300
	1994ff5cede57       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   7 seconds ago       Running             etcd                      2                   b6d8a489ee78c       etcd-kubernetes-upgrade-619300
	bd75287063fe5       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   7 seconds ago       Running             kube-apiserver            2                   294d4d3c2955c       kube-apiserver-kubernetes-upgrade-619300
	e76b35a60f5c0       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   7 seconds ago       Running             kube-controller-manager   2                   d094668af20df       kube-controller-manager-kubernetes-upgrade-619300
	12472f694bd14       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   21 seconds ago      Exited              coredns                   1                   b2daa63675f0c       coredns-7c65d6cfc9-z7sdk
	e685db9fc36a9       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   21 seconds ago      Exited              coredns                   1                   cfae5b4a604c1       coredns-7c65d6cfc9-phml9
	0b27a87a2dc17       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   22 seconds ago      Running             kube-proxy                1                   3d1b6fd9cef01       kube-proxy-hjktr
	41cceebe8b680       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   22 seconds ago      Exited              storage-provisioner       1                   2f8ba5cf22cc6       storage-provisioner
	20777011a72e7       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   22 seconds ago      Exited              kube-apiserver            1                   294d4d3c2955c       kube-apiserver-kubernetes-upgrade-619300
	81cb509fe10f9       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   22 seconds ago      Exited              etcd                      1                   b6d8a489ee78c       etcd-kubernetes-upgrade-619300
	d22beba76fcc6       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   22 seconds ago      Exited              kube-controller-manager   1                   d094668af20df       kube-controller-manager-kubernetes-upgrade-619300
	eab4602e62c3d       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   22 seconds ago      Exited              kube-scheduler            1                   2d5954e982a3b       kube-scheduler-kubernetes-upgrade-619300
	ffacf36370615       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   54 seconds ago      Exited              kube-proxy                0                   b83d34e58cb8a       kube-proxy-hjktr
	
	
	==> coredns [12472f694bd14c4a4fb6634ed6710090c7158df84a451925ea3c7fd759ed07f3] <==
	
	
	==> coredns [3b6a55a84fed6f52e113868969946a96e3332dbeccbf6f3475fa98736eba8c50] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [5b829c0719b64df2efe2782cba82410661859b74a5c337f63b15bba4fa4822b3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [e685db9fc36a9f9ce0bda5750174f71bcc210ccb9e17a110d317971eff0d0e58] <==
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-619300
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-619300
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 00:54:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-619300
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 00:55:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 00:55:01 +0000   Tue, 24 Sep 2024 00:54:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 00:55:01 +0000   Tue, 24 Sep 2024 00:54:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 00:55:01 +0000   Tue, 24 Sep 2024 00:54:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 00:55:01 +0000   Tue, 24 Sep 2024 00:54:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.119
	  Hostname:    kubernetes-upgrade-619300
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4a2ed357f33e4615a14db78cf62a3a53
	  System UUID:                4a2ed357-f33e-4615-a14d-b78cf62a3a53
	  Boot ID:                    24d8c737-ecc2-4169-a31f-ba721a58ff38
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-phml9                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     56s
	  kube-system                 coredns-7c65d6cfc9-z7sdk                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     56s
	  kube-system                 etcd-kubernetes-upgrade-619300                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         59s
	  kube-system                 kube-apiserver-kubernetes-upgrade-619300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-619300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-proxy-hjktr                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-scheduler-kubernetes-upgrade-619300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 54s                kube-proxy       
	  Normal  Starting                 18s                kube-proxy       
	  Normal  NodeHasNoDiskPressure    67s (x8 over 67s)  kubelet          Node kubernetes-upgrade-619300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     67s (x7 over 67s)  kubelet          Node kubernetes-upgrade-619300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  67s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  67s (x8 over 67s)  kubelet          Node kubernetes-upgrade-619300 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           56s                node-controller  Node kubernetes-upgrade-619300 event: Registered Node kubernetes-upgrade-619300 in Controller
	  Normal  Starting                 9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x8 over 9s)    kubelet          Node kubernetes-upgrade-619300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x8 over 9s)    kubelet          Node kubernetes-upgrade-619300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x7 over 9s)    kubelet          Node kubernetes-upgrade-619300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           1s                 node-controller  Node kubernetes-upgrade-619300 event: Registered Node kubernetes-upgrade-619300 in Controller
	
	
	==> dmesg <==
	[  +1.579259] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.135185] systemd-fstab-generator[559]: Ignoring "noauto" option for root device
	[  +0.064031] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.072539] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.192292] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.142586] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.310593] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +4.383772] systemd-fstab-generator[716]: Ignoring "noauto" option for root device
	[  +0.067672] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.138969] systemd-fstab-generator[835]: Ignoring "noauto" option for root device
	[Sep24 00:54] systemd-fstab-generator[1220]: Ignoring "noauto" option for root device
	[  +0.095800] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.021863] kauditd_printk_skb: 100 callbacks suppressed
	[ +27.268322] systemd-fstab-generator[2165]: Ignoring "noauto" option for root device
	[  +0.162946] systemd-fstab-generator[2177]: Ignoring "noauto" option for root device
	[  +0.200074] systemd-fstab-generator[2191]: Ignoring "noauto" option for root device
	[  +0.151770] systemd-fstab-generator[2203]: Ignoring "noauto" option for root device
	[  +0.291693] systemd-fstab-generator[2231]: Ignoring "noauto" option for root device
	[  +1.483956] systemd-fstab-generator[2380]: Ignoring "noauto" option for root device
	[  +3.428796] kauditd_printk_skb: 228 callbacks suppressed
	[ +11.163619] systemd-fstab-generator[3447]: Ignoring "noauto" option for root device
	[Sep24 00:55] systemd-fstab-generator[3875]: Ignoring "noauto" option for root device
	[  +0.146248] kauditd_printk_skb: 58 callbacks suppressed
	
	
	==> etcd [1994ff5cede57a4ea1ddb9805b4236c0b187d2b4b2da16111d7dce0a9f0224b5] <==
	{"level":"info","ts":"2024-09-24T00:54:59.041005Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"989a2f5e6d47c652","local-member-id":"d018ee4f4eab0cee","added-peer-id":"d018ee4f4eab0cee","added-peer-peer-urls":["https://192.168.39.119:2380"]}
	{"level":"info","ts":"2024-09-24T00:54:59.041197Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"989a2f5e6d47c652","local-member-id":"d018ee4f4eab0cee","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T00:54:59.043241Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T00:54:59.043762Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-24T00:54:59.054824Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-24T00:54:59.054978Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.119:2380"}
	{"level":"info","ts":"2024-09-24T00:54:59.055020Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.119:2380"}
	{"level":"info","ts":"2024-09-24T00:54:59.055042Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"d018ee4f4eab0cee","initial-advertise-peer-urls":["https://192.168.39.119:2380"],"listen-peer-urls":["https://192.168.39.119:2380"],"advertise-client-urls":["https://192.168.39.119:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.119:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-24T00:54:59.055078Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-24T00:55:00.193235Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d018ee4f4eab0cee is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-24T00:55:00.193285Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d018ee4f4eab0cee became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-24T00:55:00.193315Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d018ee4f4eab0cee received MsgPreVoteResp from d018ee4f4eab0cee at term 3"}
	{"level":"info","ts":"2024-09-24T00:55:00.193329Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d018ee4f4eab0cee became candidate at term 4"}
	{"level":"info","ts":"2024-09-24T00:55:00.193396Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d018ee4f4eab0cee received MsgVoteResp from d018ee4f4eab0cee at term 4"}
	{"level":"info","ts":"2024-09-24T00:55:00.193409Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d018ee4f4eab0cee became leader at term 4"}
	{"level":"info","ts":"2024-09-24T00:55:00.193419Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d018ee4f4eab0cee elected leader d018ee4f4eab0cee at term 4"}
	{"level":"info","ts":"2024-09-24T00:55:00.200396Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-24T00:55:00.200667Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-24T00:55:00.200392Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"d018ee4f4eab0cee","local-member-attributes":"{Name:kubernetes-upgrade-619300 ClientURLs:[https://192.168.39.119:2379]}","request-path":"/0/members/d018ee4f4eab0cee/attributes","cluster-id":"989a2f5e6d47c652","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-24T00:55:00.201021Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-24T00:55:00.201088Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-24T00:55:00.201547Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-24T00:55:00.201863Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-24T00:55:00.202399Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.119:2379"}
	{"level":"info","ts":"2024-09-24T00:55:00.203415Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [81cb509fe10f9d365c1b67f2d5fb6bc55cf044a470a5612e7b6dba1a06fdc30b] <==
	{"level":"info","ts":"2024-09-24T00:54:45.459950Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d018ee4f4eab0cee became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-24T00:54:45.459981Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d018ee4f4eab0cee received MsgPreVoteResp from d018ee4f4eab0cee at term 2"}
	{"level":"info","ts":"2024-09-24T00:54:45.460000Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d018ee4f4eab0cee became candidate at term 3"}
	{"level":"info","ts":"2024-09-24T00:54:45.460012Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d018ee4f4eab0cee received MsgVoteResp from d018ee4f4eab0cee at term 3"}
	{"level":"info","ts":"2024-09-24T00:54:45.460027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d018ee4f4eab0cee became leader at term 3"}
	{"level":"info","ts":"2024-09-24T00:54:45.460034Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d018ee4f4eab0cee elected leader d018ee4f4eab0cee at term 3"}
	{"level":"info","ts":"2024-09-24T00:54:45.464507Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"d018ee4f4eab0cee","local-member-attributes":"{Name:kubernetes-upgrade-619300 ClientURLs:[https://192.168.39.119:2379]}","request-path":"/0/members/d018ee4f4eab0cee/attributes","cluster-id":"989a2f5e6d47c652","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-24T00:54:45.464561Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-24T00:54:45.464924Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-24T00:54:45.471036Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-24T00:54:45.476045Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-24T00:54:45.477955Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-24T00:54:45.481000Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-24T00:54:45.483079Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-24T00:54:45.488350Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.119:2379"}
	{"level":"info","ts":"2024-09-24T00:54:55.946001Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-24T00:54:55.946068Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"kubernetes-upgrade-619300","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.119:2380"],"advertise-client-urls":["https://192.168.39.119:2379"]}
	{"level":"warn","ts":"2024-09-24T00:54:55.946228Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-24T00:54:55.946278Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-24T00:54:55.947924Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.119:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-24T00:54:55.947974Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.119:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-24T00:54:55.948031Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"d018ee4f4eab0cee","current-leader-member-id":"d018ee4f4eab0cee"}
	{"level":"info","ts":"2024-09-24T00:54:55.951974Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.119:2380"}
	{"level":"info","ts":"2024-09-24T00:54:55.952116Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.119:2380"}
	{"level":"info","ts":"2024-09-24T00:54:55.952132Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"kubernetes-upgrade-619300","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.119:2380"],"advertise-client-urls":["https://192.168.39.119:2379"]}
	
	
	==> kernel <==
	 00:55:06 up 1 min,  0 users,  load average: 1.42, 0.38, 0.13
	Linux kubernetes-upgrade-619300 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [20777011a72e70ba919a4881774976eb696478952272908ab22a3d12f0113ca5] <==
	I0924 00:54:47.625564       1 system_namespaces_controller.go:76] Shutting down system namespaces controller
	I0924 00:54:47.628094       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0924 00:54:47.633222       1 controller.go:157] Shutting down quota evaluator
	I0924 00:54:47.633311       1 controller.go:176] quota evaluator worker shutdown
	I0924 00:54:47.633431       1 controller.go:176] quota evaluator worker shutdown
	I0924 00:54:47.633507       1 controller.go:176] quota evaluator worker shutdown
	I0924 00:54:47.633532       1 controller.go:176] quota evaluator worker shutdown
	I0924 00:54:47.633554       1 controller.go:176] quota evaluator worker shutdown
	I0924 00:54:47.634056       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	W0924 00:54:48.445481       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0924 00:54:48.446002       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	E0924 00:54:49.446350       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0924 00:54:49.446467       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	W0924 00:54:50.445758       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0924 00:54:50.445781       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	E0924 00:54:51.445710       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0924 00:54:51.445907       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0924 00:54:52.446088       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0924 00:54:52.446235       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0924 00:54:53.446838       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0924 00:54:53.447354       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	W0924 00:54:54.446058       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0924 00:54:54.446239       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	E0924 00:54:55.446021       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0924 00:54:55.446260       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	
	
	==> kube-apiserver [bd75287063fe5b71fe7b741c04ba163b42c31f388dd7a84d0b8d9100f5f903b1] <==
	I0924 00:55:01.548567       1 aggregator.go:171] initial CRD sync complete...
	I0924 00:55:01.548594       1 autoregister_controller.go:144] Starting autoregister controller
	I0924 00:55:01.548600       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0924 00:55:01.550465       1 shared_informer.go:320] Caches are synced for configmaps
	I0924 00:55:01.596013       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0924 00:55:01.601499       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0924 00:55:01.601544       1 policy_source.go:224] refreshing policies
	I0924 00:55:01.645058       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0924 00:55:01.645038       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0924 00:55:01.645253       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0924 00:55:01.645593       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0924 00:55:01.646059       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0924 00:55:01.646884       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0924 00:55:01.648853       1 cache.go:39] Caches are synced for autoregister controller
	I0924 00:55:01.650548       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0924 00:55:01.656227       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0924 00:55:02.269393       1 controller.go:615] quota admission added evaluator for: endpoints
	I0924 00:55:02.449818       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0924 00:55:02.770221       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.119]
	I0924 00:55:02.778589       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0924 00:55:03.213777       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0924 00:55:03.228039       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0924 00:55:03.272899       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0924 00:55:03.395612       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0924 00:55:03.404367       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [d22beba76fcc66acfcb4ce98155813232d1428822cf839666135473fbe7e3be5] <==
	I0924 00:54:44.735972       1 serving.go:386] Generated self-signed cert in-memory
	I0924 00:54:46.056358       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0924 00:54:46.056464       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 00:54:46.058415       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0924 00:54:46.060927       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0924 00:54:46.061488       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0924 00:54:46.062022       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	
	
	==> kube-controller-manager [e76b35a60f5c06a99cf566e59bcbd8fd9a1e1648c80ecc3cebde44d6c21f24bd] <==
	I0924 00:55:04.913562       1 shared_informer.go:320] Caches are synced for GC
	I0924 00:55:04.916355       1 shared_informer.go:320] Caches are synced for namespace
	I0924 00:55:04.916717       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="27.675005ms"
	I0924 00:55:04.917020       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="123.323µs"
	I0924 00:55:04.920594       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0924 00:55:04.936135       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0924 00:55:04.936366       1 shared_informer.go:320] Caches are synced for deployment
	I0924 00:55:04.946892       1 shared_informer.go:320] Caches are synced for endpoint
	I0924 00:55:04.964783       1 shared_informer.go:320] Caches are synced for disruption
	I0924 00:55:05.024366       1 shared_informer.go:320] Caches are synced for daemon sets
	I0924 00:55:05.030847       1 shared_informer.go:320] Caches are synced for taint
	I0924 00:55:05.031252       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0924 00:55:05.031511       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-619300"
	I0924 00:55:05.031659       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0924 00:55:05.075745       1 shared_informer.go:320] Caches are synced for attach detach
	I0924 00:55:05.080228       1 shared_informer.go:320] Caches are synced for resource quota
	I0924 00:55:05.104548       1 shared_informer.go:320] Caches are synced for expand
	I0924 00:55:05.106989       1 shared_informer.go:320] Caches are synced for ephemeral
	I0924 00:55:05.111367       1 shared_informer.go:320] Caches are synced for persistent volume
	I0924 00:55:05.118375       1 shared_informer.go:320] Caches are synced for PVC protection
	I0924 00:55:05.150703       1 shared_informer.go:320] Caches are synced for stateful set
	I0924 00:55:05.160977       1 shared_informer.go:320] Caches are synced for resource quota
	I0924 00:55:05.553480       1 shared_informer.go:320] Caches are synced for garbage collector
	I0924 00:55:05.553524       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0924 00:55:05.568638       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-proxy [0b27a87a2dc17f3a539309884c853f99418ac4b629b4f73bf4a3f1ccc6b7d663] <==
	E0924 00:54:47.654486       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-619300&limit=500&resourceVersion=0\": dial tcp 192.168.39.119:8443: connect: connection refused" logger="UnhandledError"
	W0924 00:54:47.654623       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.119:8443: connect: connection refused
	E0924 00:54:47.654692       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.119:8443: connect: connection refused" logger="UnhandledError"
	W0924 00:54:47.654791       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.119:8443: connect: connection refused
	E0924 00:54:47.654886       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.119:8443: connect: connection refused" logger="UnhandledError"
	W0924 00:54:48.803667       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.119:8443: connect: connection refused
	E0924 00:54:48.803735       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.119:8443: connect: connection refused" logger="UnhandledError"
	W0924 00:54:48.981855       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.119:8443: connect: connection refused
	E0924 00:54:48.982025       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.119:8443: connect: connection refused" logger="UnhandledError"
	W0924 00:54:49.163661       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-619300&limit=500&resourceVersion=0": dial tcp 192.168.39.119:8443: connect: connection refused
	E0924 00:54:49.163755       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-619300&limit=500&resourceVersion=0\": dial tcp 192.168.39.119:8443: connect: connection refused" logger="UnhandledError"
	W0924 00:54:51.023642       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.119:8443: connect: connection refused
	E0924 00:54:51.023748       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.119:8443: connect: connection refused" logger="UnhandledError"
	W0924 00:54:51.735783       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-619300&limit=500&resourceVersion=0": dial tcp 192.168.39.119:8443: connect: connection refused
	E0924 00:54:51.735941       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-619300&limit=500&resourceVersion=0\": dial tcp 192.168.39.119:8443: connect: connection refused" logger="UnhandledError"
	W0924 00:54:52.078803       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.119:8443: connect: connection refused
	E0924 00:54:52.078886       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.119:8443: connect: connection refused" logger="UnhandledError"
	W0924 00:54:56.669414       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-619300&limit=500&resourceVersion=0": dial tcp 192.168.39.119:8443: connect: connection refused
	E0924 00:54:56.669463       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-619300&limit=500&resourceVersion=0\": dial tcp 192.168.39.119:8443: connect: connection refused" logger="UnhandledError"
	W0924 00:54:57.084280       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.119:8443: connect: connection refused
	E0924 00:54:57.084346       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.119:8443: connect: connection refused" logger="UnhandledError"
	W0924 00:54:57.294062       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.119:8443: connect: connection refused
	E0924 00:54:57.294133       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.119:8443: connect: connection refused" logger="UnhandledError"
	I0924 00:55:03.751716       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0924 00:55:06.252825       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [ffacf363706158dc16ea31bac4c0f2e2e027c543bf6f6b311768c1231c96be14] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0924 00:54:12.038222       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0924 00:54:12.073926       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.119"]
	E0924 00:54:12.074014       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0924 00:54:12.121539       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0924 00:54:12.121616       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0924 00:54:12.121652       1 server_linux.go:169] "Using iptables Proxier"
	I0924 00:54:12.126614       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0924 00:54:12.127446       1 server.go:483] "Version info" version="v1.31.1"
	I0924 00:54:12.127490       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 00:54:12.130549       1 config.go:199] "Starting service config controller"
	I0924 00:54:12.131061       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0924 00:54:12.131278       1 config.go:105] "Starting endpoint slice config controller"
	I0924 00:54:12.131319       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0924 00:54:12.132648       1 config.go:328] "Starting node config controller"
	I0924 00:54:12.132704       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0924 00:54:12.231631       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0924 00:54:12.231779       1 shared_informer.go:320] Caches are synced for service config
	I0924 00:54:12.233400       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5b4f5794df9d93d91e946891a93f047e4916f593ad77bea82a26aafba4269d45] <==
	I0924 00:54:59.372786       1 serving.go:386] Generated self-signed cert in-memory
	W0924 00:55:01.471023       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0924 00:55:01.471063       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0924 00:55:01.471073       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0924 00:55:01.471106       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0924 00:55:01.557108       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0924 00:55:01.557185       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 00:55:01.560493       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0924 00:55:01.560593       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0924 00:55:01.560899       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0924 00:55:01.562855       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0924 00:55:01.663464       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [eab4602e62c3dd9824a9eb13d4f91b047b48f1853a1937f1281ada38d22ca917] <==
	I0924 00:54:45.541255       1 serving.go:386] Generated self-signed cert in-memory
	W0924 00:54:47.474679       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0924 00:54:47.474807       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0924 00:54:47.474908       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0924 00:54:47.474944       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0924 00:54:47.573661       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0924 00:54:47.574501       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 00:54:47.585680       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0924 00:54:47.586031       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0924 00:54:47.588273       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0924 00:54:47.586798       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0924 00:54:47.689343       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0924 00:54:56.217123       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0924 00:54:56.217329       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 24 00:54:58 kubernetes-upgrade-619300 kubelet[3454]: I0924 00:54:58.190890    3454 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-619300"
	Sep 24 00:54:58 kubernetes-upgrade-619300 kubelet[3454]: E0924 00:54:58.192109    3454 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.119:8443: connect: connection refused" node="kubernetes-upgrade-619300"
	Sep 24 00:54:58 kubernetes-upgrade-619300 kubelet[3454]: I0924 00:54:58.229260    3454 scope.go:117] "RemoveContainer" containerID="d22beba76fcc66acfcb4ce98155813232d1428822cf839666135473fbe7e3be5"
	Sep 24 00:54:58 kubernetes-upgrade-619300 kubelet[3454]: I0924 00:54:58.235817    3454 scope.go:117] "RemoveContainer" containerID="20777011a72e70ba919a4881774976eb696478952272908ab22a3d12f0113ca5"
	Sep 24 00:54:58 kubernetes-upgrade-619300 kubelet[3454]: I0924 00:54:58.238481    3454 scope.go:117] "RemoveContainer" containerID="81cb509fe10f9d365c1b67f2d5fb6bc55cf044a470a5612e7b6dba1a06fdc30b"
	Sep 24 00:54:58 kubernetes-upgrade-619300 kubelet[3454]: I0924 00:54:58.246859    3454 scope.go:117] "RemoveContainer" containerID="eab4602e62c3dd9824a9eb13d4f91b047b48f1853a1937f1281ada38d22ca917"
	Sep 24 00:54:58 kubernetes-upgrade-619300 kubelet[3454]: E0924 00:54:58.413659    3454 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-619300?timeout=10s\": dial tcp 192.168.39.119:8443: connect: connection refused" interval="800ms"
	Sep 24 00:54:58 kubernetes-upgrade-619300 kubelet[3454]: I0924 00:54:58.593542    3454 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-619300"
	Sep 24 00:54:58 kubernetes-upgrade-619300 kubelet[3454]: E0924 00:54:58.594496    3454 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.119:8443: connect: connection refused" node="kubernetes-upgrade-619300"
	Sep 24 00:54:59 kubernetes-upgrade-619300 kubelet[3454]: I0924 00:54:59.396116    3454 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-619300"
	Sep 24 00:55:01 kubernetes-upgrade-619300 kubelet[3454]: I0924 00:55:01.696286    3454 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-619300"
	Sep 24 00:55:01 kubernetes-upgrade-619300 kubelet[3454]: I0924 00:55:01.696687    3454 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-619300"
	Sep 24 00:55:01 kubernetes-upgrade-619300 kubelet[3454]: I0924 00:55:01.696757    3454 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 24 00:55:01 kubernetes-upgrade-619300 kubelet[3454]: I0924 00:55:01.698036    3454 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 24 00:55:01 kubernetes-upgrade-619300 kubelet[3454]: I0924 00:55:01.786976    3454 apiserver.go:52] "Watching apiserver"
	Sep 24 00:55:01 kubernetes-upgrade-619300 kubelet[3454]: I0924 00:55:01.807765    3454 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 24 00:55:01 kubernetes-upgrade-619300 kubelet[3454]: I0924 00:55:01.847889    3454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/fa0b89ed-02d7-45c0-a9a3-277014540615-tmp\") pod \"storage-provisioner\" (UID: \"fa0b89ed-02d7-45c0-a9a3-277014540615\") " pod="kube-system/storage-provisioner"
	Sep 24 00:55:01 kubernetes-upgrade-619300 kubelet[3454]: I0924 00:55:01.847978    3454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2ac02a34-1f6b-4743-8b33-645bd8cf8cb7-lib-modules\") pod \"kube-proxy-hjktr\" (UID: \"2ac02a34-1f6b-4743-8b33-645bd8cf8cb7\") " pod="kube-system/kube-proxy-hjktr"
	Sep 24 00:55:01 kubernetes-upgrade-619300 kubelet[3454]: I0924 00:55:01.848014    3454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2ac02a34-1f6b-4743-8b33-645bd8cf8cb7-xtables-lock\") pod \"kube-proxy-hjktr\" (UID: \"2ac02a34-1f6b-4743-8b33-645bd8cf8cb7\") " pod="kube-system/kube-proxy-hjktr"
	Sep 24 00:55:02 kubernetes-upgrade-619300 kubelet[3454]: E0924 00:55:02.017542    3454 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-kubernetes-upgrade-619300\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-619300"
	Sep 24 00:55:02 kubernetes-upgrade-619300 kubelet[3454]: E0924 00:55:02.018011    3454 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"etcd-kubernetes-upgrade-619300\" already exists" pod="kube-system/etcd-kubernetes-upgrade-619300"
	Sep 24 00:55:02 kubernetes-upgrade-619300 kubelet[3454]: I0924 00:55:02.091884    3454 scope.go:117] "RemoveContainer" containerID="12472f694bd14c4a4fb6634ed6710090c7158df84a451925ea3c7fd759ed07f3"
	Sep 24 00:55:02 kubernetes-upgrade-619300 kubelet[3454]: I0924 00:55:02.092216    3454 scope.go:117] "RemoveContainer" containerID="41cceebe8b6802b592a7b58833d203d2964a377b6aef45541961bbcfe6bfa100"
	Sep 24 00:55:02 kubernetes-upgrade-619300 kubelet[3454]: I0924 00:55:02.092698    3454 scope.go:117] "RemoveContainer" containerID="e685db9fc36a9f9ce0bda5750174f71bcc210ccb9e17a110d317971eff0d0e58"
	Sep 24 00:55:04 kubernetes-upgrade-619300 kubelet[3454]: I0924 00:55:04.122260    3454 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	
	
	==> storage-provisioner [41cceebe8b6802b592a7b58833d203d2964a377b6aef45541961bbcfe6bfa100] <==
	I0924 00:54:45.642403       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	
	
	==> storage-provisioner [72b502d6812744ae5aef42bac1a46b405faea491a7d829daa4bbe91337594db4] <==
	I0924 00:55:02.219253       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0924 00:55:02.247548       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0924 00:55:02.247657       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0924 00:55:02.277545       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0924 00:55:02.277695       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-619300_1c6f300d-42b0-4a07-9063-98b1c785aad5!
	I0924 00:55:02.280223       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d3d1d166-4a70-4499-8c03-81d8f2d2d11c", APIVersion:"v1", ResourceVersion:"415", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-619300_1c6f300d-42b0-4a07-9063-98b1c785aad5 became leader
	I0924 00:55:02.378912       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-619300_1c6f300d-42b0-4a07-9063-98b1c785aad5!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-619300 -n kubernetes-upgrade-619300
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-619300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-619300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-619300
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-619300: (1.252978717s)
--- FAIL: TestKubernetesUpgrade (392.15s)
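For local triage, the post-mortem checks recorded above can be replayed by hand. A minimal sketch, assuming the out/minikube-linux-amd64 binary from this workspace and a kubernetes-upgrade-619300 profile that has not yet been deleted (the harness removes it at the end of this block):

	# Same API-server status query as helpers_test.go:254
	out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-619300 -n kubernetes-upgrade-619300
	# Same non-Running pod listing as helpers_test.go:261
	kubectl --context kubernetes-upgrade-619300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
	# Same cleanup as helpers_test.go:178
	out/minikube-linux-amd64 delete -p kubernetes-upgrade-619300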

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (29.36s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-198857 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-198857 --driver=kvm2  --container-runtime=crio: signal: killed (29.071725198s)

                                                
                                                
-- stdout --
	* [NoKubernetes-198857] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19696
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19696-7623/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7623/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-198857
	* Restarting existing kvm2 VM for "NoKubernetes-198857" ...

                                                
                                                
-- /stdout --
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-linux-amd64 start -p NoKubernetes-198857 --driver=kvm2  --container-runtime=crio" : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p NoKubernetes-198857 -n NoKubernetes-198857
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p NoKubernetes-198857 -n NoKubernetes-198857: exit status 6 (283.050845ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0924 00:52:15.109812   55939 status.go:451] kubeconfig endpoint: get endpoint: "NoKubernetes-198857" does not appear in /home/jenkins/minikube-integration/19696-7623/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "NoKubernetes-198857" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (29.36s)
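The status check above reports a stale kubectl context and points at `minikube update-context` as the fix. A minimal sketch of that suggestion, assuming the NoKubernetes-198857 profile still exists (whether it helps for a profile started without Kubernetes, where no kubeconfig endpoint is expected, is an open question; the underlying failure here is simply that the start was killed after ~29s):

	# Rewrite the kubeconfig entry for this profile, as the warning suggests
	out/minikube-linux-amd64 update-context -p NoKubernetes-198857
	# Confirm which context kubectl now resolves to
	kubectl config current-context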

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (272.4s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-171598 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-171598 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m32.113303859s)

                                                
                                                
-- stdout --
	* [old-k8s-version-171598] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19696
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19696-7623/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7623/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-171598" primary control-plane node in "old-k8s-version-171598" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0924 00:54:35.667001   58197 out.go:345] Setting OutFile to fd 1 ...
	I0924 00:54:35.667173   58197 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:54:35.667183   58197 out.go:358] Setting ErrFile to fd 2...
	I0924 00:54:35.667190   58197 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:54:35.667452   58197 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7623/.minikube/bin
	I0924 00:54:35.668309   58197 out.go:352] Setting JSON to false
	I0924 00:54:35.669684   58197 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5820,"bootTime":1727133456,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0924 00:54:35.669768   58197 start.go:139] virtualization: kvm guest
	I0924 00:54:35.672456   58197 out.go:177] * [old-k8s-version-171598] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0924 00:54:35.674335   58197 notify.go:220] Checking for updates...
	I0924 00:54:35.675002   58197 out.go:177]   - MINIKUBE_LOCATION=19696
	I0924 00:54:35.676627   58197 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 00:54:35.678285   58197 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 00:54:35.679698   58197 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7623/.minikube
	I0924 00:54:35.681112   58197 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0924 00:54:35.682433   58197 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 00:54:35.684541   58197 config.go:182] Loaded profile config "cert-expiration-811247": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 00:54:35.684712   58197 config.go:182] Loaded profile config "kubernetes-upgrade-619300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 00:54:35.684877   58197 config.go:182] Loaded profile config "stopped-upgrade-075175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0924 00:54:35.684991   58197 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 00:54:35.733634   58197 out.go:177] * Using the kvm2 driver based on user configuration
	I0924 00:54:35.735260   58197 start.go:297] selected driver: kvm2
	I0924 00:54:35.735281   58197 start.go:901] validating driver "kvm2" against <nil>
	I0924 00:54:35.735296   58197 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 00:54:35.736373   58197 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 00:54:35.736487   58197 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19696-7623/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0924 00:54:35.755975   58197 install.go:137] /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0924 00:54:35.756036   58197 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0924 00:54:35.756379   58197 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 00:54:35.756425   58197 cni.go:84] Creating CNI manager for ""
	I0924 00:54:35.756494   58197 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 00:54:35.756510   58197 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0924 00:54:35.756602   58197 start.go:340] cluster config:
	{Name:old-k8s-version-171598 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-171598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 00:54:35.756753   58197 iso.go:125] acquiring lock: {Name:mk8a983e49e920eac32e0f79c34143c3d478115e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 00:54:35.759240   58197 out.go:177] * Starting "old-k8s-version-171598" primary control-plane node in "old-k8s-version-171598" cluster
	I0924 00:54:35.760765   58197 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0924 00:54:35.760837   58197 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0924 00:54:35.760864   58197 cache.go:56] Caching tarball of preloaded images
	I0924 00:54:35.760989   58197 preload.go:172] Found /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0924 00:54:35.761003   58197 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0924 00:54:35.761131   58197 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/config.json ...
	I0924 00:54:35.761153   58197 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/config.json: {Name:mk3c3ad3d1ff46951caf36a7369b876fc77e57ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:54:35.761357   58197 start.go:360] acquireMachinesLock for old-k8s-version-171598: {Name:mkdd0eb053efc5a47fd01c8410d7f603dccb8d0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 00:54:40.265458   58197 start.go:364] duration metric: took 4.504052994s to acquireMachinesLock for "old-k8s-version-171598"
	I0924 00:54:40.265525   58197 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-171598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-171598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 00:54:40.265644   58197 start.go:125] createHost starting for "" (driver="kvm2")
	I0924 00:54:40.566118   58197 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0924 00:54:40.566457   58197 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 00:54:40.566533   58197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:54:40.582838   58197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35153
	I0924 00:54:40.583440   58197 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:54:40.584078   58197 main.go:141] libmachine: Using API Version  1
	I0924 00:54:40.584100   58197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:54:40.584486   58197 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:54:40.584709   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetMachineName
	I0924 00:54:40.584838   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 00:54:40.584972   58197 start.go:159] libmachine.API.Create for "old-k8s-version-171598" (driver="kvm2")
	I0924 00:54:40.585009   58197 client.go:168] LocalClient.Create starting
	I0924 00:54:40.585045   58197 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem
	I0924 00:54:40.585084   58197 main.go:141] libmachine: Decoding PEM data...
	I0924 00:54:40.585110   58197 main.go:141] libmachine: Parsing certificate...
	I0924 00:54:40.585257   58197 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem
	I0924 00:54:40.585305   58197 main.go:141] libmachine: Decoding PEM data...
	I0924 00:54:40.585324   58197 main.go:141] libmachine: Parsing certificate...
	I0924 00:54:40.585355   58197 main.go:141] libmachine: Running pre-create checks...
	I0924 00:54:40.585366   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .PreCreateCheck
	I0924 00:54:40.585738   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetConfigRaw
	I0924 00:54:40.586129   58197 main.go:141] libmachine: Creating machine...
	I0924 00:54:40.586142   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .Create
	I0924 00:54:40.586296   58197 main.go:141] libmachine: (old-k8s-version-171598) Creating KVM machine...
	I0924 00:54:40.587557   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | found existing default KVM network
	I0924 00:54:40.589250   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 00:54:40.589050   58277 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:c7:37:12} reservation:<nil>}
	I0924 00:54:40.591884   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 00:54:40.591746   58277 network.go:209] skipping subnet 192.168.50.0/24 that is reserved: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0924 00:54:40.592965   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 00:54:40.592857   58277 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:96:6f:04} reservation:<nil>}
	I0924 00:54:40.594116   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 00:54:40.594024   58277 network.go:211] skipping subnet 192.168.72.0/24 that is taken: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.72.1 IfaceMTU:1500 IfaceMAC:52:54:00:50:da:eb} reservation:<nil>}
	I0924 00:54:40.595400   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 00:54:40.595295   58277 network.go:206] using free private subnet 192.168.83.0/24: &{IP:192.168.83.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.83.0/24 Gateway:192.168.83.1 ClientMin:192.168.83.2 ClientMax:192.168.83.254 Broadcast:192.168.83.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000113d00}
	I0924 00:54:40.595430   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | created network xml: 
	I0924 00:54:40.595455   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | <network>
	I0924 00:54:40.595474   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG |   <name>mk-old-k8s-version-171598</name>
	I0924 00:54:40.595513   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG |   <dns enable='no'/>
	I0924 00:54:40.595541   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG |   
	I0924 00:54:40.595558   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG |   <ip address='192.168.83.1' netmask='255.255.255.0'>
	I0924 00:54:40.595573   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG |     <dhcp>
	I0924 00:54:40.595586   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG |       <range start='192.168.83.2' end='192.168.83.253'/>
	I0924 00:54:40.595596   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG |     </dhcp>
	I0924 00:54:40.595605   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG |   </ip>
	I0924 00:54:40.595611   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG |   
	I0924 00:54:40.595631   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | </network>
	I0924 00:54:40.595642   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | 
	I0924 00:54:40.967604   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | trying to create private KVM network mk-old-k8s-version-171598 192.168.83.0/24...
	I0924 00:54:41.050218   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | private KVM network mk-old-k8s-version-171598 192.168.83.0/24 created
	I0924 00:54:41.050253   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 00:54:41.050103   58277 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19696-7623/.minikube
	I0924 00:54:41.050268   58197 main.go:141] libmachine: (old-k8s-version-171598) Setting up store path in /home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598 ...
	I0924 00:54:41.050313   58197 main.go:141] libmachine: (old-k8s-version-171598) Building disk image from file:///home/jenkins/minikube-integration/19696-7623/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0924 00:54:41.050332   58197 main.go:141] libmachine: (old-k8s-version-171598) Downloading /home/jenkins/minikube-integration/19696-7623/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19696-7623/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0924 00:54:41.316516   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 00:54:41.316357   58277 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa...
	I0924 00:54:41.926895   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 00:54:41.926728   58277 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/old-k8s-version-171598.rawdisk...
	I0924 00:54:41.926932   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | Writing magic tar header
	I0924 00:54:41.926957   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | Writing SSH key tar header
	I0924 00:54:41.927046   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 00:54:41.926936   58277 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598 ...
	I0924 00:54:41.953667   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598
	I0924 00:54:41.953727   58197 main.go:141] libmachine: (old-k8s-version-171598) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598 (perms=drwx------)
	I0924 00:54:41.953741   58197 main.go:141] libmachine: (old-k8s-version-171598) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623/.minikube/machines (perms=drwxr-xr-x)
	I0924 00:54:41.953752   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623/.minikube/machines
	I0924 00:54:41.953765   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623/.minikube
	I0924 00:54:41.953775   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623
	I0924 00:54:41.953785   58197 main.go:141] libmachine: (old-k8s-version-171598) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623/.minikube (perms=drwxr-xr-x)
	I0924 00:54:41.953801   58197 main.go:141] libmachine: (old-k8s-version-171598) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623 (perms=drwxrwxr-x)
	I0924 00:54:41.953810   58197 main.go:141] libmachine: (old-k8s-version-171598) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0924 00:54:41.953825   58197 main.go:141] libmachine: (old-k8s-version-171598) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0924 00:54:41.953837   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0924 00:54:41.953846   58197 main.go:141] libmachine: (old-k8s-version-171598) Creating domain...
	I0924 00:54:41.953861   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | Checking permissions on dir: /home/jenkins
	I0924 00:54:41.953875   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | Checking permissions on dir: /home
	I0924 00:54:41.953888   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | Skipping /home - not owner
	I0924 00:54:41.955206   58197 main.go:141] libmachine: (old-k8s-version-171598) define libvirt domain using xml: 
	I0924 00:54:41.955234   58197 main.go:141] libmachine: (old-k8s-version-171598) <domain type='kvm'>
	I0924 00:54:41.955257   58197 main.go:141] libmachine: (old-k8s-version-171598)   <name>old-k8s-version-171598</name>
	I0924 00:54:41.955269   58197 main.go:141] libmachine: (old-k8s-version-171598)   <memory unit='MiB'>2200</memory>
	I0924 00:54:41.955278   58197 main.go:141] libmachine: (old-k8s-version-171598)   <vcpu>2</vcpu>
	I0924 00:54:41.955286   58197 main.go:141] libmachine: (old-k8s-version-171598)   <features>
	I0924 00:54:41.955313   58197 main.go:141] libmachine: (old-k8s-version-171598)     <acpi/>
	I0924 00:54:41.955325   58197 main.go:141] libmachine: (old-k8s-version-171598)     <apic/>
	I0924 00:54:41.955360   58197 main.go:141] libmachine: (old-k8s-version-171598)     <pae/>
	I0924 00:54:41.955384   58197 main.go:141] libmachine: (old-k8s-version-171598)     
	I0924 00:54:41.955393   58197 main.go:141] libmachine: (old-k8s-version-171598)   </features>
	I0924 00:54:41.955400   58197 main.go:141] libmachine: (old-k8s-version-171598)   <cpu mode='host-passthrough'>
	I0924 00:54:41.955408   58197 main.go:141] libmachine: (old-k8s-version-171598)   
	I0924 00:54:41.955417   58197 main.go:141] libmachine: (old-k8s-version-171598)   </cpu>
	I0924 00:54:41.955424   58197 main.go:141] libmachine: (old-k8s-version-171598)   <os>
	I0924 00:54:41.955431   58197 main.go:141] libmachine: (old-k8s-version-171598)     <type>hvm</type>
	I0924 00:54:41.955441   58197 main.go:141] libmachine: (old-k8s-version-171598)     <boot dev='cdrom'/>
	I0924 00:54:41.955459   58197 main.go:141] libmachine: (old-k8s-version-171598)     <boot dev='hd'/>
	I0924 00:54:41.955470   58197 main.go:141] libmachine: (old-k8s-version-171598)     <bootmenu enable='no'/>
	I0924 00:54:41.955477   58197 main.go:141] libmachine: (old-k8s-version-171598)   </os>
	I0924 00:54:41.955484   58197 main.go:141] libmachine: (old-k8s-version-171598)   <devices>
	I0924 00:54:41.955494   58197 main.go:141] libmachine: (old-k8s-version-171598)     <disk type='file' device='cdrom'>
	I0924 00:54:41.955506   58197 main.go:141] libmachine: (old-k8s-version-171598)       <source file='/home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/boot2docker.iso'/>
	I0924 00:54:41.955522   58197 main.go:141] libmachine: (old-k8s-version-171598)       <target dev='hdc' bus='scsi'/>
	I0924 00:54:41.955534   58197 main.go:141] libmachine: (old-k8s-version-171598)       <readonly/>
	I0924 00:54:41.955540   58197 main.go:141] libmachine: (old-k8s-version-171598)     </disk>
	I0924 00:54:41.955549   58197 main.go:141] libmachine: (old-k8s-version-171598)     <disk type='file' device='disk'>
	I0924 00:54:41.955557   58197 main.go:141] libmachine: (old-k8s-version-171598)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0924 00:54:41.955571   58197 main.go:141] libmachine: (old-k8s-version-171598)       <source file='/home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/old-k8s-version-171598.rawdisk'/>
	I0924 00:54:41.955581   58197 main.go:141] libmachine: (old-k8s-version-171598)       <target dev='hda' bus='virtio'/>
	I0924 00:54:41.955589   58197 main.go:141] libmachine: (old-k8s-version-171598)     </disk>
	I0924 00:54:41.955599   58197 main.go:141] libmachine: (old-k8s-version-171598)     <interface type='network'>
	I0924 00:54:41.955628   58197 main.go:141] libmachine: (old-k8s-version-171598)       <source network='mk-old-k8s-version-171598'/>
	I0924 00:54:41.955658   58197 main.go:141] libmachine: (old-k8s-version-171598)       <model type='virtio'/>
	I0924 00:54:41.955674   58197 main.go:141] libmachine: (old-k8s-version-171598)     </interface>
	I0924 00:54:41.955696   58197 main.go:141] libmachine: (old-k8s-version-171598)     <interface type='network'>
	I0924 00:54:41.955705   58197 main.go:141] libmachine: (old-k8s-version-171598)       <source network='default'/>
	I0924 00:54:41.955711   58197 main.go:141] libmachine: (old-k8s-version-171598)       <model type='virtio'/>
	I0924 00:54:41.955724   58197 main.go:141] libmachine: (old-k8s-version-171598)     </interface>
	I0924 00:54:41.955731   58197 main.go:141] libmachine: (old-k8s-version-171598)     <serial type='pty'>
	I0924 00:54:41.955738   58197 main.go:141] libmachine: (old-k8s-version-171598)       <target port='0'/>
	I0924 00:54:41.955743   58197 main.go:141] libmachine: (old-k8s-version-171598)     </serial>
	I0924 00:54:41.955773   58197 main.go:141] libmachine: (old-k8s-version-171598)     <console type='pty'>
	I0924 00:54:41.955802   58197 main.go:141] libmachine: (old-k8s-version-171598)       <target type='serial' port='0'/>
	I0924 00:54:41.955814   58197 main.go:141] libmachine: (old-k8s-version-171598)     </console>
	I0924 00:54:41.955828   58197 main.go:141] libmachine: (old-k8s-version-171598)     <rng model='virtio'>
	I0924 00:54:41.955842   58197 main.go:141] libmachine: (old-k8s-version-171598)       <backend model='random'>/dev/random</backend>
	I0924 00:54:41.955852   58197 main.go:141] libmachine: (old-k8s-version-171598)     </rng>
	I0924 00:54:41.955860   58197 main.go:141] libmachine: (old-k8s-version-171598)     
	I0924 00:54:41.955872   58197 main.go:141] libmachine: (old-k8s-version-171598)     
	I0924 00:54:41.955883   58197 main.go:141] libmachine: (old-k8s-version-171598)   </devices>
	I0924 00:54:41.955890   58197 main.go:141] libmachine: (old-k8s-version-171598) </domain>
	I0924 00:54:41.955899   58197 main.go:141] libmachine: (old-k8s-version-171598) 
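The XML dump above is the libvirt domain definition the kvm2 driver submits through the libvirt API before booting the VM. As a rough sketch of the same step done from the command line (assuming virsh is installed and domain.xml holds XML like the above; the driver itself talks to libvirt directly and does not shell out to virsh):

```go
package main

import (
	"fmt"
	"os/exec"
)

// defineAndStart registers a domain XML file with libvirt and boots it,
// roughly the step the log shows via the libvirt API. Paths and the domain
// name are placeholders for illustration only.
func defineAndStart(xmlPath, name string) error {
	if out, err := exec.Command("virsh", "define", xmlPath).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh define: %v: %s", err, out)
	}
	if out, err := exec.Command("virsh", "start", name).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh start: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := defineAndStart("domain.xml", "old-k8s-version-171598"); err != nil {
		fmt.Println(err)
	}
}
```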
	I0924 00:54:42.083553   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:14:78:aa in network default
	I0924 00:54:42.084310   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:54:42.084367   58197 main.go:141] libmachine: (old-k8s-version-171598) Ensuring networks are active...
	I0924 00:54:42.085211   58197 main.go:141] libmachine: (old-k8s-version-171598) Ensuring network default is active
	I0924 00:54:42.085682   58197 main.go:141] libmachine: (old-k8s-version-171598) Ensuring network mk-old-k8s-version-171598 is active
	I0924 00:54:42.086360   58197 main.go:141] libmachine: (old-k8s-version-171598) Getting domain xml...
	I0924 00:54:42.087218   58197 main.go:141] libmachine: (old-k8s-version-171598) Creating domain...
	I0924 00:54:43.610949   58197 main.go:141] libmachine: (old-k8s-version-171598) Waiting to get IP...
	I0924 00:54:43.611798   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:54:43.612513   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 00:54:43.612572   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 00:54:43.612482   58277 retry.go:31] will retry after 224.42885ms: waiting for machine to come up
	I0924 00:54:43.839150   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:54:43.839857   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 00:54:43.839891   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 00:54:43.839799   58277 retry.go:31] will retry after 275.98256ms: waiting for machine to come up
	I0924 00:54:44.117653   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:54:44.118446   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 00:54:44.118467   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 00:54:44.118404   58277 retry.go:31] will retry after 320.610525ms: waiting for machine to come up
	I0924 00:54:44.441174   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:54:44.441683   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 00:54:44.441703   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 00:54:44.441639   58277 retry.go:31] will retry after 412.121611ms: waiting for machine to come up
	I0924 00:54:44.855437   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:54:44.856018   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 00:54:44.856045   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 00:54:44.855976   58277 retry.go:31] will retry after 469.110734ms: waiting for machine to come up
	I0924 00:54:45.326703   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:54:45.327314   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 00:54:45.327348   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 00:54:45.327245   58277 retry.go:31] will retry after 732.613161ms: waiting for machine to come up
	I0924 00:54:46.061493   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:54:46.061964   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 00:54:46.061993   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 00:54:46.061927   58277 retry.go:31] will retry after 1.125990562s: waiting for machine to come up
	I0924 00:54:47.189213   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:54:47.189678   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 00:54:47.189728   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 00:54:47.189627   58277 retry.go:31] will retry after 1.34649733s: waiting for machine to come up
	I0924 00:54:48.537420   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:54:48.537986   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 00:54:48.538006   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 00:54:48.537922   58277 retry.go:31] will retry after 1.262417735s: waiting for machine to come up
	I0924 00:54:49.801554   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:54:49.802238   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 00:54:49.802282   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 00:54:49.802159   58277 retry.go:31] will retry after 2.047718833s: waiting for machine to come up
	I0924 00:54:51.851675   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:54:51.852261   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 00:54:51.852285   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 00:54:51.852167   58277 retry.go:31] will retry after 2.503553236s: waiting for machine to come up
	I0924 00:54:54.358926   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:54:54.359421   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 00:54:54.359442   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 00:54:54.359400   58277 retry.go:31] will retry after 3.635911662s: waiting for machine to come up
	I0924 00:54:57.997755   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:54:57.998281   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 00:54:57.998304   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 00:54:57.998237   58277 retry.go:31] will retry after 3.751754598s: waiting for machine to come up
	I0924 00:55:01.752198   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:01.752881   58197 main.go:141] libmachine: (old-k8s-version-171598) Found IP for machine: 192.168.83.3
	I0924 00:55:01.752902   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has current primary IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
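The repeated "will retry after ..." lines above are the driver polling libvirt's DHCP leases until the new domain picks up an address, sleeping a growing delay between attempts. A minimal sketch of that wait-with-backoff pattern, where waitForIP and lookupIP are hypothetical stand-ins rather than minikube's own functions:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for querying the network's DHCP leases;
// it fails until the guest has been assigned an address.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP polls lookupIP with a randomized, growing delay between attempts,
// mirroring the "will retry after ..." lines in the log above.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay *= 2 // rough exponential growth; a real implementation would cap this
	}
	return "", fmt.Errorf("timed out after %v waiting for an IP", timeout)
}

func main() {
	if _, err := waitForIP(2 * time.Second); err != nil {
		fmt.Println(err)
	}
}
```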
	I0924 00:55:01.752908   58197 main.go:141] libmachine: (old-k8s-version-171598) Reserving static IP address...
	I0924 00:55:01.753279   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-171598", mac: "52:54:00:20:3c:a7", ip: "192.168.83.3"} in network mk-old-k8s-version-171598
	I0924 00:55:01.840663   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | Getting to WaitForSSH function...
	I0924 00:55:01.840687   58197 main.go:141] libmachine: (old-k8s-version-171598) Reserved static IP address: 192.168.83.3
	I0924 00:55:01.840699   58197 main.go:141] libmachine: (old-k8s-version-171598) Waiting for SSH to be available...
	I0924 00:55:01.844238   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:01.844709   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 01:54:56 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:minikube Clientid:01:52:54:00:20:3c:a7}
	I0924 00:55:01.844740   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:01.844883   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | Using SSH client type: external
	I0924 00:55:01.844907   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | Using SSH private key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa (-rw-------)
	I0924 00:55:01.844940   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 00:55:01.844965   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | About to run SSH command:
	I0924 00:55:01.844993   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | exit 0
	I0924 00:55:01.968726   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | SSH cmd err, output: <nil>: 
	I0924 00:55:01.968961   58197 main.go:141] libmachine: (old-k8s-version-171598) KVM machine creation complete!
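The SSH probe above simply runs `exit 0` on the guest until it succeeds, which is how the driver decides the machine is reachable. A hedged sketch of the same check using the system ssh client; the host, user, and key path below are placeholders, not values from this run:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReachable runs "exit 0" on the guest over SSH with non-interactive
// options similar to those in the log. It returns nil once SSH is up.
func sshReachable(user, host, keyPath string) error {
	cmd := exec.Command("ssh",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		fmt.Sprintf("%s@%s", user, host),
		"exit 0")
	return cmd.Run()
}

func main() {
	start := time.Now()
	if err := sshReachable("docker", "192.0.2.10", "/path/to/id_rsa"); err != nil {
		fmt.Println("SSH not ready yet:", err)
		return
	}
	fmt.Println("SSH available after", time.Since(start))
}
```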
	I0924 00:55:01.969312   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetConfigRaw
	I0924 00:55:01.969827   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 00:55:01.970085   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 00:55:01.970255   58197 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0924 00:55:01.970268   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetState
	I0924 00:55:01.971910   58197 main.go:141] libmachine: Detecting operating system of created instance...
	I0924 00:55:01.971926   58197 main.go:141] libmachine: Waiting for SSH to be available...
	I0924 00:55:01.971950   58197 main.go:141] libmachine: Getting to WaitForSSH function...
	I0924 00:55:01.971959   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 00:55:01.974276   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:01.974699   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 01:54:56 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 00:55:01.974732   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:01.974853   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 00:55:01.975061   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 00:55:01.975230   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 00:55:01.975518   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 00:55:01.975694   58197 main.go:141] libmachine: Using SSH client type: native
	I0924 00:55:01.975872   58197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.83.3 22 <nil> <nil>}
	I0924 00:55:01.975885   58197 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0924 00:55:02.079653   58197 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 00:55:02.079686   58197 main.go:141] libmachine: Detecting the provisioner...
	I0924 00:55:02.079694   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 00:55:02.083048   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:02.083491   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 01:54:56 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 00:55:02.083528   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:02.083737   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 00:55:02.083980   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 00:55:02.084159   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 00:55:02.084367   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 00:55:02.084570   58197 main.go:141] libmachine: Using SSH client type: native
	I0924 00:55:02.084797   58197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.83.3 22 <nil> <nil>}
	I0924 00:55:02.084813   58197 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0924 00:55:02.189230   58197 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0924 00:55:02.189322   58197 main.go:141] libmachine: found compatible host: buildroot
	I0924 00:55:02.189330   58197 main.go:141] libmachine: Provisioning with buildroot...
	I0924 00:55:02.189338   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetMachineName
	I0924 00:55:02.189638   58197 buildroot.go:166] provisioning hostname "old-k8s-version-171598"
	I0924 00:55:02.189666   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetMachineName
	I0924 00:55:02.189901   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 00:55:02.192761   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:02.193213   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 01:54:56 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 00:55:02.193244   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:02.193388   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 00:55:02.193645   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 00:55:02.193838   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 00:55:02.194063   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 00:55:02.194270   58197 main.go:141] libmachine: Using SSH client type: native
	I0924 00:55:02.194480   58197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.83.3 22 <nil> <nil>}
	I0924 00:55:02.194498   58197 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-171598 && echo "old-k8s-version-171598" | sudo tee /etc/hostname
	I0924 00:55:02.314997   58197 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-171598
	
	I0924 00:55:02.315032   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 00:55:02.318138   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:02.318555   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 01:54:56 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 00:55:02.318599   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:02.318822   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 00:55:02.319087   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 00:55:02.319296   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 00:55:02.319455   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 00:55:02.319665   58197 main.go:141] libmachine: Using SSH client type: native
	I0924 00:55:02.319874   58197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.83.3 22 <nil> <nil>}
	I0924 00:55:02.319891   58197 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-171598' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-171598/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-171598' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 00:55:02.429171   58197 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 00:55:02.429200   58197 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19696-7623/.minikube CaCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19696-7623/.minikube}
	I0924 00:55:02.429247   58197 buildroot.go:174] setting up certificates
	I0924 00:55:02.429262   58197 provision.go:84] configureAuth start
	I0924 00:55:02.429277   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetMachineName
	I0924 00:55:02.429579   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetIP
	I0924 00:55:02.432395   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:02.432823   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 01:54:56 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 00:55:02.432850   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:02.433101   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 00:55:02.435504   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:02.435869   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 01:54:56 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 00:55:02.435890   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:02.436017   58197 provision.go:143] copyHostCerts
	I0924 00:55:02.436086   58197 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem, removing ...
	I0924 00:55:02.436105   58197 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0924 00:55:02.436164   58197 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem (1082 bytes)
	I0924 00:55:02.436305   58197 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem, removing ...
	I0924 00:55:02.436318   58197 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0924 00:55:02.436367   58197 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem (1123 bytes)
	I0924 00:55:02.436475   58197 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem, removing ...
	I0924 00:55:02.436485   58197 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0924 00:55:02.436511   58197 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem (1679 bytes)
	I0924 00:55:02.436610   58197 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-171598 san=[127.0.0.1 192.168.83.3 localhost minikube old-k8s-version-171598]
	I0924 00:55:02.528742   58197 provision.go:177] copyRemoteCerts
	I0924 00:55:02.528805   58197 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 00:55:02.528907   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 00:55:02.532302   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:02.532792   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 01:54:56 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 00:55:02.532823   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:02.533033   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 00:55:02.533224   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 00:55:02.533448   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 00:55:02.533632   58197 sshutil.go:53] new ssh client: &{IP:192.168.83.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa Username:docker}
	I0924 00:55:02.620500   58197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0924 00:55:02.655188   58197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0924 00:55:02.685047   58197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0924 00:55:02.715935   58197 provision.go:87] duration metric: took 286.660949ms to configureAuth
	I0924 00:55:02.715966   58197 buildroot.go:189] setting minikube options for container-runtime
	I0924 00:55:02.716128   58197 config.go:182] Loaded profile config "old-k8s-version-171598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0924 00:55:02.716205   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 00:55:02.719908   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:02.720247   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 01:54:56 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 00:55:02.720274   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:02.720543   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 00:55:02.720723   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 00:55:02.720836   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 00:55:02.721001   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 00:55:02.721165   58197 main.go:141] libmachine: Using SSH client type: native
	I0924 00:55:02.721376   58197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.83.3 22 <nil> <nil>}
	I0924 00:55:02.721403   58197 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 00:55:03.018082   58197 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 00:55:03.018107   58197 main.go:141] libmachine: Checking connection to Docker...
	I0924 00:55:03.018115   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetURL
	I0924 00:55:03.020121   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | Using libvirt version 6000000
	I0924 00:55:03.023905   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:03.024403   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 01:54:56 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 00:55:03.024437   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:03.024783   58197 main.go:141] libmachine: Docker is up and running!
	I0924 00:55:03.024800   58197 main.go:141] libmachine: Reticulating splines...
	I0924 00:55:03.024807   58197 client.go:171] duration metric: took 22.439787669s to LocalClient.Create
	I0924 00:55:03.024826   58197 start.go:167] duration metric: took 22.439855123s to libmachine.API.Create "old-k8s-version-171598"
	I0924 00:55:03.024834   58197 start.go:293] postStartSetup for "old-k8s-version-171598" (driver="kvm2")
	I0924 00:55:03.024843   58197 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 00:55:03.024857   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 00:55:03.025082   58197 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 00:55:03.025114   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 00:55:03.027876   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:03.028199   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 01:54:56 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 00:55:03.028229   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:03.028443   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 00:55:03.028598   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 00:55:03.028727   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 00:55:03.028874   58197 sshutil.go:53] new ssh client: &{IP:192.168.83.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa Username:docker}
	I0924 00:55:03.120687   58197 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 00:55:03.124786   58197 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 00:55:03.124817   58197 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/addons for local assets ...
	I0924 00:55:03.124903   58197 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/files for local assets ...
	I0924 00:55:03.125015   58197 filesync.go:149] local asset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> 147932.pem in /etc/ssl/certs
	I0924 00:55:03.125258   58197 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 00:55:03.135695   58197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /etc/ssl/certs/147932.pem (1708 bytes)
	I0924 00:55:03.165018   58197 start.go:296] duration metric: took 140.171374ms for postStartSetup
	I0924 00:55:03.165070   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetConfigRaw
	I0924 00:55:03.165759   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetIP
	I0924 00:55:03.169024   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:03.169673   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 01:54:56 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 00:55:03.169702   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:03.170098   58197 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/config.json ...
	I0924 00:55:03.170347   58197 start.go:128] duration metric: took 22.904687506s to createHost
	I0924 00:55:03.170378   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 00:55:03.173735   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:03.174051   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 01:54:56 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 00:55:03.174080   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:03.174389   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 00:55:03.174585   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 00:55:03.174884   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 00:55:03.175094   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 00:55:03.175291   58197 main.go:141] libmachine: Using SSH client type: native
	I0924 00:55:03.175483   58197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.83.3 22 <nil> <nil>}
	I0924 00:55:03.175496   58197 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 00:55:03.293095   58197 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727139303.249080601
	
	I0924 00:55:03.293129   58197 fix.go:216] guest clock: 1727139303.249080601
	I0924 00:55:03.293136   58197 fix.go:229] Guest: 2024-09-24 00:55:03.249080601 +0000 UTC Remote: 2024-09-24 00:55:03.170363849 +0000 UTC m=+27.555541393 (delta=78.716752ms)
	I0924 00:55:03.293158   58197 fix.go:200] guest clock delta is within tolerance: 78.716752ms
	I0924 00:55:03.293164   58197 start.go:83] releasing machines lock for "old-k8s-version-171598", held for 23.027668037s
	I0924 00:55:03.293198   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 00:55:03.293446   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetIP
	I0924 00:55:03.296716   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:03.297182   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 01:54:56 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 00:55:03.297214   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:03.297416   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 00:55:03.297967   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 00:55:03.298174   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 00:55:03.298287   58197 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 00:55:03.298323   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 00:55:03.298447   58197 ssh_runner.go:195] Run: cat /version.json
	I0924 00:55:03.298475   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 00:55:03.301444   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:03.301611   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:03.301843   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 01:54:56 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 00:55:03.301867   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:03.302017   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 00:55:03.302108   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 01:54:56 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 00:55:03.302128   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:03.302382   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 00:55:03.302396   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 00:55:03.302586   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 00:55:03.302585   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 00:55:03.302796   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 00:55:03.302794   58197 sshutil.go:53] new ssh client: &{IP:192.168.83.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa Username:docker}
	I0924 00:55:03.302952   58197 sshutil.go:53] new ssh client: &{IP:192.168.83.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa Username:docker}
	I0924 00:55:03.386649   58197 ssh_runner.go:195] Run: systemctl --version
	I0924 00:55:03.424758   58197 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 00:55:03.614118   58197 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 00:55:03.621530   58197 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 00:55:03.621626   58197 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 00:55:03.644213   58197 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 00:55:03.644235   58197 start.go:495] detecting cgroup driver to use...
	I0924 00:55:03.644313   58197 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 00:55:03.668161   58197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 00:55:03.694828   58197 docker.go:217] disabling cri-docker service (if available) ...
	I0924 00:55:03.694904   58197 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 00:55:03.713736   58197 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 00:55:03.730803   58197 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 00:55:03.899461   58197 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 00:55:04.063545   58197 docker.go:233] disabling docker service ...
	I0924 00:55:04.063639   58197 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 00:55:04.083164   58197 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 00:55:04.097906   58197 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 00:55:04.269668   58197 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 00:55:04.402316   58197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 00:55:04.418705   58197 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 00:55:04.440370   58197 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0924 00:55:04.440437   58197 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:55:04.451665   58197 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 00:55:04.451754   58197 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:55:04.462760   58197 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 00:55:04.475972   58197 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
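The sed commands above pin the pause image and switch CRI-O's cgroup manager by rewriting lines in the 02-crio.conf drop-in. A small in-memory sketch of the same line rewrites; the sample input below is made up for illustration, not the file from the test VM:

```go
package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf mirrors the sed edits in the log: replace any existing
// pause_image and cgroup_manager lines in a crio.conf drop-in.
func rewriteCrioConf(conf, pauseImage, cgroupManager string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
	return conf
}

func main() {
	in := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
	fmt.Print(rewriteCrioConf(in, "registry.k8s.io/pause:3.2", "cgroupfs"))
}
```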
	I0924 00:55:04.490841   58197 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 00:55:04.504910   58197 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 00:55:04.517644   58197 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 00:55:04.517717   58197 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 00:55:04.531792   58197 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 00:55:04.543238   58197 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 00:55:04.662287   58197 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 00:55:04.779583   58197 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 00:55:04.779661   58197 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 00:55:04.785323   58197 start.go:563] Will wait 60s for crictl version
	I0924 00:55:04.785424   58197 ssh_runner.go:195] Run: which crictl
	I0924 00:55:04.790385   58197 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 00:55:04.841899   58197 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 00:55:04.841986   58197 ssh_runner.go:195] Run: crio --version
	I0924 00:55:04.877618   58197 ssh_runner.go:195] Run: crio --version
	I0924 00:55:04.919522   58197 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0924 00:55:04.920721   58197 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetIP
	I0924 00:55:04.924275   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:04.924972   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 01:54:56 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 00:55:04.924991   58197 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 00:55:04.926038   58197 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0924 00:55:04.931574   58197 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 00:55:04.948996   58197 kubeadm.go:883] updating cluster {Name:old-k8s-version-171598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-171598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 00:55:04.949142   58197 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0924 00:55:04.949213   58197 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 00:55:04.994604   58197 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0924 00:55:04.994664   58197 ssh_runner.go:195] Run: which lz4
	I0924 00:55:04.999098   58197 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 00:55:05.003940   58197 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 00:55:05.003978   58197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0924 00:55:06.629690   58197 crio.go:462] duration metric: took 1.630625581s to copy over tarball
	I0924 00:55:06.629769   58197 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0924 00:55:09.431945   58197 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.802147974s)
	I0924 00:55:09.431981   58197 crio.go:469] duration metric: took 2.802260804s to extract the tarball
	I0924 00:55:09.431992   58197 ssh_runner.go:146] rm: /preloaded.tar.lz4
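
The preload path is the fast path here: when /preloaded.tar.lz4 is absent, the cached tarball is copied over and unpacked into /var so the CRI-O image store is already populated. A small sketch of the check, extract, and cleanup cycle (local commands only and heavily simplified; minikube performs the copy over SSH first):

    // Sketch of the preload extraction step: unpack an lz4-compressed image
    // tarball into /var, then remove it. Assumes tar and lz4 are installed.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        const tarball = "/preloaded.tar.lz4"
        if _, err := os.Stat(tarball); err != nil {
            fmt.Println("no preload tarball on the node; it would be copied over first")
            return
        }
        // Same flags as the logged command: keep xattrs so file capabilities survive.
        cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", tarball)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            fmt.Fprintln(os.Stderr, "extract:", err)
            os.Exit(1)
        }
        _ = os.Remove(tarball) // free the disk space once extracted
    }
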
	I0924 00:55:09.476258   58197 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 00:55:09.520740   58197 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0924 00:55:09.520771   58197 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0924 00:55:09.520835   58197 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 00:55:09.520845   58197 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0924 00:55:09.520852   58197 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 00:55:09.520892   58197 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0924 00:55:09.520903   58197 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0924 00:55:09.521077   58197 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0924 00:55:09.521133   58197 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0924 00:55:09.521151   58197 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0924 00:55:09.522736   58197 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 00:55:09.522855   58197 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0924 00:55:09.523096   58197 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0924 00:55:09.523265   58197 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 00:55:09.523276   58197 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0924 00:55:09.523987   58197 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0924 00:55:09.524413   58197 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0924 00:55:09.524440   58197 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0924 00:55:09.747170   58197 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0924 00:55:09.795033   58197 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0924 00:55:09.795077   58197 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0924 00:55:09.795121   58197 ssh_runner.go:195] Run: which crictl
	I0924 00:55:09.799304   58197 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0924 00:55:09.843706   58197 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0924 00:55:09.845992   58197 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0924 00:55:09.850443   58197 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0924 00:55:09.854306   58197 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0924 00:55:09.864960   58197 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 00:55:09.875382   58197 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0924 00:55:09.887671   58197 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0924 00:55:09.927128   58197 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0924 00:55:10.011970   58197 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0924 00:55:10.012020   58197 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0924 00:55:10.012070   58197 ssh_runner.go:195] Run: which crictl
	I0924 00:55:10.032032   58197 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0924 00:55:10.032097   58197 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0924 00:55:10.032141   58197 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0924 00:55:10.032199   58197 ssh_runner.go:195] Run: which crictl
	I0924 00:55:10.032105   58197 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0924 00:55:10.032311   58197 ssh_runner.go:195] Run: which crictl
	I0924 00:55:10.032050   58197 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0924 00:55:10.032398   58197 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 00:55:10.032451   58197 ssh_runner.go:195] Run: which crictl
	I0924 00:55:10.032041   58197 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0924 00:55:10.032486   58197 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0924 00:55:10.032515   58197 ssh_runner.go:195] Run: which crictl
	I0924 00:55:10.063272   58197 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0924 00:55:10.063322   58197 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0924 00:55:10.063390   58197 ssh_runner.go:195] Run: which crictl
	I0924 00:55:10.074521   58197 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0924 00:55:10.074569   58197 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0924 00:55:10.074622   58197 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 00:55:10.074777   58197 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0924 00:55:10.074780   58197 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0924 00:55:10.074847   58197 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0924 00:55:10.074896   58197 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0924 00:55:10.203250   58197 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0924 00:55:10.204480   58197 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 00:55:10.226319   58197 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0924 00:55:10.226406   58197 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0924 00:55:10.226445   58197 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0924 00:55:10.226635   58197 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0924 00:55:10.311395   58197 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0924 00:55:10.316374   58197 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 00:55:10.370891   58197 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0924 00:55:10.378600   58197 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0924 00:55:10.378619   58197 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0924 00:55:10.378663   58197 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0924 00:55:10.404004   58197 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0924 00:55:10.409284   58197 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0924 00:55:10.447183   58197 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0924 00:55:10.477799   58197 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0924 00:55:10.487928   58197 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0924 00:55:10.487952   58197 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0924 00:55:10.718161   58197 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 00:55:10.858849   58197 cache_images.go:92] duration metric: took 1.338064189s to LoadCachedImages
	W0924 00:55:10.858965   58197 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
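
What the loop above is doing: for each required image, `podman image inspect` reports the image ID actually present in the runtime; if it does not match the pinned hash the image is marked "needs transfer", removed with `crictl rmi`, and re-loaded from the local cache directory when a cached tarball exists. Here the cache files were never downloaded, which is tolerated because kubeadm's preflight later pulls the images itself. A condensed sketch of that decision, using a hypothetical `expected` map with the pause:3.2 hash taken from the log (not minikube's cache_images code):

    // For each image, compare the ID reported by the runtime with the expected
    // hash; on mismatch remove it so it can be re-loaded from the local cache.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        expected := map[string]string{ // hypothetical pinned IDs
            "registry.k8s.io/pause:3.2": "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c",
        }
        for img, want := range expected {
            out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", img).Output()
            got := strings.TrimSpace(string(out))
            if err == nil && got == want {
                continue // already present with the right content
            }
            fmt.Printf("%q needs transfer (have %q)\n", img, got)
            _ = exec.Command("sudo", "crictl", "rmi", img).Run()
            // The next step would load the tarball from the local
            // .minikube/cache/images/... directory if it exists.
        }
    }
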
	I0924 00:55:10.858982   58197 kubeadm.go:934] updating node { 192.168.83.3 8443 v1.20.0 crio true true} ...
	I0924 00:55:10.859100   58197 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-171598 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-171598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
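
The kubelet drop-in above is rendered from a handful of node parameters (Kubernetes version, node name, node IP) and shipped to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. A minimal text/template sketch of that rendering, with the template text copied from the log (not minikube's actual template):

    // Render the kubelet systemd drop-in seen in the log from node parameters.
    package main

    import (
        "os"
        "text/template"
    )

    const unit = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
        params := struct {
            KubernetesVersion, NodeName, NodeIP string
        }{"v1.20.0", "old-k8s-version-171598", "192.168.83.3"}
        t := template.Must(template.New("kubelet").Parse(unit))
        if err := t.Execute(os.Stdout, params); err != nil {
            panic(err)
        }
    }
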
	I0924 00:55:10.859209   58197 ssh_runner.go:195] Run: crio config
	I0924 00:55:10.914947   58197 cni.go:84] Creating CNI manager for ""
	I0924 00:55:10.914970   58197 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 00:55:10.914981   58197 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 00:55:10.915000   58197 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.3 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-171598 NodeName:old-k8s-version-171598 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0924 00:55:10.915160   58197 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-171598"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
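
The generated kubeadm config above bundles four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---` before it is shipped to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A quick structural sanity-check sketch for such a multi-document file (plain string handling only, not a real YAML parser, and not part of minikube):

    // Rough structural check of a multi-document kubeadm config: every document
    // separated by "---" should declare apiVersion and kind.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            panic(err)
        }
        for i, doc := range strings.Split(string(data), "\n---\n") {
            if !strings.Contains(doc, "apiVersion:") || !strings.Contains(doc, "kind:") {
                fmt.Printf("document %d is missing apiVersion/kind\n", i)
                os.Exit(1)
            }
        }
        fmt.Println("all kubeadm config documents look well-formed")
    }
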
	
	I0924 00:55:10.915235   58197 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0924 00:55:10.926591   58197 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 00:55:10.926669   58197 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 00:55:10.937086   58197 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (428 bytes)
	I0924 00:55:10.955192   58197 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 00:55:10.975809   58197 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2117 bytes)
	I0924 00:55:10.995353   58197 ssh_runner.go:195] Run: grep 192.168.83.3	control-plane.minikube.internal$ /etc/hosts
	I0924 00:55:10.999588   58197 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.3	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 00:55:11.012319   58197 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 00:55:11.150680   58197 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 00:55:11.169341   58197 certs.go:68] Setting up /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598 for IP: 192.168.83.3
	I0924 00:55:11.169366   58197 certs.go:194] generating shared ca certs ...
	I0924 00:55:11.169386   58197 certs.go:226] acquiring lock for ca certs: {Name:mk91de42c259e96c17f08dd58c70c00821bd0735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:55:11.169576   58197 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key
	I0924 00:55:11.169635   58197 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key
	I0924 00:55:11.169648   58197 certs.go:256] generating profile certs ...
	I0924 00:55:11.169730   58197 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/client.key
	I0924 00:55:11.169749   58197 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/client.crt with IP's: []
	I0924 00:55:11.430685   58197 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/client.crt ...
	I0924 00:55:11.430717   58197 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/client.crt: {Name:mk616c57559c12729f15fc6afa6340c84f32404b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:55:11.430922   58197 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/client.key ...
	I0924 00:55:11.430939   58197 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/client.key: {Name:mk5e1df5936a7518cda5d33542be867b8911f037 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:55:11.431056   58197 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/apiserver.key.577554d3
	I0924 00:55:11.431074   58197 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/apiserver.crt.577554d3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.83.3]
	I0924 00:55:11.883770   58197 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/apiserver.crt.577554d3 ...
	I0924 00:55:11.883816   58197 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/apiserver.crt.577554d3: {Name:mkc35d328d004db968a560da2814aa8a204607ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:55:11.884029   58197 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/apiserver.key.577554d3 ...
	I0924 00:55:11.884049   58197 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/apiserver.key.577554d3: {Name:mkd8777464b5dd7ebecde7ec9927e3e1b2235b66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:55:11.884159   58197 certs.go:381] copying /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/apiserver.crt.577554d3 -> /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/apiserver.crt
	I0924 00:55:11.884291   58197 certs.go:385] copying /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/apiserver.key.577554d3 -> /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/apiserver.key
	I0924 00:55:11.884414   58197 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/proxy-client.key
	I0924 00:55:11.884439   58197 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/proxy-client.crt with IP's: []
	I0924 00:55:12.061948   58197 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/proxy-client.crt ...
	I0924 00:55:12.061981   58197 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/proxy-client.crt: {Name:mkbdcaee3095c012d8003e8c8a96b2287ba0a954 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:55:12.062168   58197 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/proxy-client.key ...
	I0924 00:55:12.062186   58197 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/proxy-client.key: {Name:mkf18d79cc3b0f3df25fedb6ab2177c4fe89a5f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
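
Each profile certificate above is a leaf signed by the shared minikubeCA, and the apiserver certificate carries the service IP, loopback, and node IP as SANs (10.96.0.1 127.0.0.1 10.0.0.1 192.168.83.3 in this run). A compact standard-library sketch of that kind of CA-signed serving certificate, illustrative only; minikube's certs code additionally handles key reuse, lock files, and the on-disk layout:

    // Generate a throwaway CA and a serving cert signed by it, with the IP SANs
    // seen in the log. Keys are not persisted here.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        check(err)
        ca := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
        check(err)
        caCert, err := x509.ParseCertificate(caDER)
        check(err)

        leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
        check(err)
        leaf := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.83.3"),
            },
        }
        der, err := x509.CreateCertificate(rand.Reader, leaf, caCert, &leafKey.PublicKey, caKey)
        check(err)
        check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
    }

    func check(err error) {
        if err != nil {
            panic(err)
        }
    }
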
	I0924 00:55:12.062383   58197 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem (1338 bytes)
	W0924 00:55:12.062438   58197 certs.go:480] ignoring /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793_empty.pem, impossibly tiny 0 bytes
	I0924 00:55:12.062454   58197 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem (1675 bytes)
	I0924 00:55:12.062499   58197 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem (1082 bytes)
	I0924 00:55:12.062533   58197 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem (1123 bytes)
	I0924 00:55:12.062566   58197 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem (1679 bytes)
	I0924 00:55:12.062626   58197 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem (1708 bytes)
	I0924 00:55:12.063237   58197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 00:55:12.105982   58197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 00:55:12.145329   58197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 00:55:12.170145   58197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0924 00:55:12.194790   58197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0924 00:55:12.219640   58197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 00:55:12.245732   58197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 00:55:12.268826   58197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0924 00:55:12.291732   58197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 00:55:12.314938   58197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem --> /usr/share/ca-certificates/14793.pem (1338 bytes)
	I0924 00:55:12.341672   58197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /usr/share/ca-certificates/147932.pem (1708 bytes)
	I0924 00:55:12.368258   58197 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 00:55:12.386468   58197 ssh_runner.go:195] Run: openssl version
	I0924 00:55:12.392885   58197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 00:55:12.403889   58197 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 00:55:12.408583   58197 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0924 00:55:12.408639   58197 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 00:55:12.415043   58197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 00:55:12.426581   58197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14793.pem && ln -fs /usr/share/ca-certificates/14793.pem /etc/ssl/certs/14793.pem"
	I0924 00:55:12.438775   58197 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14793.pem
	I0924 00:55:12.443896   58197 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 23:55 /usr/share/ca-certificates/14793.pem
	I0924 00:55:12.443968   58197 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14793.pem
	I0924 00:55:12.451943   58197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14793.pem /etc/ssl/certs/51391683.0"
	I0924 00:55:12.463415   58197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147932.pem && ln -fs /usr/share/ca-certificates/147932.pem /etc/ssl/certs/147932.pem"
	I0924 00:55:12.475627   58197 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147932.pem
	I0924 00:55:12.480586   58197 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 23:55 /usr/share/ca-certificates/147932.pem
	I0924 00:55:12.480655   58197 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147932.pem
	I0924 00:55:12.486577   58197 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147932.pem /etc/ssl/certs/3ec20f2e.0"
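
The openssl/ln pairs above install each CA bundle under its OpenSSL subject hash in /etc/ssl/certs, so TLS clients on the node that use the system trust store accept certificates signed by these CAs. A small sketch of computing the hash and creating the `<hash>.0` symlink, assuming the PEM was already linked into /etc/ssl/certs as in the log (needs openssl on PATH and root):

    // Link a CA PEM under its OpenSSL subject hash, the same effect as the
    // logged `openssl x509 -hash -noout -in ...` plus `ln -fs ...` commands.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        const pemPath = "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941" as in the log
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // replace an existing link, like `ln -fs`
        if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
            panic(err)
        }
        fmt.Println("linked", link)
    }
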
	I0924 00:55:12.497456   58197 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 00:55:12.502868   58197 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0924 00:55:12.502937   58197 kubeadm.go:392] StartCluster: {Name:old-k8s-version-171598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-171598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 00:55:12.503037   58197 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 00:55:12.503108   58197 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 00:55:12.540729   58197 cri.go:89] found id: ""
	I0924 00:55:12.540807   58197 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 00:55:12.550453   58197 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 00:55:12.560243   58197 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 00:55:12.570448   58197 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 00:55:12.570471   58197 kubeadm.go:157] found existing configuration files:
	
	I0924 00:55:12.570544   58197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 00:55:12.579773   58197 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 00:55:12.579836   58197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 00:55:12.589453   58197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 00:55:12.598775   58197 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 00:55:12.598836   58197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 00:55:12.608535   58197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 00:55:12.618445   58197 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 00:55:12.618520   58197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 00:55:12.627871   58197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 00:55:12.640097   58197 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 00:55:12.640164   58197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 00:55:12.649427   58197 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 00:55:12.763381   58197 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0924 00:55:12.763453   58197 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 00:55:12.939530   58197 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 00:55:12.939723   58197 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 00:55:12.939876   58197 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0924 00:55:13.137960   58197 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 00:55:13.140188   58197 out.go:235]   - Generating certificates and keys ...
	I0924 00:55:13.140309   58197 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 00:55:13.140397   58197 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 00:55:13.254000   58197 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0924 00:55:13.413517   58197 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0924 00:55:13.476007   58197 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0924 00:55:13.610945   58197 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0924 00:55:13.696657   58197 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0924 00:55:13.696968   58197 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-171598] and IPs [192.168.83.3 127.0.0.1 ::1]
	I0924 00:55:13.887739   58197 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0924 00:55:13.887927   58197 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-171598] and IPs [192.168.83.3 127.0.0.1 ::1]
	I0924 00:55:14.067317   58197 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0924 00:55:14.142058   58197 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0924 00:55:14.260531   58197 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0924 00:55:14.260648   58197 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 00:55:14.699218   58197 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 00:55:14.791856   58197 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 00:55:14.971392   58197 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 00:55:15.065251   58197 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 00:55:15.094012   58197 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 00:55:15.095244   58197 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 00:55:15.095332   58197 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 00:55:15.242552   58197 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 00:55:15.244496   58197 out.go:235]   - Booting up control plane ...
	I0924 00:55:15.244633   58197 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 00:55:15.254906   58197 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 00:55:15.258639   58197 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 00:55:15.259617   58197 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 00:55:15.272758   58197 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0924 00:55:55.239128   58197 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0924 00:55:55.239882   58197 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 00:55:55.240069   58197 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 00:56:00.238991   58197 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 00:56:00.239233   58197 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 00:56:10.238009   58197 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 00:56:10.238276   58197 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 00:56:30.238236   58197 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 00:56:30.238455   58197 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 00:57:10.236498   58197 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 00:57:10.236811   58197 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 00:57:10.236835   58197 kubeadm.go:310] 
	I0924 00:57:10.236886   58197 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0924 00:57:10.236952   58197 kubeadm.go:310] 		timed out waiting for the condition
	I0924 00:57:10.236962   58197 kubeadm.go:310] 
	I0924 00:57:10.237018   58197 kubeadm.go:310] 	This error is likely caused by:
	I0924 00:57:10.237058   58197 kubeadm.go:310] 		- The kubelet is not running
	I0924 00:57:10.237193   58197 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0924 00:57:10.237199   58197 kubeadm.go:310] 
	I0924 00:57:10.237337   58197 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0924 00:57:10.237381   58197 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0924 00:57:10.237425   58197 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0924 00:57:10.237431   58197 kubeadm.go:310] 
	I0924 00:57:10.237566   58197 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0924 00:57:10.237672   58197 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0924 00:57:10.237679   58197 kubeadm.go:310] 
	I0924 00:57:10.237806   58197 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0924 00:57:10.237920   58197 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0924 00:57:10.238016   58197 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0924 00:57:10.238106   58197 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0924 00:57:10.238113   58197 kubeadm.go:310] 
	I0924 00:57:10.239180   58197 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 00:57:10.239314   58197 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0924 00:57:10.239402   58197 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0924 00:57:10.239551   58197 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-171598] and IPs [192.168.83.3 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-171598] and IPs [192.168.83.3 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-171598] and IPs [192.168.83.3 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-171598] and IPs [192.168.83.3 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0924 00:57:10.239602   58197 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0924 00:57:10.705530   58197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 00:57:10.721993   58197 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 00:57:10.731415   58197 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 00:57:10.731440   58197 kubeadm.go:157] found existing configuration files:
	
	I0924 00:57:10.731488   58197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 00:57:10.740743   58197 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 00:57:10.740815   58197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 00:57:10.753240   58197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 00:57:10.765378   58197 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 00:57:10.765453   58197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 00:57:10.776798   58197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 00:57:10.788549   58197 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 00:57:10.788607   58197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 00:57:10.801385   58197 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 00:57:10.813926   58197 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 00:57:10.813995   58197 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 00:57:10.825711   58197 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 00:57:10.901635   58197 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0924 00:57:10.901769   58197 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 00:57:11.055037   58197 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 00:57:11.055185   58197 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 00:57:11.055352   58197 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0924 00:57:11.245084   58197 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 00:57:11.246648   58197 out.go:235]   - Generating certificates and keys ...
	I0924 00:57:11.246746   58197 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 00:57:11.246822   58197 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 00:57:11.246927   58197 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 00:57:11.247012   58197 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 00:57:11.247112   58197 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 00:57:11.247201   58197 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 00:57:11.247650   58197 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 00:57:11.248206   58197 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 00:57:11.248932   58197 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 00:57:11.249577   58197 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 00:57:11.249782   58197 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 00:57:11.249866   58197 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 00:57:11.395658   58197 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 00:57:11.450324   58197 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 00:57:11.753855   58197 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 00:57:11.956228   58197 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 00:57:11.977230   58197 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 00:57:11.978371   58197 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 00:57:11.978479   58197 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 00:57:12.123633   58197 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 00:57:12.125515   58197 out.go:235]   - Booting up control plane ...
	I0924 00:57:12.125645   58197 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 00:57:12.132112   58197 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 00:57:12.133042   58197 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 00:57:12.133778   58197 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 00:57:12.135726   58197 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0924 00:57:52.137452   58197 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0924 00:57:52.137758   58197 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 00:57:52.137994   58197 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 00:57:57.138668   58197 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 00:57:57.138876   58197 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 00:58:07.139453   58197 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 00:58:07.139638   58197 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 00:58:27.141025   58197 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 00:58:27.141259   58197 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 00:59:07.141045   58197 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 00:59:07.141256   58197 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 00:59:07.141270   58197 kubeadm.go:310] 
	I0924 00:59:07.141310   58197 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0924 00:59:07.141343   58197 kubeadm.go:310] 		timed out waiting for the condition
	I0924 00:59:07.141353   58197 kubeadm.go:310] 
	I0924 00:59:07.141389   58197 kubeadm.go:310] 	This error is likely caused by:
	I0924 00:59:07.141420   58197 kubeadm.go:310] 		- The kubelet is not running
	I0924 00:59:07.141540   58197 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0924 00:59:07.141552   58197 kubeadm.go:310] 
	I0924 00:59:07.141667   58197 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0924 00:59:07.141705   58197 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0924 00:59:07.141752   58197 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0924 00:59:07.141762   58197 kubeadm.go:310] 
	I0924 00:59:07.141889   58197 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0924 00:59:07.141999   58197 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0924 00:59:07.142010   58197 kubeadm.go:310] 
	I0924 00:59:07.142138   58197 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0924 00:59:07.142257   58197 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0924 00:59:07.142356   58197 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0924 00:59:07.142452   58197 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0924 00:59:07.142461   58197 kubeadm.go:310] 
	I0924 00:59:07.143517   58197 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 00:59:07.143626   58197 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0924 00:59:07.143720   58197 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0924 00:59:07.143800   58197 kubeadm.go:394] duration metric: took 3m54.640867952s to StartCluster
	I0924 00:59:07.143838   58197 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 00:59:07.143889   58197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 00:59:07.195056   58197 cri.go:89] found id: ""
	I0924 00:59:07.195086   58197 logs.go:276] 0 containers: []
	W0924 00:59:07.195097   58197 logs.go:278] No container was found matching "kube-apiserver"
	I0924 00:59:07.195105   58197 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 00:59:07.195166   58197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 00:59:07.230743   58197 cri.go:89] found id: ""
	I0924 00:59:07.230767   58197 logs.go:276] 0 containers: []
	W0924 00:59:07.230776   58197 logs.go:278] No container was found matching "etcd"
	I0924 00:59:07.230783   58197 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 00:59:07.230844   58197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 00:59:07.263706   58197 cri.go:89] found id: ""
	I0924 00:59:07.263734   58197 logs.go:276] 0 containers: []
	W0924 00:59:07.263745   58197 logs.go:278] No container was found matching "coredns"
	I0924 00:59:07.263752   58197 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 00:59:07.263802   58197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 00:59:07.298583   58197 cri.go:89] found id: ""
	I0924 00:59:07.298613   58197 logs.go:276] 0 containers: []
	W0924 00:59:07.298622   58197 logs.go:278] No container was found matching "kube-scheduler"
	I0924 00:59:07.298628   58197 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 00:59:07.298681   58197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 00:59:07.335812   58197 cri.go:89] found id: ""
	I0924 00:59:07.335843   58197 logs.go:276] 0 containers: []
	W0924 00:59:07.335851   58197 logs.go:278] No container was found matching "kube-proxy"
	I0924 00:59:07.335857   58197 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 00:59:07.335965   58197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 00:59:07.368476   58197 cri.go:89] found id: ""
	I0924 00:59:07.368504   58197 logs.go:276] 0 containers: []
	W0924 00:59:07.368513   58197 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 00:59:07.368519   58197 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 00:59:07.368566   58197 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 00:59:07.401430   58197 cri.go:89] found id: ""
	I0924 00:59:07.401459   58197 logs.go:276] 0 containers: []
	W0924 00:59:07.401467   58197 logs.go:278] No container was found matching "kindnet"
	I0924 00:59:07.401477   58197 logs.go:123] Gathering logs for kubelet ...
	I0924 00:59:07.401489   58197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 00:59:07.450308   58197 logs.go:123] Gathering logs for dmesg ...
	I0924 00:59:07.450348   58197 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 00:59:07.463692   58197 logs.go:123] Gathering logs for describe nodes ...
	I0924 00:59:07.463723   58197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 00:59:07.573920   58197 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 00:59:07.573944   58197 logs.go:123] Gathering logs for CRI-O ...
	I0924 00:59:07.573954   58197 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 00:59:07.672349   58197 logs.go:123] Gathering logs for container status ...
	I0924 00:59:07.672390   58197 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0924 00:59:07.711470   58197 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0924 00:59:07.711544   58197 out.go:270] * 
	* 
	W0924 00:59:07.711608   58197 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0924 00:59:07.711631   58197 out.go:270] * 
	* 
	W0924 00:59:07.713001   58197 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 00:59:07.716033   58197 out.go:201] 
	W0924 00:59:07.717191   58197 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0924 00:59:07.717243   58197 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0924 00:59:07.717267   58197 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0924 00:59:07.718640   58197 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-171598 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-171598 -n old-k8s-version-171598
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-171598 -n old-k8s-version-171598: exit status 6 (223.732066ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0924 00:59:07.991854   61193 status.go:451] kubeconfig endpoint: get endpoint: "old-k8s-version-171598" does not appear in /home/jenkins/minikube-integration/19696-7623/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-171598" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (272.40s)
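The start above exits with K8S_KUBELET_NOT_RUNNING, and the log's own suggestion is to check 'journalctl -xeu kubelet' and to retry with the systemd cgroup driver. A minimal follow-up sketch, assuming the old-k8s-version-171598 profile and the same driver/runtime flags the test used (commands are based on the hints printed in the log above, not verified against this run):

	# inspect the kubelet journal on the node (hypothetical follow-up; the VM must still be reachable)
	minikube ssh -p old-k8s-version-171598 "sudo journalctl -xeu kubelet | tail -n 100"
	# list any Kubernetes containers cri-o managed to create, per the kubeadm hint above
	minikube ssh -p old-k8s-version-171598 "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# retry the start with the systemd cgroup driver, as the log suggests
	minikube start -p old-k8s-version-171598 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd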

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-674057 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-674057 --alsologtostderr -v=3: exit status 82 (2m0.562984686s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-674057"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0924 00:56:27.277333   59725 out.go:345] Setting OutFile to fd 1 ...
	I0924 00:56:27.277483   59725 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:56:27.277497   59725 out.go:358] Setting ErrFile to fd 2...
	I0924 00:56:27.277504   59725 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:56:27.277780   59725 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7623/.minikube/bin
	I0924 00:56:27.278086   59725 out.go:352] Setting JSON to false
	I0924 00:56:27.278191   59725 mustload.go:65] Loading cluster: no-preload-674057
	I0924 00:56:27.278721   59725 config.go:182] Loaded profile config "no-preload-674057": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 00:56:27.278822   59725 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/config.json ...
	I0924 00:56:27.279064   59725 mustload.go:65] Loading cluster: no-preload-674057
	I0924 00:56:27.279219   59725 config.go:182] Loaded profile config "no-preload-674057": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 00:56:27.279266   59725 stop.go:39] StopHost: no-preload-674057
	I0924 00:56:27.279873   59725 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 00:56:27.279918   59725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:56:27.298614   59725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36333
	I0924 00:56:27.299482   59725 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:56:27.300188   59725 main.go:141] libmachine: Using API Version  1
	I0924 00:56:27.300213   59725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:56:27.300749   59725 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:56:27.303513   59725 out.go:177] * Stopping node "no-preload-674057"  ...
	I0924 00:56:27.304825   59725 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0924 00:56:27.304876   59725 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 00:56:27.305143   59725 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0924 00:56:27.305176   59725 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 00:56:27.308062   59725 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 00:56:27.308648   59725 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 01:55:18 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 00:56:27.308700   59725 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 00:56:27.308772   59725 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 00:56:27.308952   59725 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 00:56:27.309113   59725 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 00:56:27.309277   59725 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/no-preload-674057/id_rsa Username:docker}
	I0924 00:56:27.427141   59725 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0924 00:56:27.498901   59725 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0924 00:56:27.552843   59725 main.go:141] libmachine: Stopping "no-preload-674057"...
	I0924 00:56:27.552879   59725 main.go:141] libmachine: (no-preload-674057) Calling .GetState
	I0924 00:56:27.554636   59725 main.go:141] libmachine: (no-preload-674057) Calling .Stop
	I0924 00:56:27.558863   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 0/120
	I0924 00:56:28.560978   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 1/120
	I0924 00:56:29.562953   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 2/120
	I0924 00:56:30.570458   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 3/120
	I0924 00:56:31.572455   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 4/120
	I0924 00:56:32.575087   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 5/120
	I0924 00:56:33.577077   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 6/120
	I0924 00:56:34.578941   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 7/120
	I0924 00:56:35.581318   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 8/120
	I0924 00:56:36.583208   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 9/120
	I0924 00:56:37.585735   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 10/120
	I0924 00:56:38.587231   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 11/120
	I0924 00:56:39.588717   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 12/120
	I0924 00:56:40.591442   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 13/120
	I0924 00:56:41.593404   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 14/120
	I0924 00:56:42.595567   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 15/120
	I0924 00:56:43.596925   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 16/120
	I0924 00:56:44.598413   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 17/120
	I0924 00:56:45.599927   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 18/120
	I0924 00:56:46.601461   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 19/120
	I0924 00:56:47.604167   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 20/120
	I0924 00:56:48.605587   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 21/120
	I0924 00:56:49.607052   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 22/120
	I0924 00:56:50.608682   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 23/120
	I0924 00:56:51.610308   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 24/120
	I0924 00:56:52.612620   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 25/120
	I0924 00:56:53.614502   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 26/120
	I0924 00:56:54.617159   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 27/120
	I0924 00:56:55.618714   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 28/120
	I0924 00:56:56.620200   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 29/120
	I0924 00:56:57.622631   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 30/120
	I0924 00:56:58.624323   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 31/120
	I0924 00:56:59.625953   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 32/120
	I0924 00:57:00.627593   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 33/120
	I0924 00:57:01.628981   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 34/120
	I0924 00:57:02.630908   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 35/120
	I0924 00:57:03.632912   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 36/120
	I0924 00:57:04.634461   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 37/120
	I0924 00:57:05.636568   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 38/120
	I0924 00:57:06.638138   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 39/120
	I0924 00:57:07.640405   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 40/120
	I0924 00:57:08.641926   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 41/120
	I0924 00:57:09.643307   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 42/120
	I0924 00:57:10.644874   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 43/120
	I0924 00:57:11.647332   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 44/120
	I0924 00:57:12.649579   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 45/120
	I0924 00:57:13.651202   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 46/120
	I0924 00:57:14.652855   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 47/120
	I0924 00:57:15.654253   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 48/120
	I0924 00:57:16.656367   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 49/120
	I0924 00:57:17.658216   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 50/120
	I0924 00:57:18.659633   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 51/120
	I0924 00:57:19.661027   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 52/120
	I0924 00:57:20.662961   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 53/120
	I0924 00:57:21.664411   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 54/120
	I0924 00:57:22.666426   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 55/120
	I0924 00:57:23.668673   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 56/120
	I0924 00:57:24.670894   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 57/120
	I0924 00:57:25.672175   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 58/120
	I0924 00:57:26.673730   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 59/120
	I0924 00:57:27.676175   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 60/120
	I0924 00:57:28.677665   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 61/120
	I0924 00:57:29.678919   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 62/120
	I0924 00:57:30.680260   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 63/120
	I0924 00:57:31.681892   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 64/120
	I0924 00:57:32.683714   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 65/120
	I0924 00:57:33.685157   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 66/120
	I0924 00:57:34.686980   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 67/120
	I0924 00:57:35.688630   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 68/120
	I0924 00:57:36.690988   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 69/120
	I0924 00:57:37.693284   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 70/120
	I0924 00:57:38.694859   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 71/120
	I0924 00:57:39.696263   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 72/120
	I0924 00:57:40.697722   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 73/120
	I0924 00:57:41.699285   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 74/120
	I0924 00:57:42.701176   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 75/120
	I0924 00:57:43.703057   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 76/120
	I0924 00:57:44.704473   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 77/120
	I0924 00:57:45.705805   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 78/120
	I0924 00:57:46.707667   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 79/120
	I0924 00:57:47.709898   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 80/120
	I0924 00:57:48.711322   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 81/120
	I0924 00:57:49.712551   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 82/120
	I0924 00:57:50.714329   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 83/120
	I0924 00:57:51.715676   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 84/120
	I0924 00:57:52.717726   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 85/120
	I0924 00:57:53.719376   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 86/120
	I0924 00:57:54.720795   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 87/120
	I0924 00:57:55.723123   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 88/120
	I0924 00:57:56.724651   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 89/120
	I0924 00:57:57.725994   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 90/120
	I0924 00:57:58.727429   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 91/120
	I0924 00:57:59.728994   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 92/120
	I0924 00:58:00.730428   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 93/120
	I0924 00:58:01.732383   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 94/120
	I0924 00:58:02.734671   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 95/120
	I0924 00:58:03.736362   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 96/120
	I0924 00:58:04.738319   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 97/120
	I0924 00:58:05.740006   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 98/120
	I0924 00:58:06.741451   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 99/120
	I0924 00:58:07.743878   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 100/120
	I0924 00:58:08.745479   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 101/120
	I0924 00:58:09.747148   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 102/120
	I0924 00:58:10.748766   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 103/120
	I0924 00:58:11.750221   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 104/120
	I0924 00:58:12.752574   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 105/120
	I0924 00:58:13.754230   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 106/120
	I0924 00:58:14.755578   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 107/120
	I0924 00:58:15.757280   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 108/120
	I0924 00:58:16.759082   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 109/120
	I0924 00:58:17.761431   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 110/120
	I0924 00:58:18.762905   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 111/120
	I0924 00:58:19.764632   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 112/120
	I0924 00:58:20.766555   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 113/120
	I0924 00:58:21.768166   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 114/120
	I0924 00:58:22.770255   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 115/120
	I0924 00:58:23.771807   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 116/120
	I0924 00:58:24.773282   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 117/120
	I0924 00:58:25.774730   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 118/120
	I0924 00:58:26.776685   59725 main.go:141] libmachine: (no-preload-674057) Waiting for machine to stop 119/120
	I0924 00:58:27.777995   59725 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0924 00:58:27.778051   59725 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0924 00:58:27.779839   59725 out.go:201] 
	W0924 00:58:27.781160   59725 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0924 00:58:27.781180   59725 out.go:270] * 
	* 
	W0924 00:58:27.783831   59725 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 00:58:27.785082   59725 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-674057 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-674057 -n no-preload-674057
E0924 00:58:38.361573   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/functional-666615/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-674057 -n no-preload-674057: exit status 3 (18.502258459s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0924 00:58:46.288655   60794 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.161:22: connect: no route to host
	E0924 00:58:46.288677   60794 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.161:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-674057" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.07s)
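Note: the repeated "Waiting for machine to stop N/120" lines above come from a poll-and-timeout loop that re-checks the VM state roughly once per second and gives up after 120 attempts, which is what produces the GUEST_STOP_TIMEOUT / exit status 82 recorded here. The following is a minimal Go sketch of that pattern, not minikube's or libmachine's actual code; getStateFn, waitForStop, and the always-"Running" simulated driver are hypothetical names used only for illustration.

package main

import (
	"errors"
	"fmt"
	"time"
)

// getStateFn stands in for a driver's GetState call (hypothetical).
type getStateFn func() (string, error)

// waitForStop polls the VM state up to maxAttempts times, sleeping between
// polls, and fails with a "Running" error if the machine never stops.
func waitForStop(getState getStateFn, maxAttempts int, interval time.Duration) error {
	for i := 0; i < maxAttempts; i++ {
		state, err := getState()
		if err != nil {
			return err
		}
		if state == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
		time.Sleep(interval)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// Simulated driver that never reports "Stopped", mirroring the failure above
	// but with a short attempt budget so the sketch finishes quickly.
	alwaysRunning := func() (string, error) { return "Running", nil }
	if err := waitForStop(alwaysRunning, 5, 100*time.Millisecond); err != nil {
		fmt.Println("stop err:", err)
	}
}

With a responsive guest the same loop returns as soon as the state reads "Stopped"; the failure above means the VM never left "Running" within the two-minute budget, so the stop command exits with status 82.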

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (139.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-650507 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-650507 --alsologtostderr -v=3: exit status 82 (2m0.520047284s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-650507"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0924 00:56:38.669287   60138 out.go:345] Setting OutFile to fd 1 ...
	I0924 00:56:38.669542   60138 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:56:38.669551   60138 out.go:358] Setting ErrFile to fd 2...
	I0924 00:56:38.669555   60138 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:56:38.669724   60138 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7623/.minikube/bin
	I0924 00:56:38.669949   60138 out.go:352] Setting JSON to false
	I0924 00:56:38.670049   60138 mustload.go:65] Loading cluster: embed-certs-650507
	I0924 00:56:38.670464   60138 config.go:182] Loaded profile config "embed-certs-650507": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 00:56:38.670535   60138 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/embed-certs-650507/config.json ...
	I0924 00:56:38.670703   60138 mustload.go:65] Loading cluster: embed-certs-650507
	I0924 00:56:38.670801   60138 config.go:182] Loaded profile config "embed-certs-650507": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 00:56:38.670829   60138 stop.go:39] StopHost: embed-certs-650507
	I0924 00:56:38.671191   60138 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 00:56:38.671239   60138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:56:38.686354   60138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33951
	I0924 00:56:38.686854   60138 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:56:38.687460   60138 main.go:141] libmachine: Using API Version  1
	I0924 00:56:38.687480   60138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:56:38.687887   60138 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:56:38.690601   60138 out.go:177] * Stopping node "embed-certs-650507"  ...
	I0924 00:56:38.691921   60138 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0924 00:56:38.691964   60138 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 00:56:38.692273   60138 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0924 00:56:38.692309   60138 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 00:56:38.695277   60138 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 00:56:38.695715   60138 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 01:55:43 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 00:56:38.695751   60138 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 00:56:38.695908   60138 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 00:56:38.696123   60138 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 00:56:38.696388   60138 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 00:56:38.696566   60138 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa Username:docker}
	I0924 00:56:38.782494   60138 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0924 00:56:38.846059   60138 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0924 00:56:38.917392   60138 main.go:141] libmachine: Stopping "embed-certs-650507"...
	I0924 00:56:38.917424   60138 main.go:141] libmachine: (embed-certs-650507) Calling .GetState
	I0924 00:56:38.919358   60138 main.go:141] libmachine: (embed-certs-650507) Calling .Stop
	I0924 00:56:38.923454   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 0/120
	I0924 00:56:39.925166   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 1/120
	I0924 00:56:40.927605   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 2/120
	I0924 00:56:41.929166   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 3/120
	I0924 00:56:42.931087   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 4/120
	I0924 00:56:43.933727   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 5/120
	I0924 00:56:44.935290   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 6/120
	I0924 00:56:45.937153   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 7/120
	I0924 00:56:46.939614   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 8/120
	I0924 00:56:47.941370   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 9/120
	I0924 00:56:48.942755   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 10/120
	I0924 00:56:49.944382   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 11/120
	I0924 00:56:50.946151   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 12/120
	I0924 00:56:51.947665   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 13/120
	I0924 00:56:52.949344   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 14/120
	I0924 00:56:53.951114   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 15/120
	I0924 00:56:54.952809   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 16/120
	I0924 00:56:55.954858   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 17/120
	I0924 00:56:56.957673   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 18/120
	I0924 00:56:57.959327   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 19/120
	I0924 00:56:58.961971   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 20/120
	I0924 00:56:59.963757   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 21/120
	I0924 00:57:00.965839   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 22/120
	I0924 00:57:01.967541   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 23/120
	I0924 00:57:02.969135   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 24/120
	I0924 00:57:03.971422   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 25/120
	I0924 00:57:04.973551   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 26/120
	I0924 00:57:05.975368   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 27/120
	I0924 00:57:06.977314   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 28/120
	I0924 00:57:07.978992   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 29/120
	I0924 00:57:08.980911   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 30/120
	I0924 00:57:09.982468   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 31/120
	I0924 00:57:10.984176   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 32/120
	I0924 00:57:11.986030   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 33/120
	I0924 00:57:12.987408   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 34/120
	I0924 00:57:13.989562   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 35/120
	I0924 00:57:14.991210   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 36/120
	I0924 00:57:15.993095   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 37/120
	I0924 00:57:16.995456   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 38/120
	I0924 00:57:17.997720   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 39/120
	I0924 00:57:18.999845   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 40/120
	I0924 00:57:20.002115   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 41/120
	I0924 00:57:21.003477   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 42/120
	I0924 00:57:22.004960   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 43/120
	I0924 00:57:23.006447   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 44/120
	I0924 00:57:24.008129   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 45/120
	I0924 00:57:25.010360   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 46/120
	I0924 00:57:26.011607   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 47/120
	I0924 00:57:27.013244   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 48/120
	I0924 00:57:28.014646   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 49/120
	I0924 00:57:29.016961   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 50/120
	I0924 00:57:30.018194   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 51/120
	I0924 00:57:31.019837   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 52/120
	I0924 00:57:32.021350   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 53/120
	I0924 00:57:33.022891   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 54/120
	I0924 00:57:34.024805   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 55/120
	I0924 00:57:35.027311   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 56/120
	I0924 00:57:36.029051   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 57/120
	I0924 00:57:37.031036   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 58/120
	I0924 00:57:38.032946   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 59/120
	I0924 00:57:39.035368   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 60/120
	I0924 00:57:40.036919   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 61/120
	I0924 00:57:41.038880   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 62/120
	I0924 00:57:42.040586   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 63/120
	I0924 00:57:43.041896   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 64/120
	I0924 00:57:44.044410   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 65/120
	I0924 00:57:45.045719   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 66/120
	I0924 00:57:46.047195   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 67/120
	I0924 00:57:47.048630   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 68/120
	I0924 00:57:48.050213   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 69/120
	I0924 00:57:49.052658   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 70/120
	I0924 00:57:50.054071   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 71/120
	I0924 00:57:51.055723   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 72/120
	I0924 00:57:52.057347   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 73/120
	I0924 00:57:53.058948   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 74/120
	I0924 00:57:54.061280   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 75/120
	I0924 00:57:55.062913   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 76/120
	I0924 00:57:56.064563   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 77/120
	I0924 00:57:57.066173   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 78/120
	I0924 00:57:58.067978   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 79/120
	I0924 00:57:59.069384   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 80/120
	I0924 00:58:00.071219   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 81/120
	I0924 00:58:01.073056   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 82/120
	I0924 00:58:02.074874   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 83/120
	I0924 00:58:03.076640   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 84/120
	I0924 00:58:04.078591   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 85/120
	I0924 00:58:05.080566   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 86/120
	I0924 00:58:06.082033   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 87/120
	I0924 00:58:07.083376   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 88/120
	I0924 00:58:08.084685   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 89/120
	I0924 00:58:09.086865   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 90/120
	I0924 00:58:10.088105   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 91/120
	I0924 00:58:11.089537   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 92/120
	I0924 00:58:12.090998   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 93/120
	I0924 00:58:13.092380   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 94/120
	I0924 00:58:14.094432   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 95/120
	I0924 00:58:15.095794   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 96/120
	I0924 00:58:16.097275   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 97/120
	I0924 00:58:17.098935   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 98/120
	I0924 00:58:18.100383   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 99/120
	I0924 00:58:19.101715   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 100/120
	I0924 00:58:20.103327   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 101/120
	I0924 00:58:21.104892   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 102/120
	I0924 00:58:22.106253   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 103/120
	I0924 00:58:23.107789   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 104/120
	I0924 00:58:24.109972   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 105/120
	I0924 00:58:25.111549   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 106/120
	I0924 00:58:26.112994   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 107/120
	I0924 00:58:27.114212   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 108/120
	I0924 00:58:28.115631   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 109/120
	I0924 00:58:29.116924   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 110/120
	I0924 00:58:30.118473   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 111/120
	I0924 00:58:31.119877   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 112/120
	I0924 00:58:32.121636   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 113/120
	I0924 00:58:33.123018   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 114/120
	I0924 00:58:34.125207   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 115/120
	I0924 00:58:35.126680   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 116/120
	I0924 00:58:36.128070   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 117/120
	I0924 00:58:37.129540   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 118/120
	I0924 00:58:38.131086   60138 main.go:141] libmachine: (embed-certs-650507) Waiting for machine to stop 119/120
	I0924 00:58:39.131838   60138 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0924 00:58:39.131909   60138 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0924 00:58:39.134106   60138 out.go:201] 
	W0924 00:58:39.135881   60138 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0924 00:58:39.135905   60138 out.go:270] * 
	* 
	W0924 00:58:39.138489   60138 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 00:58:39.139840   60138 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-650507 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-650507 -n embed-certs-650507
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-650507 -n embed-certs-650507: exit status 3 (18.666494878s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0924 00:58:57.808664   60865 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.104:22: connect: no route to host
	E0924 00:58:57.808684   60865 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.104:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-650507" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.19s)
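Note: the post-mortem status check above exits with status 3 because the guest's SSH port is unreachable ("dial tcp 192.168.39.104:22: connect: no route to host"), so no host-level state can be read at all. Below is a minimal Go sketch of that kind of reachability probe, assuming only that a status check needs a TCP connection to the guest's SSH port; it is not minikube's status code, and probeSSH is a hypothetical helper name.

package main

import (
	"fmt"
	"net"
	"time"
)

// probeSSH dials the guest's SSH port with a timeout; an unreachable VM shows
// up as a dial error such as "connect: no route to host".
func probeSSH(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return fmt.Errorf("status error: %w", err)
	}
	return conn.Close()
}

func main() {
	// Address taken from the log above; on a host with no route to that subnet
	// this prints the dial error (or a timeout) rather than reporting status.
	if err := probeSSH("192.168.39.104:22", 3*time.Second); err != nil {
		fmt.Println(err)
	}
}

Run against an unreachable address, as in the logs above, the probe surfaces the same "no route to host" error, which is why the helper reports the host as not running and skips log retrieval.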

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (138.98s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-465341 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-465341 --alsologtostderr -v=3: exit status 82 (2m0.504591127s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-465341"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0924 00:57:37.751965   60591 out.go:345] Setting OutFile to fd 1 ...
	I0924 00:57:37.752093   60591 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:57:37.752105   60591 out.go:358] Setting ErrFile to fd 2...
	I0924 00:57:37.752112   60591 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:57:37.752315   60591 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7623/.minikube/bin
	I0924 00:57:37.752627   60591 out.go:352] Setting JSON to false
	I0924 00:57:37.752730   60591 mustload.go:65] Loading cluster: default-k8s-diff-port-465341
	I0924 00:57:37.753119   60591 config.go:182] Loaded profile config "default-k8s-diff-port-465341": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 00:57:37.753203   60591 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/config.json ...
	I0924 00:57:37.753403   60591 mustload.go:65] Loading cluster: default-k8s-diff-port-465341
	I0924 00:57:37.753535   60591 config.go:182] Loaded profile config "default-k8s-diff-port-465341": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 00:57:37.753582   60591 stop.go:39] StopHost: default-k8s-diff-port-465341
	I0924 00:57:37.753999   60591 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 00:57:37.754042   60591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:57:37.769773   60591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39893
	I0924 00:57:37.770278   60591 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:57:37.770923   60591 main.go:141] libmachine: Using API Version  1
	I0924 00:57:37.770946   60591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:57:37.771264   60591 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:57:37.773724   60591 out.go:177] * Stopping node "default-k8s-diff-port-465341"  ...
	I0924 00:57:37.775027   60591 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0924 00:57:37.775065   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 00:57:37.775278   60591 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0924 00:57:37.775309   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 00:57:37.778117   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 00:57:37.778507   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 01:56:45 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 00:57:37.778529   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 00:57:37.778685   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 00:57:37.778867   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 00:57:37.778999   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 00:57:37.779167   60591 sshutil.go:53] new ssh client: &{IP:192.168.61.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa Username:docker}
	I0924 00:57:37.874277   60591 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0924 00:57:37.929366   60591 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0924 00:57:37.994292   60591 main.go:141] libmachine: Stopping "default-k8s-diff-port-465341"...
	I0924 00:57:37.994317   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetState
	I0924 00:57:37.996098   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .Stop
	I0924 00:57:37.999951   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 0/120
	I0924 00:57:39.001822   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 1/120
	I0924 00:57:40.003531   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 2/120
	I0924 00:57:41.005608   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 3/120
	I0924 00:57:42.007191   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 4/120
	I0924 00:57:43.010032   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 5/120
	I0924 00:57:44.012023   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 6/120
	I0924 00:57:45.013543   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 7/120
	I0924 00:57:46.015341   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 8/120
	I0924 00:57:47.016935   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 9/120
	I0924 00:57:48.018401   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 10/120
	I0924 00:57:49.019931   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 11/120
	I0924 00:57:50.021616   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 12/120
	I0924 00:57:51.023153   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 13/120
	I0924 00:57:52.024688   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 14/120
	I0924 00:57:53.026990   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 15/120
	I0924 00:57:54.028404   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 16/120
	I0924 00:57:55.029975   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 17/120
	I0924 00:57:56.031694   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 18/120
	I0924 00:57:57.033393   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 19/120
	I0924 00:57:58.036214   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 20/120
	I0924 00:57:59.037642   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 21/120
	I0924 00:58:00.038909   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 22/120
	I0924 00:58:01.040262   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 23/120
	I0924 00:58:02.041788   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 24/120
	I0924 00:58:03.043953   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 25/120
	I0924 00:58:04.045752   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 26/120
	I0924 00:58:05.047328   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 27/120
	I0924 00:58:06.048911   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 28/120
	I0924 00:58:07.050306   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 29/120
	I0924 00:58:08.051925   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 30/120
	I0924 00:58:09.053473   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 31/120
	I0924 00:58:10.054972   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 32/120
	I0924 00:58:11.057012   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 33/120
	I0924 00:58:12.058449   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 34/120
	I0924 00:58:13.061179   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 35/120
	I0924 00:58:14.062585   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 36/120
	I0924 00:58:15.064020   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 37/120
	I0924 00:58:16.065794   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 38/120
	I0924 00:58:17.067402   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 39/120
	I0924 00:58:18.069103   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 40/120
	I0924 00:58:19.070894   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 41/120
	I0924 00:58:20.072399   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 42/120
	I0924 00:58:21.073908   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 43/120
	I0924 00:58:22.075226   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 44/120
	I0924 00:58:23.077256   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 45/120
	I0924 00:58:24.078962   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 46/120
	I0924 00:58:25.080684   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 47/120
	I0924 00:58:26.082199   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 48/120
	I0924 00:58:27.083659   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 49/120
	I0924 00:58:28.086060   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 50/120
	I0924 00:58:29.087676   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 51/120
	I0924 00:58:30.089307   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 52/120
	I0924 00:58:31.090737   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 53/120
	I0924 00:58:32.092259   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 54/120
	I0924 00:58:33.094641   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 55/120
	I0924 00:58:34.096431   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 56/120
	I0924 00:58:35.097871   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 57/120
	I0924 00:58:36.099319   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 58/120
	I0924 00:58:37.101039   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 59/120
	I0924 00:58:38.102426   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 60/120
	I0924 00:58:39.104158   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 61/120
	I0924 00:58:40.105696   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 62/120
	I0924 00:58:41.107388   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 63/120
	I0924 00:58:42.108923   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 64/120
	I0924 00:58:43.111126   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 65/120
	I0924 00:58:44.112685   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 66/120
	I0924 00:58:45.114144   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 67/120
	I0924 00:58:46.115817   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 68/120
	I0924 00:58:47.117364   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 69/120
	I0924 00:58:48.118825   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 70/120
	I0924 00:58:49.120387   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 71/120
	I0924 00:58:50.122059   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 72/120
	I0924 00:58:51.123561   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 73/120
	I0924 00:58:52.125142   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 74/120
	I0924 00:58:53.127541   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 75/120
	I0924 00:58:54.129160   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 76/120
	I0924 00:58:55.130694   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 77/120
	I0924 00:58:56.132125   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 78/120
	I0924 00:58:57.133699   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 79/120
	I0924 00:58:58.135180   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 80/120
	I0924 00:58:59.137917   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 81/120
	I0924 00:59:00.139538   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 82/120
	I0924 00:59:01.141023   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 83/120
	I0924 00:59:02.142835   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 84/120
	I0924 00:59:03.145025   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 85/120
	I0924 00:59:04.146487   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 86/120
	I0924 00:59:05.147777   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 87/120
	I0924 00:59:06.149286   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 88/120
	I0924 00:59:07.151021   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 89/120
	I0924 00:59:08.153435   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 90/120
	I0924 00:59:09.155175   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 91/120
	I0924 00:59:10.156935   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 92/120
	I0924 00:59:11.158384   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 93/120
	I0924 00:59:12.160071   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 94/120
	I0924 00:59:13.162397   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 95/120
	I0924 00:59:14.164052   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 96/120
	I0924 00:59:15.165628   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 97/120
	I0924 00:59:16.167068   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 98/120
	I0924 00:59:17.168662   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 99/120
	I0924 00:59:18.170798   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 100/120
	I0924 00:59:19.172669   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 101/120
	I0924 00:59:20.175189   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 102/120
	I0924 00:59:21.176718   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 103/120
	I0924 00:59:22.179198   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 104/120
	I0924 00:59:23.181670   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 105/120
	I0924 00:59:24.183092   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 106/120
	I0924 00:59:25.184604   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 107/120
	I0924 00:59:26.186135   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 108/120
	I0924 00:59:27.187670   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 109/120
	I0924 00:59:28.189148   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 110/120
	I0924 00:59:29.190842   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 111/120
	I0924 00:59:30.192612   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 112/120
	I0924 00:59:31.194254   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 113/120
	I0924 00:59:32.195768   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 114/120
	I0924 00:59:33.198448   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 115/120
	I0924 00:59:34.199880   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 116/120
	I0924 00:59:35.201595   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 117/120
	I0924 00:59:36.203081   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 118/120
	I0924 00:59:37.204866   60591 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for machine to stop 119/120
	I0924 00:59:38.205563   60591 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0924 00:59:38.205633   60591 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0924 00:59:38.207654   60591 out.go:201] 
	W0924 00:59:38.209075   60591 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0924 00:59:38.209105   60591 out.go:270] * 
	* 
	W0924 00:59:38.211736   60591 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 00:59:38.213176   60591 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-465341 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-465341 -n default-k8s-diff-port-465341
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-465341 -n default-k8s-diff-port-465341: exit status 3 (18.474109288s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0924 00:59:56.688723   61472 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.186:22: connect: no route to host
	E0924 00:59:56.688753   61472 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.186:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-465341" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (138.98s)
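The stop above gave up with GUEST_STOP_TIMEOUT after libmachine polled the domain 120 times, roughly two minutes, without it ever leaving the "Running" state. A minimal sketch of inspecting and force-stopping the underlying KVM domain with virsh, assuming access to the same qemu:///system URI this run uses; with the kvm2 driver the profile name shown in the log is also the libvirt domain name:

	# list libvirt domains and their current states
	virsh -c qemu:///system list --all
	# request an ACPI shutdown first, then a hard power-off if the guest ignores it
	virsh -c qemu:///system shutdown default-k8s-diff-port-465341
	virsh -c qemu:///system destroy default-k8s-diff-port-465341
	# confirm minikube now reports the host as stopped
	out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-465341

Once the domain is actually off, the post-stop checks below would be expected to report "Stopped" instead of attempting the SSH sessions that return "no route to host".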

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-674057 -n no-preload-674057
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-674057 -n no-preload-674057: exit status 3 (3.167883407s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0924 00:58:49.456683   60929 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.161:22: connect: no route to host
	E0924 00:58:49.456706   60929 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.161:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-674057 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-674057 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152492164s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.161:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-674057 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-674057 -n no-preload-674057
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-674057 -n no-preload-674057: exit status 3 (3.063379995s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0924 00:58:58.672754   60994 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.161:22: connect: no route to host
	E0924 00:58:58.672778   60994 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.161:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-674057" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
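The exit-status-3 checks in this test render minikube status through the Go template {{.Host}}, so only the host field is visible. A short sketch of pulling the rest of the same status when debugging a run like this; the --output=json flag and the Kubelet/APIServer template fields are taken from minikube's status help text, not from this log:

	# full human-readable status, and the same data as JSON
	out/minikube-linux-amd64 status -p no-preload-674057
	out/minikube-linux-amd64 status -p no-preload-674057 --output=json
	# individual fields via the same Go-template mechanism the test uses
	out/minikube-linux-amd64 status --format='{{.Host}} {{.Kubelet}} {{.APIServer}}' -p no-preload-674057

In the state captured above all of these would still fail the same way, since each check ultimately needs the SSH session to 192.168.50.161:22 that is returning "no route to host".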

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-650507 -n embed-certs-650507
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-650507 -n embed-certs-650507: exit status 3 (3.16803826s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0924 00:59:00.976765   61040 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.104:22: connect: no route to host
	E0924 00:59:00.976791   61040 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.104:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-650507 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-650507 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.157552838s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.104:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-650507 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-650507 -n embed-certs-650507
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-650507 -n embed-certs-650507: exit status 3 (3.05793126s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0924 00:59:10.192722   61162 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.104:22: connect: no route to host
	E0924 00:59:10.192744   61162 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.104:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-650507" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)
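The MK_ADDON_ENABLE_PAUSED exit happens before any dashboard manifest is applied: per the error chain above, enabling an addon first lists paused containers via crictl over an SSH session to the node, and it is that session that cannot be opened. A sketch of running the same check by hand once the node is reachable again, assuming the crictl binary that ships inside the minikube guest:

	# open a session into the node and list containers/pods the way the paused check does
	out/minikube-linux-amd64 ssh -p embed-certs-650507 -- sudo crictl ps
	out/minikube-linux-amd64 ssh -p embed-certs-650507 -- sudo crictl pods

While the VM is unreachable at 192.168.39.104:22 these commands fail at the same SSH step as the addon enable itself.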

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.48s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-171598 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-171598 create -f testdata/busybox.yaml: exit status 1 (44.875709ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-171598" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-171598 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-171598 -n old-k8s-version-171598
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-171598 -n old-k8s-version-171598: exit status 6 (213.6932ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0924 00:59:08.251405   61233 status.go:451] kubeconfig endpoint: get endpoint: "old-k8s-version-171598" does not appear in /home/jenkins/minikube-integration/19696-7623/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-171598" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-171598 -n old-k8s-version-171598
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-171598 -n old-k8s-version-171598: exit status 6 (222.578308ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0924 00:59:08.472219   61263 status.go:451] kubeconfig endpoint: get endpoint: "old-k8s-version-171598" does not appear in /home/jenkins/minikube-integration/19696-7623/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-171598" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.48s)
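Both post-mortem status calls here fail for a different reason than the other profiles: the endpoint for old-k8s-version-171598 is missing from /home/jenkins/minikube-integration/19696-7623/kubeconfig, so the VM is Running but every kubectl --context old-k8s-version-171598 invocation reports that the context does not exist. A minimal sketch of the repair path the warning itself points at:

	# see which contexts the kubeconfig actually contains
	kubectl config get-contexts
	# rewrite this profile's context and endpoint into the kubeconfig, as the warning suggests
	out/minikube-linux-amd64 update-context -p old-k8s-version-171598
	# then retry the step the test attempted
	kubectl --context old-k8s-version-171598 create -f testdata/busybox.yaml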

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (88.99s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-171598 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-171598 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m28.7254206s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-171598 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-171598 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-171598 describe deploy/metrics-server -n kube-system: exit status 1 (44.624314ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-171598" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-171598 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-171598 -n old-k8s-version-171598
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-171598 -n old-k8s-version-171598: exit status 6 (216.104444ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0924 01:00:37.460198   61842 status.go:451] kubeconfig endpoint: get endpoint: "old-k8s-version-171598" does not appear in /home/jenkins/minikube-integration/19696-7623/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-171598" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (88.99s)
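The enable here got further than the post-stop cases: the metrics-server manifests were rendered with the fake.domain registry override, and the failure is in the callback that applies them on the node, where kubectl against localhost:8443 gets "connection refused" because the apiserver never came up for this profile. A sketch of checking the control plane before retrying; the /readyz endpoint is a standard apiserver health path and is an assumption here, not something shown in this log:

	# readiness seen through the profile's context (still blocked while the kubeconfig context is stale, see above)
	kubectl --context old-k8s-version-171598 get --raw=/readyz
	# or ask the same localhost:8443 endpoint the addon callback uses, from inside the node
	out/minikube-linux-amd64 ssh -p old-k8s-version-171598 -- sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get --raw=/readyz
	# once it answers ok, re-run the enable with the same overrides
	out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-171598 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain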

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-465341 -n default-k8s-diff-port-465341
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-465341 -n default-k8s-diff-port-465341: exit status 3 (3.168053356s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0924 00:59:59.856666   61585 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.186:22: connect: no route to host
	E0924 00:59:59.856689   61585 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.186:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-465341 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-465341 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153619241s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.186:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-465341 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-465341 -n default-k8s-diff-port-465341
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-465341 -n default-k8s-diff-port-465341: exit status 3 (3.0618415s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0924 01:00:09.072763   61667 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.186:22: connect: no route to host
	E0924 01:00:09.072785   61667 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.186:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-465341" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)
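As with the other two profiles, every step here dies on the SSH dial to the node: 192.168.61.186:22 returns "no route to host", which usually means nothing is answering at that address on the libvirt bridge even though the earlier stop timed out with the domain still reported as Running. A small sketch of narrowing that down from the host, assuming ordinary networking tools on the Jenkins agent:

	# the address minikube has recorded for this profile
	out/minikube-linux-amd64 ip -p default-k8s-diff-port-465341
	# is anything reachable there, and is port 22 open?
	ping -c 3 192.168.61.186
	nc -vz -w 5 192.168.61.186 22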

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (726.53s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-171598 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0924 01:00:43.334619   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:02:06.411984   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:03:38.361813   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/functional-666615/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:05:43.332755   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:08:38.362242   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/functional-666615/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-171598 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m2.841928161s)

                                                
                                                
-- stdout --
	* [old-k8s-version-171598] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19696
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19696-7623/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7623/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-171598" primary control-plane node in "old-k8s-version-171598" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-171598" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0924 01:00:40.983605   61989 out.go:345] Setting OutFile to fd 1 ...
	I0924 01:00:40.983716   61989 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 01:00:40.983722   61989 out.go:358] Setting ErrFile to fd 2...
	I0924 01:00:40.983728   61989 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 01:00:40.983918   61989 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7623/.minikube/bin
	I0924 01:00:40.984500   61989 out.go:352] Setting JSON to false
	I0924 01:00:40.985412   61989 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6185,"bootTime":1727133456,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0924 01:00:40.985513   61989 start.go:139] virtualization: kvm guest
	I0924 01:00:40.987848   61989 out.go:177] * [old-k8s-version-171598] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0924 01:00:40.989366   61989 out.go:177]   - MINIKUBE_LOCATION=19696
	I0924 01:00:40.989467   61989 notify.go:220] Checking for updates...
	I0924 01:00:40.992462   61989 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 01:00:40.994144   61989 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 01:00:40.995782   61989 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7623/.minikube
	I0924 01:00:40.997503   61989 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0924 01:00:40.999038   61989 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 01:00:41.000959   61989 config.go:182] Loaded profile config "old-k8s-version-171598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0924 01:00:41.001315   61989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:00:41.001388   61989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:00:41.017304   61989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41055
	I0924 01:00:41.017751   61989 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:00:41.018320   61989 main.go:141] libmachine: Using API Version  1
	I0924 01:00:41.018355   61989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:00:41.018708   61989 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:00:41.018964   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:00:41.021075   61989 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0924 01:00:41.022764   61989 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 01:00:41.023156   61989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:00:41.023204   61989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:00:41.038764   61989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40545
	I0924 01:00:41.039238   61989 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:00:41.039828   61989 main.go:141] libmachine: Using API Version  1
	I0924 01:00:41.039856   61989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:00:41.040272   61989 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:00:41.040569   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:00:41.078622   61989 out.go:177] * Using the kvm2 driver based on existing profile
	I0924 01:00:41.079930   61989 start.go:297] selected driver: kvm2
	I0924 01:00:41.079945   61989 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-171598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-171598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 01:00:41.080076   61989 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 01:00:41.080841   61989 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 01:00:41.080927   61989 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19696-7623/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0924 01:00:41.096851   61989 install.go:137] /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0924 01:00:41.097306   61989 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 01:00:41.097345   61989 cni.go:84] Creating CNI manager for ""
	I0924 01:00:41.097410   61989 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:00:41.097465   61989 start.go:340] cluster config:
	{Name:old-k8s-version-171598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-171598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 01:00:41.097610   61989 iso.go:125] acquiring lock: {Name:mk8a983e49e920eac32e0f79c34143c3d478115e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 01:00:41.099797   61989 out.go:177] * Starting "old-k8s-version-171598" primary control-plane node in "old-k8s-version-171598" cluster
	I0924 01:00:41.101644   61989 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0924 01:00:41.101691   61989 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0924 01:00:41.101704   61989 cache.go:56] Caching tarball of preloaded images
	I0924 01:00:41.101801   61989 preload.go:172] Found /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0924 01:00:41.101816   61989 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0924 01:00:41.101922   61989 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/config.json ...
	I0924 01:00:41.102126   61989 start.go:360] acquireMachinesLock for old-k8s-version-171598: {Name:mkdd0eb053efc5a47fd01c8410d7f603dccb8d0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 01:04:16.233175   61989 start.go:364] duration metric: took 3m35.131018203s to acquireMachinesLock for "old-k8s-version-171598"
	I0924 01:04:16.233254   61989 start.go:96] Skipping create...Using existing machine configuration
	I0924 01:04:16.233262   61989 fix.go:54] fixHost starting: 
	I0924 01:04:16.233733   61989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:04:16.233787   61989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:04:16.255690   61989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42181
	I0924 01:04:16.256135   61989 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:04:16.256729   61989 main.go:141] libmachine: Using API Version  1
	I0924 01:04:16.256763   61989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:04:16.257122   61989 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:04:16.257365   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:04:16.257560   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetState
	I0924 01:04:16.259055   61989 fix.go:112] recreateIfNeeded on old-k8s-version-171598: state=Stopped err=<nil>
	I0924 01:04:16.259091   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	W0924 01:04:16.259266   61989 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 01:04:16.261327   61989 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-171598" ...
	I0924 01:04:16.262929   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .Start
	I0924 01:04:16.263123   61989 main.go:141] libmachine: (old-k8s-version-171598) Ensuring networks are active...
	I0924 01:04:16.264062   61989 main.go:141] libmachine: (old-k8s-version-171598) Ensuring network default is active
	I0924 01:04:16.264543   61989 main.go:141] libmachine: (old-k8s-version-171598) Ensuring network mk-old-k8s-version-171598 is active
	I0924 01:04:16.264954   61989 main.go:141] libmachine: (old-k8s-version-171598) Getting domain xml...
	I0924 01:04:16.265899   61989 main.go:141] libmachine: (old-k8s-version-171598) Creating domain...
	I0924 01:04:17.566157   61989 main.go:141] libmachine: (old-k8s-version-171598) Waiting to get IP...
	I0924 01:04:17.567223   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:17.567644   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:17.567724   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:17.567625   62886 retry.go:31] will retry after 301.652575ms: waiting for machine to come up
	I0924 01:04:17.871163   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:17.871700   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:17.871729   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:17.871645   62886 retry.go:31] will retry after 337.632324ms: waiting for machine to come up
	I0924 01:04:18.211081   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:18.211954   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:18.212013   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:18.211892   62886 retry.go:31] will retry after 431.70455ms: waiting for machine to come up
	I0924 01:04:18.645408   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:18.646017   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:18.646044   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:18.645958   62886 retry.go:31] will retry after 582.966569ms: waiting for machine to come up
	I0924 01:04:19.230457   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:19.230954   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:19.230980   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:19.230897   62886 retry.go:31] will retry after 720.62326ms: waiting for machine to come up
	I0924 01:04:19.953023   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:19.953570   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:19.953603   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:19.953512   62886 retry.go:31] will retry after 688.597177ms: waiting for machine to come up
	I0924 01:04:20.644150   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:20.644636   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:20.644672   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:20.644578   62886 retry.go:31] will retry after 1.084671138s: waiting for machine to come up
	I0924 01:04:21.730823   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:21.731385   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:21.731411   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:21.731351   62886 retry.go:31] will retry after 1.051424847s: waiting for machine to come up
	I0924 01:04:22.784644   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:22.785194   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:22.785223   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:22.785138   62886 retry.go:31] will retry after 1.750498954s: waiting for machine to come up
	I0924 01:04:24.537680   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:24.538085   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:24.538109   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:24.538039   62886 retry.go:31] will retry after 2.015183238s: waiting for machine to come up
	I0924 01:04:26.555221   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:26.555674   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:26.555695   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:26.555634   62886 retry.go:31] will retry after 2.568414115s: waiting for machine to come up
	I0924 01:04:29.127625   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:29.128130   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:29.128149   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:29.128108   62886 retry.go:31] will retry after 2.207252231s: waiting for machine to come up
	I0924 01:04:31.337368   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:31.338025   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:31.338128   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:31.338011   62886 retry.go:31] will retry after 4.137847727s: waiting for machine to come up
	I0924 01:04:35.478410   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.478991   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has current primary IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.479016   61989 main.go:141] libmachine: (old-k8s-version-171598) Found IP for machine: 192.168.83.3
	I0924 01:04:35.479029   61989 main.go:141] libmachine: (old-k8s-version-171598) Reserving static IP address...
	I0924 01:04:35.479586   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "old-k8s-version-171598", mac: "52:54:00:20:3c:a7", ip: "192.168.83.3"} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:35.479607   61989 main.go:141] libmachine: (old-k8s-version-171598) Reserved static IP address: 192.168.83.3
	I0924 01:04:35.479626   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | skip adding static IP to network mk-old-k8s-version-171598 - found existing host DHCP lease matching {name: "old-k8s-version-171598", mac: "52:54:00:20:3c:a7", ip: "192.168.83.3"}
	I0924 01:04:35.479643   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | Getting to WaitForSSH function...
	I0924 01:04:35.479659   61989 main.go:141] libmachine: (old-k8s-version-171598) Waiting for SSH to be available...
	I0924 01:04:35.482028   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.482377   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:35.482419   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.482499   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | Using SSH client type: external
	I0924 01:04:35.482550   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | Using SSH private key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa (-rw-------)
	I0924 01:04:35.482585   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 01:04:35.482600   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | About to run SSH command:
	I0924 01:04:35.482614   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | exit 0
	I0924 01:04:35.613364   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | SSH cmd err, output: <nil>: 
	I0924 01:04:35.613847   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetConfigRaw
	I0924 01:04:35.614543   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetIP
	I0924 01:04:35.617366   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.617742   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:35.617774   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.618068   61989 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/config.json ...
	I0924 01:04:35.618260   61989 machine.go:93] provisionDockerMachine start ...
	I0924 01:04:35.618279   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:04:35.618489   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:35.621130   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.621472   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:35.621497   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.621722   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:35.621914   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:35.622091   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:35.622354   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:35.622558   61989 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:35.622749   61989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.83.3 22 <nil> <nil>}
	I0924 01:04:35.622760   61989 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 01:04:35.736637   61989 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 01:04:35.736661   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetMachineName
	I0924 01:04:35.736943   61989 buildroot.go:166] provisioning hostname "old-k8s-version-171598"
	I0924 01:04:35.736973   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetMachineName
	I0924 01:04:35.737151   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:35.739921   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.740304   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:35.740362   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.740502   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:35.740678   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:35.740851   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:35.740994   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:35.741218   61989 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:35.741409   61989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.83.3 22 <nil> <nil>}
	I0924 01:04:35.741423   61989 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-171598 && echo "old-k8s-version-171598" | sudo tee /etc/hostname
	I0924 01:04:35.866963   61989 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-171598
	
	I0924 01:04:35.866994   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:35.870342   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.870860   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:35.870893   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.871145   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:35.871406   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:35.871638   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:35.871850   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:35.872050   61989 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:35.872253   61989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.83.3 22 <nil> <nil>}
	I0924 01:04:35.872276   61989 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-171598' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-171598/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-171598' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 01:04:35.998933   61989 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 01:04:35.998962   61989 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19696-7623/.minikube CaCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19696-7623/.minikube}
	I0924 01:04:35.998983   61989 buildroot.go:174] setting up certificates
	I0924 01:04:35.998994   61989 provision.go:84] configureAuth start
	I0924 01:04:35.999005   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetMachineName
	I0924 01:04:35.999359   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetIP
	I0924 01:04:36.002499   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.003027   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.003052   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.003167   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:36.005508   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.005773   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.005796   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.005909   61989 provision.go:143] copyHostCerts
	I0924 01:04:36.005967   61989 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem, removing ...
	I0924 01:04:36.005986   61989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0924 01:04:36.006037   61989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem (1082 bytes)
	I0924 01:04:36.006129   61989 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem, removing ...
	I0924 01:04:36.006137   61989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0924 01:04:36.006156   61989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem (1123 bytes)
	I0924 01:04:36.006209   61989 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem, removing ...
	I0924 01:04:36.006216   61989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0924 01:04:36.006237   61989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem (1679 bytes)
	I0924 01:04:36.006310   61989 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-171598 san=[127.0.0.1 192.168.83.3 localhost minikube old-k8s-version-171598]
	I0924 01:04:36.084609   61989 provision.go:177] copyRemoteCerts
	I0924 01:04:36.084671   61989 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 01:04:36.084698   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:36.087740   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.088046   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.088075   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.088278   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:36.088523   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.088716   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:36.088854   61989 sshutil.go:53] new ssh client: &{IP:192.168.83.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa Username:docker}
	I0924 01:04:36.178597   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0924 01:04:36.202768   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0924 01:04:36.225933   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0924 01:04:36.250014   61989 provision.go:87] duration metric: took 251.005829ms to configureAuth
	I0924 01:04:36.250046   61989 buildroot.go:189] setting minikube options for container-runtime
	I0924 01:04:36.250369   61989 config.go:182] Loaded profile config "old-k8s-version-171598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0924 01:04:36.250453   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:36.253290   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.253912   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.253943   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.254242   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:36.254474   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.254650   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.254764   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:36.254958   61989 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:36.255124   61989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.83.3 22 <nil> <nil>}
	I0924 01:04:36.255138   61989 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 01:04:36.472324   61989 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 01:04:36.472381   61989 machine.go:96] duration metric: took 854.106776ms to provisionDockerMachine
	I0924 01:04:36.472401   61989 start.go:293] postStartSetup for "old-k8s-version-171598" (driver="kvm2")
	I0924 01:04:36.472419   61989 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 01:04:36.472451   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:04:36.472814   61989 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 01:04:36.472849   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:36.475567   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.475941   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.475969   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.476125   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:36.476403   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.476614   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:36.476831   61989 sshutil.go:53] new ssh client: &{IP:192.168.83.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa Username:docker}
	I0924 01:04:36.562688   61989 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 01:04:36.566476   61989 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 01:04:36.566501   61989 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/addons for local assets ...
	I0924 01:04:36.566561   61989 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/files for local assets ...
	I0924 01:04:36.566635   61989 filesync.go:149] local asset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> 147932.pem in /etc/ssl/certs
	I0924 01:04:36.566724   61989 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 01:04:36.576132   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /etc/ssl/certs/147932.pem (1708 bytes)
	I0924 01:04:36.599696   61989 start.go:296] duration metric: took 127.276787ms for postStartSetup
	I0924 01:04:36.599738   61989 fix.go:56] duration metric: took 20.366477202s for fixHost
	I0924 01:04:36.599763   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:36.603462   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.603836   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.603867   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.604057   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:36.604500   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.604721   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.604878   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:36.605041   61989 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:36.605285   61989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.83.3 22 <nil> <nil>}
	I0924 01:04:36.605303   61989 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 01:04:36.717061   61989 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727139876.688490589
	
	I0924 01:04:36.717091   61989 fix.go:216] guest clock: 1727139876.688490589
	I0924 01:04:36.717102   61989 fix.go:229] Guest: 2024-09-24 01:04:36.688490589 +0000 UTC Remote: 2024-09-24 01:04:36.599742488 +0000 UTC m=+235.652611441 (delta=88.748101ms)
	I0924 01:04:36.717157   61989 fix.go:200] guest clock delta is within tolerance: 88.748101ms
	I0924 01:04:36.717165   61989 start.go:83] releasing machines lock for "old-k8s-version-171598", held for 20.483937438s
	I0924 01:04:36.717199   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:04:36.717499   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetIP
	I0924 01:04:36.720466   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.720959   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.720986   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.721189   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:04:36.721763   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:04:36.721965   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:04:36.722073   61989 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 01:04:36.722118   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:36.722187   61989 ssh_runner.go:195] Run: cat /version.json
	I0924 01:04:36.722215   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:36.725171   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.725384   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.725669   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.725694   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.725858   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:36.725970   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.726016   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.726065   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.726249   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:36.726254   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:36.726494   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.726513   61989 sshutil.go:53] new ssh client: &{IP:192.168.83.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa Username:docker}
	I0924 01:04:36.726657   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:36.727049   61989 sshutil.go:53] new ssh client: &{IP:192.168.83.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa Username:docker}
	I0924 01:04:36.845385   61989 ssh_runner.go:195] Run: systemctl --version
	I0924 01:04:36.853307   61989 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 01:04:37.001850   61989 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 01:04:37.009873   61989 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 01:04:37.009948   61989 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 01:04:37.032269   61989 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 01:04:37.032299   61989 start.go:495] detecting cgroup driver to use...
	I0924 01:04:37.032403   61989 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 01:04:37.056250   61989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 01:04:37.072827   61989 docker.go:217] disabling cri-docker service (if available) ...
	I0924 01:04:37.072903   61989 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 01:04:37.090639   61989 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 01:04:37.107525   61989 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 01:04:37.235495   61989 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 01:04:37.410971   61989 docker.go:233] disabling docker service ...
	I0924 01:04:37.411034   61989 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 01:04:37.427815   61989 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 01:04:37.444121   61989 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 01:04:37.568933   61989 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 01:04:37.700008   61989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 01:04:37.715529   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 01:04:37.736908   61989 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0924 01:04:37.736980   61989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:37.748540   61989 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 01:04:37.748590   61989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:37.759301   61989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:37.771008   61989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:37.782080   61989 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 01:04:37.793756   61989 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 01:04:37.803444   61989 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 01:04:37.803525   61989 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 01:04:37.818012   61989 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 01:04:37.829019   61989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:04:37.978885   61989 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 01:04:38.086263   61989 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 01:04:38.086353   61989 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 01:04:38.093479   61989 start.go:563] Will wait 60s for crictl version
	I0924 01:04:38.093573   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:38.097486   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 01:04:38.138781   61989 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 01:04:38.138872   61989 ssh_runner.go:195] Run: crio --version
	I0924 01:04:38.166832   61989 ssh_runner.go:195] Run: crio --version
	I0924 01:04:38.199764   61989 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0924 01:04:38.201359   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetIP
	I0924 01:04:38.204699   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:38.205122   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:38.205152   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:38.205408   61989 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0924 01:04:38.209456   61989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:04:38.222128   61989 kubeadm.go:883] updating cluster {Name:old-k8s-version-171598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-171598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 01:04:38.222254   61989 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0924 01:04:38.222300   61989 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 01:04:38.276802   61989 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0924 01:04:38.276864   61989 ssh_runner.go:195] Run: which lz4
	I0924 01:04:38.280989   61989 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 01:04:38.285108   61989 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 01:04:38.285138   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0924 01:04:39.903777   61989 crio.go:462] duration metric: took 1.62282331s to copy over tarball
	I0924 01:04:39.903900   61989 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0924 01:04:42.944929   61989 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.040984911s)
	I0924 01:04:42.944969   61989 crio.go:469] duration metric: took 3.041152253s to extract the tarball
	I0924 01:04:42.944981   61989 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0924 01:04:42.988315   61989 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 01:04:43.036011   61989 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0924 01:04:43.036045   61989 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0924 01:04:43.036151   61989 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:04:43.036194   61989 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0924 01:04:43.036211   61989 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0924 01:04:43.036281   61989 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 01:04:43.036301   61989 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0924 01:04:43.036344   61989 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0924 01:04:43.036310   61989 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0924 01:04:43.036577   61989 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0924 01:04:43.038440   61989 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0924 01:04:43.038458   61989 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0924 01:04:43.038482   61989 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0924 01:04:43.038502   61989 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0924 01:04:43.038554   61989 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0924 01:04:43.038588   61989 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 01:04:43.038600   61989 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0924 01:04:43.038816   61989 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:04:43.306768   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0924 01:04:43.309660   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0924 01:04:43.312684   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 01:04:43.314551   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0924 01:04:43.317719   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0924 01:04:43.326063   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0924 01:04:43.378736   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0924 01:04:43.405508   61989 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0924 01:04:43.405585   61989 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0924 01:04:43.405648   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:43.452908   61989 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0924 01:04:43.452954   61989 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0924 01:04:43.453006   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:43.471293   61989 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0924 01:04:43.471341   61989 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0924 01:04:43.471347   61989 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0924 01:04:43.471370   61989 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0924 01:04:43.471297   61989 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0924 01:04:43.471406   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:43.471421   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:43.471423   61989 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 01:04:43.471462   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:43.494687   61989 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0924 01:04:43.494735   61989 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0924 01:04:43.494782   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:43.508206   61989 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0924 01:04:43.508253   61989 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0924 01:04:43.508278   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0924 01:04:43.508298   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:43.508363   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0924 01:04:43.508419   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0924 01:04:43.508451   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0924 01:04:43.508487   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 01:04:43.508547   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0924 01:04:43.645995   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0924 01:04:43.646039   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0924 01:04:43.646098   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0924 01:04:43.646152   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0924 01:04:43.646261   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0924 01:04:43.646337   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0924 01:04:43.646413   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 01:04:43.817326   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0924 01:04:43.817416   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0924 01:04:43.817381   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0924 01:04:43.817508   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 01:04:43.817449   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0924 01:04:43.817597   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0924 01:04:43.817686   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0924 01:04:43.972782   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0924 01:04:43.972792   61989 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0924 01:04:43.972869   61989 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0924 01:04:43.972838   61989 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0924 01:04:43.972928   61989 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0924 01:04:43.972944   61989 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0924 01:04:43.973027   61989 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0924 01:04:44.008191   61989 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0924 01:04:44.220628   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:04:44.364297   61989 cache_images.go:92] duration metric: took 1.328227964s to LoadCachedImages
	W0924 01:04:44.364505   61989 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0924 01:04:44.364539   61989 kubeadm.go:934] updating node { 192.168.83.3 8443 v1.20.0 crio true true} ...
	I0924 01:04:44.364681   61989 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-171598 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-171598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 01:04:44.364824   61989 ssh_runner.go:195] Run: crio config
	I0924 01:04:44.423360   61989 cni.go:84] Creating CNI manager for ""
	I0924 01:04:44.423382   61989 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:04:44.423393   61989 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 01:04:44.423412   61989 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.3 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-171598 NodeName:old-k8s-version-171598 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0924 01:04:44.423593   61989 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-171598"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 01:04:44.423671   61989 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0924 01:04:44.434069   61989 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 01:04:44.434143   61989 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 01:04:44.443807   61989 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (428 bytes)
	I0924 01:04:44.463473   61989 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 01:04:44.480449   61989 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2117 bytes)
	I0924 01:04:44.498520   61989 ssh_runner.go:195] Run: grep 192.168.83.3	control-plane.minikube.internal$ /etc/hosts
	I0924 01:04:44.503034   61989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.3	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:04:44.516699   61989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:04:44.643090   61989 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 01:04:44.660194   61989 certs.go:68] Setting up /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598 for IP: 192.168.83.3
	I0924 01:04:44.660216   61989 certs.go:194] generating shared ca certs ...
	I0924 01:04:44.660234   61989 certs.go:226] acquiring lock for ca certs: {Name:mk91de42c259e96c17f08dd58c70c00821bd0735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:04:44.660454   61989 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key
	I0924 01:04:44.660542   61989 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key
	I0924 01:04:44.660559   61989 certs.go:256] generating profile certs ...
	I0924 01:04:44.660682   61989 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/client.key
	I0924 01:04:44.660755   61989 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/apiserver.key.577554d3
	I0924 01:04:44.660816   61989 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/proxy-client.key
	I0924 01:04:44.660976   61989 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem (1338 bytes)
	W0924 01:04:44.661014   61989 certs.go:480] ignoring /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793_empty.pem, impossibly tiny 0 bytes
	I0924 01:04:44.661026   61989 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem (1675 bytes)
	I0924 01:04:44.661071   61989 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem (1082 bytes)
	I0924 01:04:44.661104   61989 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem (1123 bytes)
	I0924 01:04:44.661133   61989 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem (1679 bytes)
	I0924 01:04:44.661211   61989 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem (1708 bytes)
	I0924 01:04:44.662130   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 01:04:44.710279   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 01:04:44.736824   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 01:04:44.773120   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0924 01:04:44.801137   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0924 01:04:44.844946   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 01:04:44.880871   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 01:04:44.908630   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0924 01:04:44.947148   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 01:04:44.971925   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem --> /usr/share/ca-certificates/14793.pem (1338 bytes)
	I0924 01:04:45.000519   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /usr/share/ca-certificates/147932.pem (1708 bytes)
	I0924 01:04:45.034167   61989 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 01:04:45.054932   61989 ssh_runner.go:195] Run: openssl version
	I0924 01:04:45.062733   61989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14793.pem && ln -fs /usr/share/ca-certificates/14793.pem /etc/ssl/certs/14793.pem"
	I0924 01:04:45.076993   61989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14793.pem
	I0924 01:04:45.082104   61989 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 23:55 /usr/share/ca-certificates/14793.pem
	I0924 01:04:45.082175   61989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14793.pem
	I0924 01:04:45.088219   61989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14793.pem /etc/ssl/certs/51391683.0"
	I0924 01:04:45.099211   61989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147932.pem && ln -fs /usr/share/ca-certificates/147932.pem /etc/ssl/certs/147932.pem"
	I0924 01:04:45.111178   61989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147932.pem
	I0924 01:04:45.116551   61989 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 23:55 /usr/share/ca-certificates/147932.pem
	I0924 01:04:45.116624   61989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147932.pem
	I0924 01:04:45.122353   61989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147932.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 01:04:45.133490   61989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 01:04:45.144123   61989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:45.150437   61989 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:45.150498   61989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:45.157127   61989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
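(Editor's note: the three openssl/ln steps above follow the standard OpenSSL c_rehash convention — each CA in /usr/share/ca-certificates is symlinked into /etc/ssl/certs as <subject-hash>.0 so TLS clients can find it by hash. Below is a minimal local Go sketch of that convention, not minikube's actual code; the real run issues these commands over SSH via ssh_runner, and the paths in main are illustrative only.)

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCAByHash mirrors the "openssl x509 -hash -noout" + "ln -fs" pattern in the
// log: compute the subject hash of a CA certificate and symlink it into the
// certs directory as <hash>.0.
func linkCAByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replicate ln -fs (force) semantics
	return os.Symlink(certPath, link)
}

func main() {
	// Illustrative local invocation only.
	if err := linkCAByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}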
	I0924 01:04:45.168217   61989 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 01:04:45.172865   61989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 01:04:45.179177   61989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 01:04:45.184987   61989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 01:04:45.190927   61989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 01:04:45.197134   61989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 01:04:45.203170   61989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
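(Editor's note: the six "-checkend 86400" calls above verify that each control-plane certificate stays valid for at least another 24 hours; openssl exits non-zero when the certificate would expire within the given number of seconds. A minimal sketch of that check, run locally rather than over minikube's SSH runner; the path in main is illustrative.)

package main

import (
	"fmt"
	"os/exec"
)

// expiresWithin24h wraps `openssl x509 -checkend 86400`, which exits 0 when the
// certificate remains valid for at least 86400 more seconds and non-zero otherwise.
func expiresWithin24h(certPath string) bool {
	err := exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400").Run()
	return err != nil
}

func main() {
	if expiresWithin24h("/var/lib/minikube/certs/apiserver-kubelet-client.crt") {
		fmt.Println("certificate expires within 24h; it would need to be regenerated")
	}
}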
	I0924 01:04:45.209550   61989 kubeadm.go:392] StartCluster: {Name:old-k8s-version-171598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-171598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 01:04:45.209721   61989 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 01:04:45.209778   61989 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 01:04:45.247564   61989 cri.go:89] found id: ""
	I0924 01:04:45.247635   61989 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 01:04:45.258171   61989 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 01:04:45.258195   61989 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 01:04:45.258269   61989 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 01:04:45.268247   61989 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 01:04:45.269656   61989 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-171598" does not appear in /home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 01:04:45.270486   61989 kubeconfig.go:62] /home/jenkins/minikube-integration/19696-7623/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-171598" cluster setting kubeconfig missing "old-k8s-version-171598" context setting]
	I0924 01:04:45.271918   61989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/kubeconfig: {Name:mk3b29105ccc2e600682c419fcfd45ce5d22b5fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:04:45.277260   61989 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 01:04:45.287239   61989 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.83.3
	I0924 01:04:45.287271   61989 kubeadm.go:1160] stopping kube-system containers ...
	I0924 01:04:45.287281   61989 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0924 01:04:45.287325   61989 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 01:04:45.327991   61989 cri.go:89] found id: ""
	I0924 01:04:45.328071   61989 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 01:04:45.344693   61989 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 01:04:45.354414   61989 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 01:04:45.354439   61989 kubeadm.go:157] found existing configuration files:
	
	I0924 01:04:45.354499   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 01:04:45.363765   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 01:04:45.363838   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 01:04:45.373569   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 01:04:45.382401   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 01:04:45.382464   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 01:04:45.392710   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 01:04:45.402855   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 01:04:45.402919   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 01:04:45.413651   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 01:04:45.423818   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 01:04:45.423873   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
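(Editor's note: each grep/rm pair above is the stale-config cleanup step — a kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443; otherwise, including when the file is missing as in this run, it is removed so the kubeadm init phases below can regenerate it. A hedged local Go sketch of that per-file check; the helper name and local execution are assumptions, not minikube's API.)

package main

import (
	"fmt"
	"os"
	"os/exec"
)

const endpoint = "https://control-plane.minikube.internal:8443"

// cleanStaleConfig removes a kubeconfig that does not reference the expected
// control-plane endpoint, mirroring the grep/rm pairs in the log above.
// grep exits non-zero both when the endpoint and when the file is missing.
func cleanStaleConfig(path string) {
	if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
		_ = exec.Command("sudo", "rm", "-f", path).Run()
		fmt.Fprintf(os.Stderr, "removed stale %s\n", path)
	}
}

func main() {
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		cleanStaleConfig(f)
	}
}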
	I0924 01:04:45.434138   61989 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 01:04:45.444119   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:45.582409   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:46.245754   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:46.511218   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:46.608877   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:46.722521   61989 api_server.go:52] waiting for apiserver process to appear ...
	I0924 01:04:46.722607   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:47.222945   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:47.723437   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:48.223704   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:48.723517   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:49.223744   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:49.722691   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:50.222927   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:50.723331   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:51.223525   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:51.722715   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:52.223281   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:52.723378   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:53.222798   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:53.722883   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:54.223279   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:54.723155   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:55.222994   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:55.723628   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:56.222908   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:56.722701   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:57.222762   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:57.722814   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:58.222671   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:58.722746   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:59.222961   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:59.723335   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:00.223393   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:00.722739   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:01.222765   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:01.722729   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:02.223407   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:02.722799   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:03.223381   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:03.723427   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:04.223157   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:04.723069   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:05.223400   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:05.723739   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:06.223395   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:06.723345   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:07.222965   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:07.722795   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:08.222933   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:08.723687   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:09.223526   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:09.723684   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:10.223275   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:10.723534   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:11.223272   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:11.723442   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:12.223301   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:12.723151   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:13.223174   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:13.722780   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:14.222777   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:14.722987   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:15.223654   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:15.723449   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:16.223623   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:16.723625   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:17.223541   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:17.722702   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:18.222919   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:18.722982   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:19.222978   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:19.723547   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:20.223112   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:20.723562   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:21.223058   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:21.722680   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:22.223693   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:22.722716   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:23.223387   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:23.722910   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:24.223608   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:24.723144   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:25.223442   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:25.723025   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:26.222782   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:26.723271   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:27.223163   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:27.723283   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:28.222782   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:28.723174   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:29.222803   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:29.723029   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:30.223679   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:30.723058   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:31.223465   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:31.723438   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:32.223673   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:32.722674   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:33.223289   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:33.723651   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:34.223014   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:34.723518   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:35.222860   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:35.723642   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:36.222680   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:36.723015   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:37.222736   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:37.723185   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:38.223070   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:38.723237   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:39.223640   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:39.723622   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:40.222705   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:40.722909   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:41.223105   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:41.723166   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:42.223286   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:42.723048   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:43.223278   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:43.723301   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:44.222712   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:44.723191   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:45.223720   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:45.723044   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:46.223270   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
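(Editor's note: the block of pgrep calls above, spaced roughly 500 ms apart, is the wait for the kube-apiserver process to appear after the kubeadm init phases; here it runs for about a minute without a match and falls back to gathering logs below. A minimal sketch of such a poll loop, assuming local execution of pgrep rather than minikube's SSH runner.)

package main

import (
	"errors"
	"fmt"
	"time"

	"os/exec"
)

// waitForAPIServer polls `pgrep -xnf kube-apiserver.*minikube.*` every 500ms
// until the process shows up or the timeout elapses. pgrep exits non-zero when
// no matching process exists, so Run() returning nil means the apiserver is up.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return errors.New("timed out waiting for kube-apiserver process")
}

func main() {
	if err := waitForAPIServer(time.Minute); err != nil {
		fmt.Println(err) // the test run above falls back to log gathering at this point
	}
}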
	I0924 01:05:46.722902   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:05:46.722980   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:05:46.781519   61989 cri.go:89] found id: ""
	I0924 01:05:46.781551   61989 logs.go:276] 0 containers: []
	W0924 01:05:46.781565   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:05:46.781574   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:05:46.781630   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:05:46.815990   61989 cri.go:89] found id: ""
	I0924 01:05:46.816021   61989 logs.go:276] 0 containers: []
	W0924 01:05:46.816030   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:05:46.816035   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:05:46.816082   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:05:46.848951   61989 cri.go:89] found id: ""
	I0924 01:05:46.848980   61989 logs.go:276] 0 containers: []
	W0924 01:05:46.848989   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:05:46.848995   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:05:46.849062   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:05:46.880731   61989 cri.go:89] found id: ""
	I0924 01:05:46.880756   61989 logs.go:276] 0 containers: []
	W0924 01:05:46.880764   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:05:46.880770   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:05:46.880832   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:05:46.915975   61989 cri.go:89] found id: ""
	I0924 01:05:46.916004   61989 logs.go:276] 0 containers: []
	W0924 01:05:46.916014   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:05:46.916036   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:05:46.916105   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:05:46.954124   61989 cri.go:89] found id: ""
	I0924 01:05:46.954154   61989 logs.go:276] 0 containers: []
	W0924 01:05:46.954162   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:05:46.954168   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:05:46.954233   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:05:46.990454   61989 cri.go:89] found id: ""
	I0924 01:05:46.990489   61989 logs.go:276] 0 containers: []
	W0924 01:05:46.990498   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:05:46.990504   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:05:46.990573   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:05:47.024099   61989 cri.go:89] found id: ""
	I0924 01:05:47.024137   61989 logs.go:276] 0 containers: []
	W0924 01:05:47.024150   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:05:47.024161   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:05:47.024176   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:05:47.153050   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:05:47.153076   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:05:47.153109   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:05:47.223472   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:05:47.223511   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:05:47.267699   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:05:47.267729   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:05:47.314741   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:05:47.314773   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:05:49.828972   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:49.842301   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:05:49.842378   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:05:49.874632   61989 cri.go:89] found id: ""
	I0924 01:05:49.874659   61989 logs.go:276] 0 containers: []
	W0924 01:05:49.874669   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:05:49.874676   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:05:49.874734   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:05:49.912500   61989 cri.go:89] found id: ""
	I0924 01:05:49.912524   61989 logs.go:276] 0 containers: []
	W0924 01:05:49.912532   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:05:49.912543   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:05:49.912592   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:05:49.947297   61989 cri.go:89] found id: ""
	I0924 01:05:49.947320   61989 logs.go:276] 0 containers: []
	W0924 01:05:49.947328   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:05:49.947334   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:05:49.947395   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:05:49.983863   61989 cri.go:89] found id: ""
	I0924 01:05:49.983892   61989 logs.go:276] 0 containers: []
	W0924 01:05:49.983905   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:05:49.983915   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:05:49.983977   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:05:50.022997   61989 cri.go:89] found id: ""
	I0924 01:05:50.023031   61989 logs.go:276] 0 containers: []
	W0924 01:05:50.023044   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:05:50.023053   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:05:50.023109   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:05:50.057829   61989 cri.go:89] found id: ""
	I0924 01:05:50.057863   61989 logs.go:276] 0 containers: []
	W0924 01:05:50.057875   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:05:50.057882   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:05:50.057929   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:05:50.114599   61989 cri.go:89] found id: ""
	I0924 01:05:50.114620   61989 logs.go:276] 0 containers: []
	W0924 01:05:50.114628   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:05:50.114633   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:05:50.114677   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:05:50.147294   61989 cri.go:89] found id: ""
	I0924 01:05:50.147326   61989 logs.go:276] 0 containers: []
	W0924 01:05:50.147334   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:05:50.147345   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:05:50.147378   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:05:50.198362   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:05:50.198402   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:05:50.212381   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:05:50.212415   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:05:50.286216   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:05:50.286261   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:05:50.286279   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:05:50.366794   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:05:50.366827   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:05:52.908167   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:52.922279   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:05:52.922353   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:05:52.956677   61989 cri.go:89] found id: ""
	I0924 01:05:52.956708   61989 logs.go:276] 0 containers: []
	W0924 01:05:52.956720   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:05:52.956727   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:05:52.956778   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:05:52.990933   61989 cri.go:89] found id: ""
	I0924 01:05:52.990956   61989 logs.go:276] 0 containers: []
	W0924 01:05:52.990964   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:05:52.990970   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:05:52.991019   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:05:53.025729   61989 cri.go:89] found id: ""
	I0924 01:05:53.025758   61989 logs.go:276] 0 containers: []
	W0924 01:05:53.025768   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:05:53.025778   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:05:53.025838   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:05:53.060238   61989 cri.go:89] found id: ""
	I0924 01:05:53.060269   61989 logs.go:276] 0 containers: []
	W0924 01:05:53.060279   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:05:53.060287   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:05:53.060366   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:05:53.094166   61989 cri.go:89] found id: ""
	I0924 01:05:53.094200   61989 logs.go:276] 0 containers: []
	W0924 01:05:53.094212   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:05:53.094220   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:05:53.094289   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:05:53.129857   61989 cri.go:89] found id: ""
	I0924 01:05:53.129884   61989 logs.go:276] 0 containers: []
	W0924 01:05:53.129892   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:05:53.129898   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:05:53.129955   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:05:53.165857   61989 cri.go:89] found id: ""
	I0924 01:05:53.165890   61989 logs.go:276] 0 containers: []
	W0924 01:05:53.165898   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:05:53.165909   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:05:53.165970   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:05:53.203884   61989 cri.go:89] found id: ""
	I0924 01:05:53.203909   61989 logs.go:276] 0 containers: []
	W0924 01:05:53.203917   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:05:53.203926   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:05:53.203937   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:05:53.258001   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:05:53.258035   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:05:53.271584   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:05:53.271620   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:05:53.341791   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:05:53.341811   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:05:53.341824   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:05:53.424126   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:05:53.424170   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:05:55.962067   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:55.977964   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:05:55.978042   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:05:56.014681   61989 cri.go:89] found id: ""
	I0924 01:05:56.014716   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.014728   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:05:56.014736   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:05:56.014799   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:05:56.062547   61989 cri.go:89] found id: ""
	I0924 01:05:56.062576   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.062587   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:05:56.062606   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:05:56.062665   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:05:56.100938   61989 cri.go:89] found id: ""
	I0924 01:05:56.100960   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.100969   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:05:56.100974   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:05:56.101039   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:05:56.137694   61989 cri.go:89] found id: ""
	I0924 01:05:56.137722   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.137737   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:05:56.137744   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:05:56.137803   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:05:56.174876   61989 cri.go:89] found id: ""
	I0924 01:05:56.174911   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.174923   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:05:56.174931   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:05:56.174990   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:05:56.208870   61989 cri.go:89] found id: ""
	I0924 01:05:56.208895   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.208905   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:05:56.208913   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:05:56.208971   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:05:56.242476   61989 cri.go:89] found id: ""
	I0924 01:05:56.242508   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.242520   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:05:56.242528   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:05:56.242590   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:05:56.276185   61989 cri.go:89] found id: ""
	I0924 01:05:56.276214   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.276255   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:05:56.276267   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:05:56.276284   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:05:56.332755   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:05:56.332792   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:05:56.346279   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:05:56.346312   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:05:56.419725   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:05:56.419751   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:05:56.419766   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:05:56.500173   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:05:56.500208   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:05:59.083761   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:59.097184   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:05:59.097247   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:05:59.131734   61989 cri.go:89] found id: ""
	I0924 01:05:59.131764   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.131775   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:05:59.131782   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:05:59.131842   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:05:59.169402   61989 cri.go:89] found id: ""
	I0924 01:05:59.169429   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.169439   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:05:59.169446   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:05:59.169521   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:05:59.208235   61989 cri.go:89] found id: ""
	I0924 01:05:59.208260   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.208290   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:05:59.208298   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:05:59.208372   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:05:59.242314   61989 cri.go:89] found id: ""
	I0924 01:05:59.242345   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.242358   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:05:59.242367   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:05:59.242433   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:05:59.281300   61989 cri.go:89] found id: ""
	I0924 01:05:59.281327   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.281337   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:05:59.281344   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:05:59.281407   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:05:59.315336   61989 cri.go:89] found id: ""
	I0924 01:05:59.315369   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.315377   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:05:59.315386   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:05:59.315445   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:05:59.347678   61989 cri.go:89] found id: ""
	I0924 01:05:59.347708   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.347718   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:05:59.347726   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:05:59.347786   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:05:59.381296   61989 cri.go:89] found id: ""
	I0924 01:05:59.381328   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.381340   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:05:59.381352   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:05:59.381369   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:05:59.462939   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:05:59.462971   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:05:59.462990   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:05:59.544967   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:05:59.545004   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:05:59.585079   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:05:59.585106   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:05:59.637897   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:05:59.637940   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:02.153289   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:02.170582   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:02.170679   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:02.216700   61989 cri.go:89] found id: ""
	I0924 01:06:02.216722   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.216730   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:02.216736   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:02.216793   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:02.292664   61989 cri.go:89] found id: ""
	I0924 01:06:02.292695   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.292706   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:02.292714   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:02.292780   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:02.349447   61989 cri.go:89] found id: ""
	I0924 01:06:02.349470   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.349481   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:02.349487   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:02.349557   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:02.390491   61989 cri.go:89] found id: ""
	I0924 01:06:02.390514   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.390535   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:02.390543   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:02.390597   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:02.439330   61989 cri.go:89] found id: ""
	I0924 01:06:02.439355   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.439366   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:02.439373   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:02.439432   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:02.476400   61989 cri.go:89] found id: ""
	I0924 01:06:02.476431   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.476439   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:02.476445   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:02.476501   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:02.511946   61989 cri.go:89] found id: ""
	I0924 01:06:02.511975   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.511983   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:02.511989   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:02.512036   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:02.547526   61989 cri.go:89] found id: ""
	I0924 01:06:02.547554   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.547561   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:02.547570   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:02.547580   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:02.619784   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:02.619805   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:02.619816   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:02.698597   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:02.698636   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:02.741381   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:02.741419   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:02.797965   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:02.798023   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:05.312059   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:05.326556   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:05.326614   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:05.360973   61989 cri.go:89] found id: ""
	I0924 01:06:05.360999   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.361011   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:05.361018   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:05.361101   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:05.394720   61989 cri.go:89] found id: ""
	I0924 01:06:05.394750   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.394760   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:05.394767   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:05.394831   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:05.432564   61989 cri.go:89] found id: ""
	I0924 01:06:05.432592   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.432603   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:05.432611   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:05.432673   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:05.465424   61989 cri.go:89] found id: ""
	I0924 01:06:05.465467   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.465478   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:05.465484   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:05.465555   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:05.503656   61989 cri.go:89] found id: ""
	I0924 01:06:05.503684   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.503693   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:05.503699   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:05.503752   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:05.538128   61989 cri.go:89] found id: ""
	I0924 01:06:05.538160   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.538171   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:05.538179   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:05.538248   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:05.571310   61989 cri.go:89] found id: ""
	I0924 01:06:05.571336   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.571346   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:05.571353   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:05.571416   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:05.604038   61989 cri.go:89] found id: ""
	I0924 01:06:05.604062   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.604070   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:05.604079   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:05.604094   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:05.657025   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:05.657068   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:05.671457   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:05.671483   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:05.747671   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:05.747701   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:05.747718   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:05.833248   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:05.833285   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:08.372029   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:08.386497   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:08.386564   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:08.422998   61989 cri.go:89] found id: ""
	I0924 01:06:08.423029   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.423039   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:08.423047   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:08.423095   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:08.457009   61989 cri.go:89] found id: ""
	I0924 01:06:08.457037   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.457047   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:08.457052   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:08.457104   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:08.489694   61989 cri.go:89] found id: ""
	I0924 01:06:08.489728   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.489740   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:08.489750   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:08.489804   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:08.521819   61989 cri.go:89] found id: ""
	I0924 01:06:08.521845   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.521856   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:08.521864   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:08.521922   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:08.556422   61989 cri.go:89] found id: ""
	I0924 01:06:08.556453   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.556465   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:08.556472   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:08.556567   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:08.593802   61989 cri.go:89] found id: ""
	I0924 01:06:08.593828   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.593836   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:08.593842   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:08.593932   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:08.627569   61989 cri.go:89] found id: ""
	I0924 01:06:08.627592   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.627600   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:08.627605   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:08.627653   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:08.664728   61989 cri.go:89] found id: ""
	I0924 01:06:08.664758   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.664769   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:08.664780   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:08.664794   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:08.703546   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:08.703577   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:08.755612   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:08.755649   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:08.769957   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:08.769989   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:08.842732   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:08.842762   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:08.842789   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:11.427424   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:11.440709   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:11.440773   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:11.475537   61989 cri.go:89] found id: ""
	I0924 01:06:11.475564   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.475572   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:11.475577   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:11.475633   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:11.512231   61989 cri.go:89] found id: ""
	I0924 01:06:11.512276   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.512285   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:11.512292   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:11.512365   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:11.549809   61989 cri.go:89] found id: ""
	I0924 01:06:11.549840   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.549852   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:11.549858   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:11.549924   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:11.587451   61989 cri.go:89] found id: ""
	I0924 01:06:11.587481   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.587493   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:11.587500   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:11.587558   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:11.625109   61989 cri.go:89] found id: ""
	I0924 01:06:11.625135   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.625146   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:11.625154   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:11.625213   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:11.660577   61989 cri.go:89] found id: ""
	I0924 01:06:11.660604   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.660616   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:11.660624   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:11.660683   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:11.703527   61989 cri.go:89] found id: ""
	I0924 01:06:11.703557   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.703569   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:11.703577   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:11.703646   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:11.740766   61989 cri.go:89] found id: ""
	I0924 01:06:11.740798   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.740810   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:11.740820   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:11.740836   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:11.803402   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:11.803448   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:11.819144   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:11.819178   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:11.896152   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:11.896173   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:11.896187   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:11.986284   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:11.986340   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:14.523669   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:14.537923   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:14.537990   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:14.576092   61989 cri.go:89] found id: ""
	I0924 01:06:14.576128   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.576140   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:14.576148   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:14.576213   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:14.611985   61989 cri.go:89] found id: ""
	I0924 01:06:14.612020   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.612032   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:14.612039   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:14.612098   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:14.647640   61989 cri.go:89] found id: ""
	I0924 01:06:14.647667   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.647675   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:14.647682   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:14.647746   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:14.685089   61989 cri.go:89] found id: ""
	I0924 01:06:14.685128   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.685141   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:14.685150   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:14.685217   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:14.718694   61989 cri.go:89] found id: ""
	I0924 01:06:14.718729   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.718738   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:14.718745   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:14.718810   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:14.754874   61989 cri.go:89] found id: ""
	I0924 01:06:14.754916   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.754928   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:14.754936   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:14.754993   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:14.789580   61989 cri.go:89] found id: ""
	I0924 01:06:14.789608   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.789617   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:14.789625   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:14.789677   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:14.823173   61989 cri.go:89] found id: ""
	I0924 01:06:14.823201   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.823213   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:14.823224   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:14.823238   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:14.878398   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:14.878431   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:14.892466   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:14.892502   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:14.965978   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:14.966010   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:14.966065   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:15.050557   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:15.050600   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:17.596915   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:17.609585   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:17.609643   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:17.648275   61989 cri.go:89] found id: ""
	I0924 01:06:17.648305   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.648313   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:17.648319   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:17.648447   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:17.681447   61989 cri.go:89] found id: ""
	I0924 01:06:17.681473   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.681484   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:17.681491   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:17.681552   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:17.719202   61989 cri.go:89] found id: ""
	I0924 01:06:17.719226   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.719234   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:17.719240   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:17.719296   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:17.752601   61989 cri.go:89] found id: ""
	I0924 01:06:17.752629   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.752641   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:17.752649   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:17.752700   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:17.789905   61989 cri.go:89] found id: ""
	I0924 01:06:17.789934   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.789945   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:17.789952   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:17.790015   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:17.824174   61989 cri.go:89] found id: ""
	I0924 01:06:17.824205   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.824217   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:17.824237   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:17.824296   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:17.860647   61989 cri.go:89] found id: ""
	I0924 01:06:17.860674   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.860684   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:17.860691   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:17.860750   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:17.896392   61989 cri.go:89] found id: ""
	I0924 01:06:17.896414   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.896423   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:17.896437   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:17.896450   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:17.949230   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:17.949272   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:17.963125   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:17.963183   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:18.035092   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:18.035117   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:18.035134   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:18.117973   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:18.118011   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:20.657044   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:20.669862   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:20.669936   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:20.704672   61989 cri.go:89] found id: ""
	I0924 01:06:20.704703   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.704714   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:20.704722   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:20.704785   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:20.745777   61989 cri.go:89] found id: ""
	I0924 01:06:20.745801   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.745811   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:20.745818   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:20.745879   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:20.779673   61989 cri.go:89] found id: ""
	I0924 01:06:20.779704   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.779740   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:20.779749   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:20.779809   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:20.815959   61989 cri.go:89] found id: ""
	I0924 01:06:20.815983   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.815992   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:20.815998   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:20.816055   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:20.849203   61989 cri.go:89] found id: ""
	I0924 01:06:20.849232   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.849243   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:20.849251   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:20.849319   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:20.884303   61989 cri.go:89] found id: ""
	I0924 01:06:20.884353   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.884365   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:20.884373   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:20.884436   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:20.921217   61989 cri.go:89] found id: ""
	I0924 01:06:20.921242   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.921249   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:20.921255   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:20.921302   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:20.957555   61989 cri.go:89] found id: ""
	I0924 01:06:20.957590   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.957601   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:20.957613   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:20.957628   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:20.972591   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:20.972630   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:21.046506   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:21.046532   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:21.046547   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:21.129415   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:21.129453   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:21.168899   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:21.168924   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:23.720925   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:23.736893   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:23.736965   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:23.771874   61989 cri.go:89] found id: ""
	I0924 01:06:23.771901   61989 logs.go:276] 0 containers: []
	W0924 01:06:23.771909   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:23.771915   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:23.771976   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:23.806892   61989 cri.go:89] found id: ""
	I0924 01:06:23.806924   61989 logs.go:276] 0 containers: []
	W0924 01:06:23.806936   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:23.806943   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:23.806999   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:23.843661   61989 cri.go:89] found id: ""
	I0924 01:06:23.843686   61989 logs.go:276] 0 containers: []
	W0924 01:06:23.843694   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:23.843700   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:23.843753   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:23.878979   61989 cri.go:89] found id: ""
	I0924 01:06:23.879007   61989 logs.go:276] 0 containers: []
	W0924 01:06:23.879019   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:23.879027   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:23.879086   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:23.913893   61989 cri.go:89] found id: ""
	I0924 01:06:23.913916   61989 logs.go:276] 0 containers: []
	W0924 01:06:23.913925   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:23.913937   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:23.913982   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:23.947932   61989 cri.go:89] found id: ""
	I0924 01:06:23.947961   61989 logs.go:276] 0 containers: []
	W0924 01:06:23.947972   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:23.947980   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:23.948045   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:23.981366   61989 cri.go:89] found id: ""
	I0924 01:06:23.981391   61989 logs.go:276] 0 containers: []
	W0924 01:06:23.981402   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:23.981409   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:23.981467   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:24.014428   61989 cri.go:89] found id: ""
	I0924 01:06:24.014455   61989 logs.go:276] 0 containers: []
	W0924 01:06:24.014463   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:24.014471   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:24.014485   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:24.029585   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:24.029621   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:24.095926   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:24.095955   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:24.095975   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:24.174594   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:24.174635   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:24.213286   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:24.213311   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:26.764740   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:26.777184   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:26.777279   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:26.812704   61989 cri.go:89] found id: ""
	I0924 01:06:26.812735   61989 logs.go:276] 0 containers: []
	W0924 01:06:26.812746   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:26.812753   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:26.812811   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:26.849867   61989 cri.go:89] found id: ""
	I0924 01:06:26.849895   61989 logs.go:276] 0 containers: []
	W0924 01:06:26.849904   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:26.849909   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:26.849958   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:26.882856   61989 cri.go:89] found id: ""
	I0924 01:06:26.882878   61989 logs.go:276] 0 containers: []
	W0924 01:06:26.882885   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:26.882891   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:26.882936   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:26.921063   61989 cri.go:89] found id: ""
	I0924 01:06:26.921085   61989 logs.go:276] 0 containers: []
	W0924 01:06:26.921094   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:26.921100   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:26.921156   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:26.961154   61989 cri.go:89] found id: ""
	I0924 01:06:26.961182   61989 logs.go:276] 0 containers: []
	W0924 01:06:26.961194   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:26.961200   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:26.961257   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:26.994560   61989 cri.go:89] found id: ""
	I0924 01:06:26.994593   61989 logs.go:276] 0 containers: []
	W0924 01:06:26.994603   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:26.994612   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:26.994673   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:27.027967   61989 cri.go:89] found id: ""
	I0924 01:06:27.028013   61989 logs.go:276] 0 containers: []
	W0924 01:06:27.028026   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:27.028033   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:27.028096   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:27.063099   61989 cri.go:89] found id: ""
	I0924 01:06:27.063130   61989 logs.go:276] 0 containers: []
	W0924 01:06:27.063142   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:27.063153   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:27.063166   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:27.116237   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:27.116279   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:27.130785   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:27.130815   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:27.201931   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:27.201954   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:27.201970   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:27.282182   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:27.282217   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:29.825403   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:29.838890   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:29.838989   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:29.873651   61989 cri.go:89] found id: ""
	I0924 01:06:29.873678   61989 logs.go:276] 0 containers: []
	W0924 01:06:29.873690   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:29.873698   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:29.873758   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:29.909894   61989 cri.go:89] found id: ""
	I0924 01:06:29.909916   61989 logs.go:276] 0 containers: []
	W0924 01:06:29.909923   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:29.909929   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:29.909978   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:29.944850   61989 cri.go:89] found id: ""
	I0924 01:06:29.944878   61989 logs.go:276] 0 containers: []
	W0924 01:06:29.944886   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:29.944892   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:29.944945   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:29.981486   61989 cri.go:89] found id: ""
	I0924 01:06:29.981515   61989 logs.go:276] 0 containers: []
	W0924 01:06:29.981524   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:29.981532   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:29.981592   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:30.015138   61989 cri.go:89] found id: ""
	I0924 01:06:30.015165   61989 logs.go:276] 0 containers: []
	W0924 01:06:30.015176   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:30.015184   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:30.015256   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:30.051777   61989 cri.go:89] found id: ""
	I0924 01:06:30.051814   61989 logs.go:276] 0 containers: []
	W0924 01:06:30.051825   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:30.051834   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:30.051898   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:30.085573   61989 cri.go:89] found id: ""
	I0924 01:06:30.085598   61989 logs.go:276] 0 containers: []
	W0924 01:06:30.085607   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:30.085612   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:30.085661   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:30.122518   61989 cri.go:89] found id: ""
	I0924 01:06:30.122551   61989 logs.go:276] 0 containers: []
	W0924 01:06:30.122561   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:30.122570   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:30.122585   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:30.199075   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:30.199118   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:30.238259   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:30.238293   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:30.292145   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:30.292185   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:30.306404   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:30.306431   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:30.373959   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
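	(For reference, the diagnostic cycle logged above can be reproduced by hand from a shell on the node, e.g. after minikube ssh; this is a minimal sketch using only the commands already shown in the log, assuming CRI-O with crictl and the v1.20.0 kubectl binary are present at the logged paths:

	# is a kube-apiserver process running yet?
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'

	# does the CRI runtime know about any kube-apiserver container?
	sudo crictl ps -a --quiet --name=kube-apiserver

	# the same API call that keeps failing with "connection refused" on localhost:8443
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig

	All three returning nothing / refusing the connection is consistent with the apiserver never having come up on this node.)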
	I0924 01:06:32.875041   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:32.888358   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:32.888435   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:32.924466   61989 cri.go:89] found id: ""
	I0924 01:06:32.924499   61989 logs.go:276] 0 containers: []
	W0924 01:06:32.924519   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:32.924528   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:32.924584   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:32.960188   61989 cri.go:89] found id: ""
	I0924 01:06:32.960216   61989 logs.go:276] 0 containers: []
	W0924 01:06:32.960224   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:32.960231   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:32.960282   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:32.997612   61989 cri.go:89] found id: ""
	I0924 01:06:32.997641   61989 logs.go:276] 0 containers: []
	W0924 01:06:32.997649   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:32.997655   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:32.997704   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:33.034282   61989 cri.go:89] found id: ""
	I0924 01:06:33.034310   61989 logs.go:276] 0 containers: []
	W0924 01:06:33.034317   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:33.034325   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:33.034381   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:33.073832   61989 cri.go:89] found id: ""
	I0924 01:06:33.073861   61989 logs.go:276] 0 containers: []
	W0924 01:06:33.073870   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:33.073875   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:33.073959   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:33.107276   61989 cri.go:89] found id: ""
	I0924 01:06:33.107303   61989 logs.go:276] 0 containers: []
	W0924 01:06:33.107314   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:33.107323   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:33.107373   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:33.141062   61989 cri.go:89] found id: ""
	I0924 01:06:33.141091   61989 logs.go:276] 0 containers: []
	W0924 01:06:33.141104   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:33.141112   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:33.141174   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:33.177874   61989 cri.go:89] found id: ""
	I0924 01:06:33.177899   61989 logs.go:276] 0 containers: []
	W0924 01:06:33.177908   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:33.177916   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:33.177927   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:33.228324   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:33.228373   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:33.241324   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:33.241350   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:33.313115   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:33.313139   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:33.313151   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:33.392458   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:33.392512   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:35.932822   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:35.945918   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:35.945987   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:35.984400   61989 cri.go:89] found id: ""
	I0924 01:06:35.984438   61989 logs.go:276] 0 containers: []
	W0924 01:06:35.984448   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:35.984456   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:35.984528   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:36.022208   61989 cri.go:89] found id: ""
	I0924 01:06:36.022235   61989 logs.go:276] 0 containers: []
	W0924 01:06:36.022244   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:36.022252   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:36.022336   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:36.059153   61989 cri.go:89] found id: ""
	I0924 01:06:36.059176   61989 logs.go:276] 0 containers: []
	W0924 01:06:36.059184   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:36.059190   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:36.059247   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:36.094375   61989 cri.go:89] found id: ""
	I0924 01:06:36.094413   61989 logs.go:276] 0 containers: []
	W0924 01:06:36.094425   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:36.094434   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:36.094490   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:36.128662   61989 cri.go:89] found id: ""
	I0924 01:06:36.128691   61989 logs.go:276] 0 containers: []
	W0924 01:06:36.128702   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:36.128710   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:36.128762   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:36.160898   61989 cri.go:89] found id: ""
	I0924 01:06:36.160925   61989 logs.go:276] 0 containers: []
	W0924 01:06:36.160937   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:36.160945   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:36.161010   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:36.194421   61989 cri.go:89] found id: ""
	I0924 01:06:36.194448   61989 logs.go:276] 0 containers: []
	W0924 01:06:36.194460   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:36.194468   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:36.194537   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:36.230448   61989 cri.go:89] found id: ""
	I0924 01:06:36.230477   61989 logs.go:276] 0 containers: []
	W0924 01:06:36.230487   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:36.230498   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:36.230511   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:36.303029   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:36.303053   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:36.303067   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:36.406305   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:36.406338   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:36.444044   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:36.444084   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:36.494829   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:36.494873   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:39.009579   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:39.023867   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:39.023943   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:39.057426   61989 cri.go:89] found id: ""
	I0924 01:06:39.057458   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.057469   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:39.057477   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:39.057539   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:39.091421   61989 cri.go:89] found id: ""
	I0924 01:06:39.091444   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.091453   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:39.091459   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:39.091518   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:39.125407   61989 cri.go:89] found id: ""
	I0924 01:06:39.125437   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.125448   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:39.125455   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:39.125525   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:39.157146   61989 cri.go:89] found id: ""
	I0924 01:06:39.157170   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.157181   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:39.157189   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:39.157248   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:39.189474   61989 cri.go:89] found id: ""
	I0924 01:06:39.189501   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.189511   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:39.189518   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:39.189577   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:39.228034   61989 cri.go:89] found id: ""
	I0924 01:06:39.228063   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.228084   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:39.228099   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:39.228158   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:39.268289   61989 cri.go:89] found id: ""
	I0924 01:06:39.268317   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.268345   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:39.268354   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:39.268431   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:39.304964   61989 cri.go:89] found id: ""
	I0924 01:06:39.304988   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.304996   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:39.305005   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:39.305017   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:39.356193   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:39.356234   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:39.370782   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:39.370807   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:39.442395   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:39.442418   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:39.442429   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:39.518426   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:39.518466   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:42.059895   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:42.092776   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:42.092837   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:42.128508   61989 cri.go:89] found id: ""
	I0924 01:06:42.128534   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.128555   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:42.128565   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:42.128623   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:42.160961   61989 cri.go:89] found id: ""
	I0924 01:06:42.160989   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.161000   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:42.161008   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:42.161072   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:42.194212   61989 cri.go:89] found id: ""
	I0924 01:06:42.194260   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.194272   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:42.194280   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:42.194342   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:42.229284   61989 cri.go:89] found id: ""
	I0924 01:06:42.229312   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.229323   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:42.229331   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:42.229378   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:42.261952   61989 cri.go:89] found id: ""
	I0924 01:06:42.261986   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.261997   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:42.262010   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:42.262059   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:42.297096   61989 cri.go:89] found id: ""
	I0924 01:06:42.297125   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.297133   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:42.297139   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:42.297185   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:42.333066   61989 cri.go:89] found id: ""
	I0924 01:06:42.333095   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.333106   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:42.333114   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:42.333176   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:42.366798   61989 cri.go:89] found id: ""
	I0924 01:06:42.366829   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.366840   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:42.366852   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:42.366865   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:42.419424   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:42.419466   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:42.433814   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:42.433846   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:42.503817   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:42.503845   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:42.503860   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:42.583249   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:42.583289   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:45.123746   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:45.136292   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:45.136377   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:45.174390   61989 cri.go:89] found id: ""
	I0924 01:06:45.174420   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.174441   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:45.174449   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:45.174539   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:45.212394   61989 cri.go:89] found id: ""
	I0924 01:06:45.212422   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.212433   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:45.212441   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:45.212503   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:45.245831   61989 cri.go:89] found id: ""
	I0924 01:06:45.245853   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.245861   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:45.245867   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:45.245922   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:45.277587   61989 cri.go:89] found id: ""
	I0924 01:06:45.277615   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.277626   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:45.277634   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:45.277692   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:45.309715   61989 cri.go:89] found id: ""
	I0924 01:06:45.309749   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.309760   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:45.309768   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:45.309827   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:45.342799   61989 cri.go:89] found id: ""
	I0924 01:06:45.342831   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.342844   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:45.342853   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:45.342921   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:45.375377   61989 cri.go:89] found id: ""
	I0924 01:06:45.375404   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.375415   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:45.375423   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:45.375484   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:45.415395   61989 cri.go:89] found id: ""
	I0924 01:06:45.415422   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.415432   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:45.415444   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:45.415459   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:45.464381   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:45.464416   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:45.478142   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:45.478168   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:45.551211   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:45.551234   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:45.551244   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:45.635255   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:45.635297   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:48.173687   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:48.186635   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:48.186710   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:48.219544   61989 cri.go:89] found id: ""
	I0924 01:06:48.219566   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.219574   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:48.219583   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:48.219654   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:48.253594   61989 cri.go:89] found id: ""
	I0924 01:06:48.253618   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.253627   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:48.253634   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:48.253693   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:48.287991   61989 cri.go:89] found id: ""
	I0924 01:06:48.288019   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.288031   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:48.288041   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:48.288100   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:48.320738   61989 cri.go:89] found id: ""
	I0924 01:06:48.320767   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.320779   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:48.320787   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:48.320847   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:48.352197   61989 cri.go:89] found id: ""
	I0924 01:06:48.352225   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.352233   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:48.352243   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:48.352317   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:48.386157   61989 cri.go:89] found id: ""
	I0924 01:06:48.386187   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.386195   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:48.386202   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:48.386250   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:48.422372   61989 cri.go:89] found id: ""
	I0924 01:06:48.422398   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.422407   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:48.422413   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:48.422463   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:48.464007   61989 cri.go:89] found id: ""
	I0924 01:06:48.464032   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.464043   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:48.464054   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:48.464072   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:48.520533   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:48.520570   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:48.594453   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:48.594489   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:48.607309   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:48.607336   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:48.674078   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:48.674102   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:48.674117   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:51.256855   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:51.270305   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:51.270378   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:51.303450   61989 cri.go:89] found id: ""
	I0924 01:06:51.303487   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.303499   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:51.303508   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:51.303564   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:51.336959   61989 cri.go:89] found id: ""
	I0924 01:06:51.336987   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.337003   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:51.337010   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:51.337072   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:51.369210   61989 cri.go:89] found id: ""
	I0924 01:06:51.369239   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.369249   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:51.369260   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:51.369339   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:51.403595   61989 cri.go:89] found id: ""
	I0924 01:06:51.403645   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.403658   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:51.403666   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:51.403723   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:51.445459   61989 cri.go:89] found id: ""
	I0924 01:06:51.445493   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.445503   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:51.445510   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:51.445574   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:51.477615   61989 cri.go:89] found id: ""
	I0924 01:06:51.477642   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.477653   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:51.477660   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:51.477722   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:51.509737   61989 cri.go:89] found id: ""
	I0924 01:06:51.509766   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.509784   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:51.509792   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:51.509856   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:51.546451   61989 cri.go:89] found id: ""
	I0924 01:06:51.546479   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.546489   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:51.546501   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:51.546515   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:51.600277   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:51.600315   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:51.613403   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:51.613434   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:51.691645   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:51.691669   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:51.691688   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:51.772276   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:51.772312   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:54.313491   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:54.328265   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:54.328374   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:54.368091   61989 cri.go:89] found id: ""
	I0924 01:06:54.368117   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.368126   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:54.368131   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:54.368183   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:54.408272   61989 cri.go:89] found id: ""
	I0924 01:06:54.408300   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.408310   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:54.408318   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:54.408409   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:54.460467   61989 cri.go:89] found id: ""
	I0924 01:06:54.460489   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.460499   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:54.460506   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:54.460564   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:54.493310   61989 cri.go:89] found id: ""
	I0924 01:06:54.493334   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.493343   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:54.493349   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:54.493401   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:54.526772   61989 cri.go:89] found id: ""
	I0924 01:06:54.526799   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.526809   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:54.526817   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:54.526880   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:54.562235   61989 cri.go:89] found id: ""
	I0924 01:06:54.562264   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.562274   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:54.562283   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:54.562345   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:54.597755   61989 cri.go:89] found id: ""
	I0924 01:06:54.597784   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.597794   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:54.597803   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:54.597851   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:54.632225   61989 cri.go:89] found id: ""
	I0924 01:06:54.632282   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.632295   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:54.632305   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:54.632321   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:54.683849   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:54.683887   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:54.697395   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:54.697425   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:54.767577   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:54.767598   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:54.767609   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:54.842619   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:54.842655   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:57.381394   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:57.394078   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:57.394147   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:57.431241   61989 cri.go:89] found id: ""
	I0924 01:06:57.431266   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.431278   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:57.431284   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:57.431352   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:57.468954   61989 cri.go:89] found id: ""
	I0924 01:06:57.468983   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.468994   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:57.469001   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:57.469060   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:57.503518   61989 cri.go:89] found id: ""
	I0924 01:06:57.503550   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.503562   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:57.503570   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:57.503618   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:57.540432   61989 cri.go:89] found id: ""
	I0924 01:06:57.540464   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.540475   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:57.540483   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:57.540548   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:57.574142   61989 cri.go:89] found id: ""
	I0924 01:06:57.574175   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.574187   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:57.574195   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:57.574264   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:57.608505   61989 cri.go:89] found id: ""
	I0924 01:06:57.608528   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.608537   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:57.608543   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:57.608589   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:57.644273   61989 cri.go:89] found id: ""
	I0924 01:06:57.644305   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.644317   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:57.644344   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:57.644409   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:57.682023   61989 cri.go:89] found id: ""
	I0924 01:06:57.682050   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.682060   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:57.682072   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:57.682086   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:57.732537   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:57.732570   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:57.746632   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:57.746663   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:57.813904   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:57.813927   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:57.813947   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:57.891947   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:57.891992   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:00.432035   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:00.444886   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:00.444966   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:00.482653   61989 cri.go:89] found id: ""
	I0924 01:07:00.482683   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.482694   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:00.482702   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:00.482754   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:00.516404   61989 cri.go:89] found id: ""
	I0924 01:07:00.516441   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.516452   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:00.516463   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:00.516527   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:00.552938   61989 cri.go:89] found id: ""
	I0924 01:07:00.552963   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.552971   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:00.552977   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:00.553043   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:00.589143   61989 cri.go:89] found id: ""
	I0924 01:07:00.589170   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.589178   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:00.589184   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:00.589235   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:00.625023   61989 cri.go:89] found id: ""
	I0924 01:07:00.625047   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.625059   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:00.625066   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:00.625127   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:00.662904   61989 cri.go:89] found id: ""
	I0924 01:07:00.662936   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.662948   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:00.662959   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:00.663022   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:00.702892   61989 cri.go:89] found id: ""
	I0924 01:07:00.702921   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.702932   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:00.702938   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:00.702988   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:00.737010   61989 cri.go:89] found id: ""
	I0924 01:07:00.737039   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.737050   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:00.737061   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:00.737075   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:00.788093   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:00.788132   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:00.801354   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:00.801382   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:00.866830   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:00.866862   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:00.866878   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:00.950034   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:00.950076   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:03.492773   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:03.506158   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:03.506224   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:03.542369   61989 cri.go:89] found id: ""
	I0924 01:07:03.542397   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.542408   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:03.542416   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:03.542473   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:03.575019   61989 cri.go:89] found id: ""
	I0924 01:07:03.575046   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.575055   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:03.575060   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:03.575103   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:03.608576   61989 cri.go:89] found id: ""
	I0924 01:07:03.608603   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.608612   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:03.608619   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:03.608684   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:03.642359   61989 cri.go:89] found id: ""
	I0924 01:07:03.642389   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.642400   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:03.642407   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:03.642463   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:03.678192   61989 cri.go:89] found id: ""
	I0924 01:07:03.678216   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.678223   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:03.678229   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:03.678285   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:03.711773   61989 cri.go:89] found id: ""
	I0924 01:07:03.711795   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.711803   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:03.711809   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:03.711856   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:03.747792   61989 cri.go:89] found id: ""
	I0924 01:07:03.747819   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.747830   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:03.747838   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:03.747901   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:03.783284   61989 cri.go:89] found id: ""
	I0924 01:07:03.783312   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.783320   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:03.783331   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:03.783349   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:03.838704   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:03.838745   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:03.852650   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:03.852675   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:03.922474   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:03.922499   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:03.922511   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:03.997349   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:03.997388   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:06.537182   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:06.549745   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:06.549833   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:06.587879   61989 cri.go:89] found id: ""
	I0924 01:07:06.587910   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.587922   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:06.587930   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:06.587984   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:06.623419   61989 cri.go:89] found id: ""
	I0924 01:07:06.623447   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.623456   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:06.623462   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:06.623542   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:06.659228   61989 cri.go:89] found id: ""
	I0924 01:07:06.659260   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.659272   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:06.659280   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:06.659341   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:06.693300   61989 cri.go:89] found id: ""
	I0924 01:07:06.693330   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.693341   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:06.693349   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:06.693399   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:06.726237   61989 cri.go:89] found id: ""
	I0924 01:07:06.726267   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.726278   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:06.726286   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:06.726342   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:06.760627   61989 cri.go:89] found id: ""
	I0924 01:07:06.760659   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.760670   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:06.760677   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:06.760745   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:06.796029   61989 cri.go:89] found id: ""
	I0924 01:07:06.796062   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.796073   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:06.796081   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:06.796136   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:06.830197   61989 cri.go:89] found id: ""
	I0924 01:07:06.830230   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.830241   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:06.830251   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:06.830265   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:06.869055   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:06.869087   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:06.923840   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:06.923888   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:06.937510   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:06.937549   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:07.011461   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:07.011482   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:07.011496   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:09.591186   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:09.603900   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:09.603970   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:09.639003   61989 cri.go:89] found id: ""
	I0924 01:07:09.639035   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.639046   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:09.639055   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:09.639111   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:09.676494   61989 cri.go:89] found id: ""
	I0924 01:07:09.676528   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.676539   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:09.676547   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:09.676616   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:09.713080   61989 cri.go:89] found id: ""
	I0924 01:07:09.713103   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.713111   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:09.713117   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:09.713174   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:09.748425   61989 cri.go:89] found id: ""
	I0924 01:07:09.748449   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.748458   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:09.748465   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:09.748521   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:09.782526   61989 cri.go:89] found id: ""
	I0924 01:07:09.782559   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.782576   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:09.782584   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:09.782647   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:09.819137   61989 cri.go:89] found id: ""
	I0924 01:07:09.819159   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.819167   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:09.819173   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:09.819256   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:09.852953   61989 cri.go:89] found id: ""
	I0924 01:07:09.852976   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.852984   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:09.852989   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:09.853083   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:09.887254   61989 cri.go:89] found id: ""
	I0924 01:07:09.887282   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.887293   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:09.887304   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:09.887318   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:09.940029   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:09.940069   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:09.954298   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:09.954331   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:10.028926   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:10.028947   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:10.028957   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:10.116722   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:10.116761   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:12.654245   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:12.668635   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:12.668695   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:12.711575   61989 cri.go:89] found id: ""
	I0924 01:07:12.711601   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.711626   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:12.711632   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:12.711682   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:12.746104   61989 cri.go:89] found id: ""
	I0924 01:07:12.746131   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.746141   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:12.746149   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:12.746210   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:12.780229   61989 cri.go:89] found id: ""
	I0924 01:07:12.780260   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.780295   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:12.780303   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:12.780384   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:12.812968   61989 cri.go:89] found id: ""
	I0924 01:07:12.812998   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.813010   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:12.813024   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:12.813090   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:12.844212   61989 cri.go:89] found id: ""
	I0924 01:07:12.844241   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.844253   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:12.844260   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:12.844343   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:12.878662   61989 cri.go:89] found id: ""
	I0924 01:07:12.878690   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.878700   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:12.878707   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:12.878765   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:12.912782   61989 cri.go:89] found id: ""
	I0924 01:07:12.912805   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.912815   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:12.912822   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:12.912883   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:12.945694   61989 cri.go:89] found id: ""
	I0924 01:07:12.945726   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.945736   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:12.945747   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:12.945761   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:12.994841   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:12.994877   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:13.009582   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:13.009624   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:13.081972   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:13.081999   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:13.082017   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:13.162383   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:13.162420   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:15.704586   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:15.717608   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:15.717677   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:15.751794   61989 cri.go:89] found id: ""
	I0924 01:07:15.751829   61989 logs.go:276] 0 containers: []
	W0924 01:07:15.751840   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:15.751848   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:15.751916   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:15.791691   61989 cri.go:89] found id: ""
	I0924 01:07:15.791723   61989 logs.go:276] 0 containers: []
	W0924 01:07:15.791734   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:15.791742   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:15.791805   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:15.827934   61989 cri.go:89] found id: ""
	I0924 01:07:15.827957   61989 logs.go:276] 0 containers: []
	W0924 01:07:15.827965   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:15.827971   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:15.828017   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:15.862489   61989 cri.go:89] found id: ""
	I0924 01:07:15.862518   61989 logs.go:276] 0 containers: []
	W0924 01:07:15.862527   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:15.862532   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:15.862577   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:15.896754   61989 cri.go:89] found id: ""
	I0924 01:07:15.896786   61989 logs.go:276] 0 containers: []
	W0924 01:07:15.896798   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:15.896804   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:15.896857   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:15.934353   61989 cri.go:89] found id: ""
	I0924 01:07:15.934378   61989 logs.go:276] 0 containers: []
	W0924 01:07:15.934386   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:15.934392   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:15.934436   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:15.969204   61989 cri.go:89] found id: ""
	I0924 01:07:15.969237   61989 logs.go:276] 0 containers: []
	W0924 01:07:15.969246   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:15.969251   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:15.969309   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:16.008733   61989 cri.go:89] found id: ""
	I0924 01:07:16.008767   61989 logs.go:276] 0 containers: []
	W0924 01:07:16.008780   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:16.008792   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:16.008807   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:16.046993   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:16.047024   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:16.098768   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:16.098801   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:16.114429   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:16.114472   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:16.187450   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:16.187469   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:16.187489   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:18.767042   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:18.779825   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:18.779899   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:18.815410   61989 cri.go:89] found id: ""
	I0924 01:07:18.815436   61989 logs.go:276] 0 containers: []
	W0924 01:07:18.815447   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:18.815454   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:18.815523   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:18.849837   61989 cri.go:89] found id: ""
	I0924 01:07:18.849862   61989 logs.go:276] 0 containers: []
	W0924 01:07:18.849872   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:18.849880   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:18.849952   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:18.885183   61989 cri.go:89] found id: ""
	I0924 01:07:18.885215   61989 logs.go:276] 0 containers: []
	W0924 01:07:18.885227   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:18.885235   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:18.885314   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:18.922263   61989 cri.go:89] found id: ""
	I0924 01:07:18.922293   61989 logs.go:276] 0 containers: []
	W0924 01:07:18.922305   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:18.922312   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:18.922378   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:18.957235   61989 cri.go:89] found id: ""
	I0924 01:07:18.957263   61989 logs.go:276] 0 containers: []
	W0924 01:07:18.957272   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:18.957278   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:18.957331   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:18.989846   61989 cri.go:89] found id: ""
	I0924 01:07:18.989870   61989 logs.go:276] 0 containers: []
	W0924 01:07:18.989878   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:18.989884   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:18.989931   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:19.027264   61989 cri.go:89] found id: ""
	I0924 01:07:19.027298   61989 logs.go:276] 0 containers: []
	W0924 01:07:19.027308   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:19.027315   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:19.027373   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:19.065902   61989 cri.go:89] found id: ""
	I0924 01:07:19.065925   61989 logs.go:276] 0 containers: []
	W0924 01:07:19.065934   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:19.065944   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:19.065959   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:19.115515   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:19.115550   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:19.129761   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:19.129787   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:19.200299   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:19.200319   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:19.200351   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:19.282308   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:19.282360   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:21.819442   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:21.834106   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:21.834165   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:21.866953   61989 cri.go:89] found id: ""
	I0924 01:07:21.866988   61989 logs.go:276] 0 containers: []
	W0924 01:07:21.866999   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:21.867008   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:21.867085   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:21.902561   61989 cri.go:89] found id: ""
	I0924 01:07:21.902637   61989 logs.go:276] 0 containers: []
	W0924 01:07:21.902654   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:21.902663   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:21.902729   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:21.936883   61989 cri.go:89] found id: ""
	I0924 01:07:21.936926   61989 logs.go:276] 0 containers: []
	W0924 01:07:21.936937   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:21.936943   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:21.936995   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:21.975375   61989 cri.go:89] found id: ""
	I0924 01:07:21.975402   61989 logs.go:276] 0 containers: []
	W0924 01:07:21.975411   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:21.975417   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:21.975465   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:22.012782   61989 cri.go:89] found id: ""
	I0924 01:07:22.012811   61989 logs.go:276] 0 containers: []
	W0924 01:07:22.012822   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:22.012830   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:22.012890   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:22.049344   61989 cri.go:89] found id: ""
	I0924 01:07:22.049370   61989 logs.go:276] 0 containers: []
	W0924 01:07:22.049379   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:22.049385   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:22.049442   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:22.088187   61989 cri.go:89] found id: ""
	I0924 01:07:22.088219   61989 logs.go:276] 0 containers: []
	W0924 01:07:22.088230   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:22.088239   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:22.088324   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:22.123357   61989 cri.go:89] found id: ""
	I0924 01:07:22.123386   61989 logs.go:276] 0 containers: []
	W0924 01:07:22.123397   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:22.123408   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:22.123423   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:22.176794   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:22.176828   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:22.192550   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:22.192591   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:22.263854   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:22.263881   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:22.263898   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:22.341735   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:22.341778   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:24.879834   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:24.892429   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:24.892504   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:24.926600   61989 cri.go:89] found id: ""
	I0924 01:07:24.926629   61989 logs.go:276] 0 containers: []
	W0924 01:07:24.926636   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:24.926642   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:24.926689   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:24.960370   61989 cri.go:89] found id: ""
	I0924 01:07:24.960399   61989 logs.go:276] 0 containers: []
	W0924 01:07:24.960408   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:24.960415   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:24.960471   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:24.993503   61989 cri.go:89] found id: ""
	I0924 01:07:24.993532   61989 logs.go:276] 0 containers: []
	W0924 01:07:24.993542   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:24.993549   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:24.993611   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:25.028027   61989 cri.go:89] found id: ""
	I0924 01:07:25.028055   61989 logs.go:276] 0 containers: []
	W0924 01:07:25.028065   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:25.028073   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:25.028129   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:25.062947   61989 cri.go:89] found id: ""
	I0924 01:07:25.062981   61989 logs.go:276] 0 containers: []
	W0924 01:07:25.062999   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:25.063009   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:25.063077   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:25.098895   61989 cri.go:89] found id: ""
	I0924 01:07:25.098927   61989 logs.go:276] 0 containers: []
	W0924 01:07:25.098939   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:25.098946   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:25.098996   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:25.132786   61989 cri.go:89] found id: ""
	I0924 01:07:25.132814   61989 logs.go:276] 0 containers: []
	W0924 01:07:25.132824   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:25.132830   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:25.132882   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:25.167603   61989 cri.go:89] found id: ""
	I0924 01:07:25.167634   61989 logs.go:276] 0 containers: []
	W0924 01:07:25.167644   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:25.167656   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:25.167671   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:25.220265   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:25.220303   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:25.234840   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:25.234884   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:25.307459   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:25.307485   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:25.307499   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:25.386496   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:25.386537   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:27.926064   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:27.939398   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:27.939480   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:27.976184   61989 cri.go:89] found id: ""
	I0924 01:07:27.976215   61989 logs.go:276] 0 containers: []
	W0924 01:07:27.976256   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:27.976265   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:27.976348   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:28.009389   61989 cri.go:89] found id: ""
	I0924 01:07:28.009419   61989 logs.go:276] 0 containers: []
	W0924 01:07:28.009431   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:28.009438   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:28.009501   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:28.045562   61989 cri.go:89] found id: ""
	I0924 01:07:28.045594   61989 logs.go:276] 0 containers: []
	W0924 01:07:28.045605   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:28.045613   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:28.045677   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:28.085318   61989 cri.go:89] found id: ""
	I0924 01:07:28.085345   61989 logs.go:276] 0 containers: []
	W0924 01:07:28.085357   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:28.085364   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:28.085419   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:28.119582   61989 cri.go:89] found id: ""
	I0924 01:07:28.119607   61989 logs.go:276] 0 containers: []
	W0924 01:07:28.119617   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:28.119626   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:28.119690   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:28.151445   61989 cri.go:89] found id: ""
	I0924 01:07:28.151493   61989 logs.go:276] 0 containers: []
	W0924 01:07:28.151505   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:28.151513   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:28.151578   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:28.185966   61989 cri.go:89] found id: ""
	I0924 01:07:28.185997   61989 logs.go:276] 0 containers: []
	W0924 01:07:28.186009   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:28.186016   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:28.186078   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:28.219012   61989 cri.go:89] found id: ""
	I0924 01:07:28.219037   61989 logs.go:276] 0 containers: []
	W0924 01:07:28.219044   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:28.219052   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:28.219089   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:28.272186   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:28.272222   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:28.286346   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:28.286383   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:28.370949   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:28.370975   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:28.370985   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:28.453740   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:28.453775   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:30.993536   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:31.006297   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:31.006369   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:31.042081   61989 cri.go:89] found id: ""
	I0924 01:07:31.042114   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.042123   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:31.042129   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:31.042185   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:31.077119   61989 cri.go:89] found id: ""
	I0924 01:07:31.077144   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.077153   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:31.077159   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:31.077208   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:31.110148   61989 cri.go:89] found id: ""
	I0924 01:07:31.110179   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.110187   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:31.110193   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:31.110246   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:31.143551   61989 cri.go:89] found id: ""
	I0924 01:07:31.143578   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.143585   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:31.143591   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:31.143638   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:31.177212   61989 cri.go:89] found id: ""
	I0924 01:07:31.177262   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.177272   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:31.177279   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:31.177329   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:31.209290   61989 cri.go:89] found id: ""
	I0924 01:07:31.209321   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.209332   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:31.209340   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:31.209398   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:31.247299   61989 cri.go:89] found id: ""
	I0924 01:07:31.247334   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.247346   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:31.247355   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:31.247419   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:31.285010   61989 cri.go:89] found id: ""
	I0924 01:07:31.285047   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.285060   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:31.285072   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:31.285087   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:31.323819   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:31.323855   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:31.378348   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:31.378388   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:31.393944   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:31.393983   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:31.464940   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:31.464966   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:31.464978   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:34.042144   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:34.055183   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:34.055268   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:34.103044   61989 cri.go:89] found id: ""
	I0924 01:07:34.103075   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.103086   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:34.103094   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:34.103162   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:34.141379   61989 cri.go:89] found id: ""
	I0924 01:07:34.141412   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.141424   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:34.141432   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:34.141493   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:34.179545   61989 cri.go:89] found id: ""
	I0924 01:07:34.179574   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.179582   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:34.179588   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:34.179655   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:34.217683   61989 cri.go:89] found id: ""
	I0924 01:07:34.217719   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.217739   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:34.217748   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:34.217806   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:34.257597   61989 cri.go:89] found id: ""
	I0924 01:07:34.257630   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.257642   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:34.257651   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:34.257723   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:34.295410   61989 cri.go:89] found id: ""
	I0924 01:07:34.295440   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.295452   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:34.295460   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:34.295523   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:34.331309   61989 cri.go:89] found id: ""
	I0924 01:07:34.331340   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.331350   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:34.331358   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:34.331460   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:34.367549   61989 cri.go:89] found id: ""
	I0924 01:07:34.367580   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.367590   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:34.367601   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:34.367615   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:34.421785   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:34.421823   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:34.435162   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:34.435198   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:34.504051   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:34.504073   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:34.504090   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:34.582343   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:34.582384   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:37.124727   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:37.139374   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:37.139431   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:37.176474   61989 cri.go:89] found id: ""
	I0924 01:07:37.176500   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.176510   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:37.176515   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:37.176560   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:37.209944   61989 cri.go:89] found id: ""
	I0924 01:07:37.209971   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.209983   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:37.209990   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:37.210055   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:37.242894   61989 cri.go:89] found id: ""
	I0924 01:07:37.242923   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.242933   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:37.242941   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:37.242996   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:37.276517   61989 cri.go:89] found id: ""
	I0924 01:07:37.276547   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.276558   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:37.276566   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:37.276626   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:37.310169   61989 cri.go:89] found id: ""
	I0924 01:07:37.310196   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.310207   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:37.310214   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:37.310282   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:37.342992   61989 cri.go:89] found id: ""
	I0924 01:07:37.343019   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.343027   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:37.343035   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:37.343088   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:37.375024   61989 cri.go:89] found id: ""
	I0924 01:07:37.375051   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.375062   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:37.375069   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:37.375137   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:37.409736   61989 cri.go:89] found id: ""
	I0924 01:07:37.409761   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.409768   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:37.409776   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:37.409787   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:37.474744   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:37.474767   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:37.474783   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:37.551479   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:37.551515   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:37.590597   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:37.590632   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:37.642781   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:37.642820   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:40.156480   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:40.171002   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:40.171079   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:40.207383   61989 cri.go:89] found id: ""
	I0924 01:07:40.207410   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.207418   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:40.207424   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:40.207474   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:40.245535   61989 cri.go:89] found id: ""
	I0924 01:07:40.245560   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.245568   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:40.245574   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:40.245620   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:40.283858   61989 cri.go:89] found id: ""
	I0924 01:07:40.283888   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.283900   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:40.283909   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:40.283982   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:40.320527   61989 cri.go:89] found id: ""
	I0924 01:07:40.320555   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.320566   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:40.320575   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:40.320633   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:40.354364   61989 cri.go:89] found id: ""
	I0924 01:07:40.354390   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.354397   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:40.354403   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:40.354473   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:40.388407   61989 cri.go:89] found id: ""
	I0924 01:07:40.388431   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.388439   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:40.388444   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:40.388512   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:40.423809   61989 cri.go:89] found id: ""
	I0924 01:07:40.423838   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.423847   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:40.423853   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:40.423908   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:40.459160   61989 cri.go:89] found id: ""
	I0924 01:07:40.459188   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.459199   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:40.459210   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:40.459223   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:40.530418   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:40.530456   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:40.551644   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:40.551683   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:40.634564   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:40.634587   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:40.634599   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:40.717897   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:40.717934   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:43.257992   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:43.272134   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:43.272204   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:43.306747   61989 cri.go:89] found id: ""
	I0924 01:07:43.306775   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.306797   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:43.306806   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:43.306923   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:43.342922   61989 cri.go:89] found id: ""
	I0924 01:07:43.342954   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.342963   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:43.342974   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:43.343028   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:43.378666   61989 cri.go:89] found id: ""
	I0924 01:07:43.378694   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.378703   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:43.378709   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:43.378760   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:43.414348   61989 cri.go:89] found id: ""
	I0924 01:07:43.414376   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.414387   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:43.414395   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:43.414457   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:43.447687   61989 cri.go:89] found id: ""
	I0924 01:07:43.447718   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.447728   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:43.447735   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:43.447804   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:43.482166   61989 cri.go:89] found id: ""
	I0924 01:07:43.482195   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.482205   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:43.482211   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:43.482275   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:43.518112   61989 cri.go:89] found id: ""
	I0924 01:07:43.518146   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.518159   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:43.518167   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:43.518231   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:43.553853   61989 cri.go:89] found id: ""
	I0924 01:07:43.553875   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.553883   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:43.553891   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:43.553902   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:43.603410   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:43.603445   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:43.616413   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:43.616438   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:43.685077   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:43.685101   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:43.685113   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:43.760758   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:43.760803   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:46.300532   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:46.315982   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:46.316050   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:46.356523   61989 cri.go:89] found id: ""
	I0924 01:07:46.356554   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.356565   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:46.356573   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:46.356633   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:46.405399   61989 cri.go:89] found id: ""
	I0924 01:07:46.405429   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.405439   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:46.405447   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:46.405512   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:46.454819   61989 cri.go:89] found id: ""
	I0924 01:07:46.454844   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.454853   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:46.454858   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:46.454918   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:46.499094   61989 cri.go:89] found id: ""
	I0924 01:07:46.499123   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.499134   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:46.499142   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:46.499196   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:46.532976   61989 cri.go:89] found id: ""
	I0924 01:07:46.533006   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.533017   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:46.533025   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:46.533083   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:46.565488   61989 cri.go:89] found id: ""
	I0924 01:07:46.565523   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.565534   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:46.565546   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:46.565610   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:46.598457   61989 cri.go:89] found id: ""
	I0924 01:07:46.598486   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.598496   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:46.598503   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:46.598551   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:46.631892   61989 cri.go:89] found id: ""
	I0924 01:07:46.631920   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.631931   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:46.631941   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:46.631956   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:46.709966   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:46.710013   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:46.749154   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:46.749184   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:46.798192   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:46.798228   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:46.811902   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:46.811951   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:46.885878   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
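	The block above is one complete probe cycle: the wait loop checks each control-plane component with `crictl ps -a --quiet --name=...`, finds no containers, then gathers kubelet, dmesg, node-description, CRI-O, and container-status output before polling again. Below is a minimal Go sketch of that cycle, not minikube's actual implementation: it assumes it runs directly on the node, the command strings are copied from the "Run:" lines above, and the file name and `run` helper are illustrative.

	```go
	// probe_sketch.go - illustrative sketch of the probe cycle seen in the log above
	// (not minikube's code). Command strings are taken verbatim from the "Run:" lines.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes a command the same way the log shows: via /bin/bash -c.
	func run(cmd string) (string, error) {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		return string(out), err
	}

	func main() {
		// The components the loop looks for, in the order they appear above.
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			out, _ := run("sudo crictl ps -a --quiet --name=" + name)
			if out == "" {
				fmt.Printf("no container found matching %q\n", name)
			}
		}

		// The log sources gathered when no containers are running yet.
		gather := []string{
			`sudo journalctl -u kubelet -n 400`,
			`sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
			`sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`,
			`sudo journalctl -u crio -n 400`,
			"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
		}
		for _, cmd := range gather {
			if _, err := run(cmd); err != nil {
				fmt.Printf("gathering %q failed: %v\n", cmd, err)
			}
		}
	}
	```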
	I0924 01:07:49.386775   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:49.399324   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:49.399383   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:49.437061   61989 cri.go:89] found id: ""
	I0924 01:07:49.437092   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.437104   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:49.437111   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:49.437160   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:49.470882   61989 cri.go:89] found id: ""
	I0924 01:07:49.470908   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.470919   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:49.470927   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:49.470989   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:49.506894   61989 cri.go:89] found id: ""
	I0924 01:07:49.506926   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.506938   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:49.506947   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:49.507018   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:49.540768   61989 cri.go:89] found id: ""
	I0924 01:07:49.540800   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.540813   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:49.540822   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:49.540888   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:49.576486   61989 cri.go:89] found id: ""
	I0924 01:07:49.576515   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.576523   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:49.576530   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:49.576579   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:49.612456   61989 cri.go:89] found id: ""
	I0924 01:07:49.612479   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.612487   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:49.612495   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:49.612542   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:49.646085   61989 cri.go:89] found id: ""
	I0924 01:07:49.646118   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.646127   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:49.646132   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:49.646178   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:49.682538   61989 cri.go:89] found id: ""
	I0924 01:07:49.682565   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.682574   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:49.682583   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:49.682594   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:49.721791   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:49.721817   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:49.774842   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:49.774889   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:49.789082   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:49.789129   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:49.866437   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:49.866464   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:49.866478   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:52.445166   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:52.459060   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:52.459126   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:52.496521   61989 cri.go:89] found id: ""
	I0924 01:07:52.496550   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.496562   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:52.496571   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:52.496652   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:52.533575   61989 cri.go:89] found id: ""
	I0924 01:07:52.533600   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.533608   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:52.533615   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:52.533693   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:52.571666   61989 cri.go:89] found id: ""
	I0924 01:07:52.571693   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.571703   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:52.571710   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:52.571758   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:52.603929   61989 cri.go:89] found id: ""
	I0924 01:07:52.603957   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.603968   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:52.603976   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:52.604034   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:52.635581   61989 cri.go:89] found id: ""
	I0924 01:07:52.635607   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.635614   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:52.635620   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:52.635669   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:52.673865   61989 cri.go:89] found id: ""
	I0924 01:07:52.673889   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.673897   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:52.673903   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:52.673953   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:52.709885   61989 cri.go:89] found id: ""
	I0924 01:07:52.709910   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.709918   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:52.709925   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:52.709986   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:52.746409   61989 cri.go:89] found id: ""
	I0924 01:07:52.746439   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.746450   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:52.746461   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:52.746475   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:52.798020   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:52.798054   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:52.811940   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:52.811967   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:52.888091   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:52.888114   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:52.888129   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:52.968955   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:52.969000   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:55.507204   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:55.520581   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:55.520657   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:55.555772   61989 cri.go:89] found id: ""
	I0924 01:07:55.555809   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.555821   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:55.555828   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:55.555880   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:55.593765   61989 cri.go:89] found id: ""
	I0924 01:07:55.593791   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.593802   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:55.593808   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:55.593866   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:55.630292   61989 cri.go:89] found id: ""
	I0924 01:07:55.630325   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.630337   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:55.630344   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:55.630408   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:55.665703   61989 cri.go:89] found id: ""
	I0924 01:07:55.665730   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.665741   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:55.665748   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:55.665807   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:55.701911   61989 cri.go:89] found id: ""
	I0924 01:07:55.701938   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.701949   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:55.701957   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:55.702020   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:55.734343   61989 cri.go:89] found id: ""
	I0924 01:07:55.734373   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.734385   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:55.734394   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:55.734460   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:55.768606   61989 cri.go:89] found id: ""
	I0924 01:07:55.768633   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.768645   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:55.768653   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:55.768716   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:55.800720   61989 cri.go:89] found id: ""
	I0924 01:07:55.800747   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.800757   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:55.800768   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:55.800782   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:55.851702   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:55.851737   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:55.865657   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:55.865687   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:55.940175   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:55.940197   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:55.940207   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:56.015832   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:56.015870   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:58.557571   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:58.572208   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:58.572274   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:58.605081   61989 cri.go:89] found id: ""
	I0924 01:07:58.605109   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.605121   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:58.605128   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:58.605185   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:58.641518   61989 cri.go:89] found id: ""
	I0924 01:07:58.641548   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.641559   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:58.641566   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:58.641617   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:58.680623   61989 cri.go:89] found id: ""
	I0924 01:07:58.680653   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.680664   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:58.680675   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:58.680735   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:58.713658   61989 cri.go:89] found id: ""
	I0924 01:07:58.713684   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.713693   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:58.713700   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:58.713754   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:58.746264   61989 cri.go:89] found id: ""
	I0924 01:07:58.746298   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.746307   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:58.746313   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:58.746358   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:58.779812   61989 cri.go:89] found id: ""
	I0924 01:07:58.779846   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.779912   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:58.779924   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:58.779984   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:58.813203   61989 cri.go:89] found id: ""
	I0924 01:07:58.813236   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.813245   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:58.813252   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:58.813303   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:58.845872   61989 cri.go:89] found id: ""
	I0924 01:07:58.845898   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.845906   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:58.845915   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:58.845925   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:58.897480   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:58.897515   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:58.912904   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:58.912936   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:58.982882   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:58.982908   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:58.982921   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:59.058495   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:59.058535   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:01.596672   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:01.609550   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:01.609625   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:01.648819   61989 cri.go:89] found id: ""
	I0924 01:08:01.648847   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.648857   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:01.648864   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:01.649000   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:01.685419   61989 cri.go:89] found id: ""
	I0924 01:08:01.685450   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.685458   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:01.685464   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:01.685533   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:01.720426   61989 cri.go:89] found id: ""
	I0924 01:08:01.720455   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.720464   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:01.720473   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:01.720537   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:01.755292   61989 cri.go:89] found id: ""
	I0924 01:08:01.755316   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.755324   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:01.755331   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:01.755398   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:01.788673   61989 cri.go:89] found id: ""
	I0924 01:08:01.788703   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.788713   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:01.788721   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:01.788789   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:01.824724   61989 cri.go:89] found id: ""
	I0924 01:08:01.824761   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.824773   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:01.824781   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:01.824838   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:01.858492   61989 cri.go:89] found id: ""
	I0924 01:08:01.858531   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.858542   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:01.858556   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:01.858623   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:01.892135   61989 cri.go:89] found id: ""
	I0924 01:08:01.892167   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.892177   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:01.892192   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:01.892205   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:01.905820   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:01.905849   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:01.977998   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:01.978026   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:01.978039   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:02.060441   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:02.060480   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:02.100029   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:02.100057   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:04.653124   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:04.665726   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:04.665784   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:04.700755   61989 cri.go:89] found id: ""
	I0924 01:08:04.700785   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.700796   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:04.700804   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:04.700858   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:04.736955   61989 cri.go:89] found id: ""
	I0924 01:08:04.736983   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.736992   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:04.736998   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:04.737051   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:04.770940   61989 cri.go:89] found id: ""
	I0924 01:08:04.770969   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.770977   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:04.770983   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:04.771051   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:04.805376   61989 cri.go:89] found id: ""
	I0924 01:08:04.805403   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.805411   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:04.805417   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:04.805471   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:04.840995   61989 cri.go:89] found id: ""
	I0924 01:08:04.841016   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.841024   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:04.841030   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:04.841077   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:04.875418   61989 cri.go:89] found id: ""
	I0924 01:08:04.875449   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.875460   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:04.875468   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:04.875546   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:04.910675   61989 cri.go:89] found id: ""
	I0924 01:08:04.910696   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.910704   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:04.910710   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:04.910764   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:04.945531   61989 cri.go:89] found id: ""
	I0924 01:08:04.945562   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.945570   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:04.945578   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:04.945589   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:04.997696   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:04.997734   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:05.011296   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:05.011329   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:05.087878   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:05.087905   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:05.087919   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:05.164073   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:05.164111   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:07.713496   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:07.726590   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:07.726649   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:07.760050   61989 cri.go:89] found id: ""
	I0924 01:08:07.760081   61989 logs.go:276] 0 containers: []
	W0924 01:08:07.760092   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:07.760100   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:07.760152   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:07.797709   61989 cri.go:89] found id: ""
	I0924 01:08:07.797736   61989 logs.go:276] 0 containers: []
	W0924 01:08:07.797744   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:07.797749   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:07.797803   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:07.836351   61989 cri.go:89] found id: ""
	I0924 01:08:07.836380   61989 logs.go:276] 0 containers: []
	W0924 01:08:07.836391   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:07.836399   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:07.836471   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:07.871133   61989 cri.go:89] found id: ""
	I0924 01:08:07.871159   61989 logs.go:276] 0 containers: []
	W0924 01:08:07.871170   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:07.871178   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:07.871229   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:07.906640   61989 cri.go:89] found id: ""
	I0924 01:08:07.906663   61989 logs.go:276] 0 containers: []
	W0924 01:08:07.906673   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:07.906682   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:07.906741   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:07.940919   61989 cri.go:89] found id: ""
	I0924 01:08:07.940945   61989 logs.go:276] 0 containers: []
	W0924 01:08:07.940953   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:07.940959   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:07.941018   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:07.975533   61989 cri.go:89] found id: ""
	I0924 01:08:07.975562   61989 logs.go:276] 0 containers: []
	W0924 01:08:07.975570   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:07.975576   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:07.975627   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:08.009137   61989 cri.go:89] found id: ""
	I0924 01:08:08.009163   61989 logs.go:276] 0 containers: []
	W0924 01:08:08.009173   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:08.009183   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:08.009196   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:08.065199   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:08.065252   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:08.080159   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:08.080188   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:08.154003   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:08.154025   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:08.154039   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:08.235522   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:08.235561   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:10.774666   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:10.787704   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:10.787775   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:10.822721   61989 cri.go:89] found id: ""
	I0924 01:08:10.822759   61989 logs.go:276] 0 containers: []
	W0924 01:08:10.822770   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:10.822781   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:10.822852   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:10.857113   61989 cri.go:89] found id: ""
	I0924 01:08:10.857138   61989 logs.go:276] 0 containers: []
	W0924 01:08:10.857146   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:10.857152   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:10.857201   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:10.890974   61989 cri.go:89] found id: ""
	I0924 01:08:10.891001   61989 logs.go:276] 0 containers: []
	W0924 01:08:10.891012   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:10.891020   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:10.891086   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:10.929771   61989 cri.go:89] found id: ""
	I0924 01:08:10.929793   61989 logs.go:276] 0 containers: []
	W0924 01:08:10.929800   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:10.929806   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:10.929851   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:10.961988   61989 cri.go:89] found id: ""
	I0924 01:08:10.962015   61989 logs.go:276] 0 containers: []
	W0924 01:08:10.962027   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:10.962035   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:10.962100   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:10.993591   61989 cri.go:89] found id: ""
	I0924 01:08:10.993622   61989 logs.go:276] 0 containers: []
	W0924 01:08:10.993633   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:10.993639   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:10.993691   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:11.032468   61989 cri.go:89] found id: ""
	I0924 01:08:11.032496   61989 logs.go:276] 0 containers: []
	W0924 01:08:11.032506   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:11.032514   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:11.032576   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:11.066900   61989 cri.go:89] found id: ""
	I0924 01:08:11.066924   61989 logs.go:276] 0 containers: []
	W0924 01:08:11.066931   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:11.066939   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:11.066950   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:11.136412   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:11.136443   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:11.136459   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:11.218326   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:11.218361   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:11.260695   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:11.260728   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:11.310102   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:11.310133   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:13.825540   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:13.838208   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:13.838283   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:13.874539   61989 cri.go:89] found id: ""
	I0924 01:08:13.874567   61989 logs.go:276] 0 containers: []
	W0924 01:08:13.874576   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:13.874581   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:13.874628   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:13.911818   61989 cri.go:89] found id: ""
	I0924 01:08:13.911839   61989 logs.go:276] 0 containers: []
	W0924 01:08:13.911846   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:13.911852   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:13.911897   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:13.944766   61989 cri.go:89] found id: ""
	I0924 01:08:13.944789   61989 logs.go:276] 0 containers: []
	W0924 01:08:13.944797   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:13.944802   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:13.944847   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:13.980712   61989 cri.go:89] found id: ""
	I0924 01:08:13.980742   61989 logs.go:276] 0 containers: []
	W0924 01:08:13.980752   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:13.980758   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:13.980817   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:14.016103   61989 cri.go:89] found id: ""
	I0924 01:08:14.016130   61989 logs.go:276] 0 containers: []
	W0924 01:08:14.016138   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:14.016143   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:14.016192   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:14.051884   61989 cri.go:89] found id: ""
	I0924 01:08:14.051929   61989 logs.go:276] 0 containers: []
	W0924 01:08:14.051943   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:14.051954   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:14.052046   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:14.088928   61989 cri.go:89] found id: ""
	I0924 01:08:14.088954   61989 logs.go:276] 0 containers: []
	W0924 01:08:14.088964   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:14.088970   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:14.089020   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:14.123057   61989 cri.go:89] found id: ""
	I0924 01:08:14.123083   61989 logs.go:276] 0 containers: []
	W0924 01:08:14.123091   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:14.123099   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:14.123112   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:14.174249   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:14.174287   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:14.188409   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:14.188442   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:14.258906   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:14.258932   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:14.258942   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:14.340891   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:14.340928   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:16.877728   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:16.890548   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:16.890617   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:16.924414   61989 cri.go:89] found id: ""
	I0924 01:08:16.924439   61989 logs.go:276] 0 containers: []
	W0924 01:08:16.924451   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:16.924458   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:16.924510   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:16.960295   61989 cri.go:89] found id: ""
	I0924 01:08:16.960323   61989 logs.go:276] 0 containers: []
	W0924 01:08:16.960344   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:16.960352   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:16.960405   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:16.993171   61989 cri.go:89] found id: ""
	I0924 01:08:16.993204   61989 logs.go:276] 0 containers: []
	W0924 01:08:16.993216   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:16.993224   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:16.993287   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:17.028122   61989 cri.go:89] found id: ""
	I0924 01:08:17.028150   61989 logs.go:276] 0 containers: []
	W0924 01:08:17.028160   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:17.028169   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:17.028261   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:17.068401   61989 cri.go:89] found id: ""
	I0924 01:08:17.068440   61989 logs.go:276] 0 containers: []
	W0924 01:08:17.068451   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:17.068458   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:17.068530   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:17.104250   61989 cri.go:89] found id: ""
	I0924 01:08:17.104275   61989 logs.go:276] 0 containers: []
	W0924 01:08:17.104283   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:17.104299   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:17.104370   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:17.139178   61989 cri.go:89] found id: ""
	I0924 01:08:17.139201   61989 logs.go:276] 0 containers: []
	W0924 01:08:17.139209   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:17.139215   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:17.139288   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:17.172677   61989 cri.go:89] found id: ""
	I0924 01:08:17.172703   61989 logs.go:276] 0 containers: []
	W0924 01:08:17.172712   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:17.172727   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:17.172742   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:17.222039   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:17.222082   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:17.235342   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:17.235371   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:17.300313   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:17.300350   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:17.300366   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:17.382465   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:17.382517   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:19.924928   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:19.941406   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:19.941496   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:19.976196   61989 cri.go:89] found id: ""
	I0924 01:08:19.976224   61989 logs.go:276] 0 containers: []
	W0924 01:08:19.976238   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:19.976247   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:19.976314   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:20.019652   61989 cri.go:89] found id: ""
	I0924 01:08:20.019680   61989 logs.go:276] 0 containers: []
	W0924 01:08:20.019692   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:20.019699   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:20.019757   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:20.055098   61989 cri.go:89] found id: ""
	I0924 01:08:20.055123   61989 logs.go:276] 0 containers: []
	W0924 01:08:20.055130   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:20.055135   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:20.055183   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:20.091428   61989 cri.go:89] found id: ""
	I0924 01:08:20.091458   61989 logs.go:276] 0 containers: []
	W0924 01:08:20.091469   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:20.091476   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:20.091532   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:20.123608   61989 cri.go:89] found id: ""
	I0924 01:08:20.123642   61989 logs.go:276] 0 containers: []
	W0924 01:08:20.123653   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:20.123678   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:20.123745   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:20.165885   61989 cri.go:89] found id: ""
	I0924 01:08:20.165913   61989 logs.go:276] 0 containers: []
	W0924 01:08:20.165926   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:20.165934   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:20.165985   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:20.199300   61989 cri.go:89] found id: ""
	I0924 01:08:20.199329   61989 logs.go:276] 0 containers: []
	W0924 01:08:20.199341   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:20.199348   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:20.199415   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:20.237201   61989 cri.go:89] found id: ""
	I0924 01:08:20.237253   61989 logs.go:276] 0 containers: []
	W0924 01:08:20.237262   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:20.237271   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:20.237284   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:20.285008   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:20.285049   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:20.298974   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:20.299014   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:20.385765   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:20.385793   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:20.385807   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:20.460715   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:20.460752   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:23.000163   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:23.014755   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:23.014828   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:23.048877   61989 cri.go:89] found id: ""
	I0924 01:08:23.048909   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.048921   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:23.048979   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:23.049049   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:23.085614   61989 cri.go:89] found id: ""
	I0924 01:08:23.085643   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.085650   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:23.085658   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:23.085718   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:23.122027   61989 cri.go:89] found id: ""
	I0924 01:08:23.122060   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.122071   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:23.122078   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:23.122136   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:23.156838   61989 cri.go:89] found id: ""
	I0924 01:08:23.156868   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.156879   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:23.156887   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:23.156947   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:23.191528   61989 cri.go:89] found id: ""
	I0924 01:08:23.191569   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.191579   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:23.191586   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:23.191651   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:23.227627   61989 cri.go:89] found id: ""
	I0924 01:08:23.227651   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.227659   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:23.227665   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:23.227709   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:23.261937   61989 cri.go:89] found id: ""
	I0924 01:08:23.261968   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.261980   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:23.261988   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:23.262039   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:23.297947   61989 cri.go:89] found id: ""
	I0924 01:08:23.297973   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.297986   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:23.297997   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:23.298009   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:23.337783   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:23.337811   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:23.390767   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:23.390808   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:23.404787   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:23.404814   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:23.478768   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:23.478788   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:23.478801   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:26.060593   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:26.085071   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:26.085137   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:26.121785   61989 cri.go:89] found id: ""
	I0924 01:08:26.121814   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.121826   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:26.121834   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:26.121900   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:26.167942   61989 cri.go:89] found id: ""
	I0924 01:08:26.167971   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.167980   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:26.167991   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:26.168054   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:26.206461   61989 cri.go:89] found id: ""
	I0924 01:08:26.206496   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.206506   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:26.206513   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:26.206582   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:26.243094   61989 cri.go:89] found id: ""
	I0924 01:08:26.243125   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.243136   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:26.243144   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:26.243206   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:26.279303   61989 cri.go:89] found id: ""
	I0924 01:08:26.279331   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.279341   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:26.279348   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:26.279407   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:26.311840   61989 cri.go:89] found id: ""
	I0924 01:08:26.311869   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.311880   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:26.311888   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:26.311954   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:26.345994   61989 cri.go:89] found id: ""
	I0924 01:08:26.346019   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.346027   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:26.346033   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:26.346082   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:26.380570   61989 cri.go:89] found id: ""
	I0924 01:08:26.380601   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.380610   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:26.380619   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:26.380630   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:26.429958   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:26.429993   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:26.443278   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:26.443312   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:26.516353   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:26.516375   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:26.516390   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:26.603310   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:26.603345   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:29.142531   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:29.156548   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:29.156634   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:29.191351   61989 cri.go:89] found id: ""
	I0924 01:08:29.191378   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.191389   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:29.191396   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:29.191451   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:29.232112   61989 cri.go:89] found id: ""
	I0924 01:08:29.232141   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.232152   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:29.232159   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:29.232214   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:29.266082   61989 cri.go:89] found id: ""
	I0924 01:08:29.266104   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.266112   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:29.266118   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:29.266178   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:29.299777   61989 cri.go:89] found id: ""
	I0924 01:08:29.299802   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.299812   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:29.299817   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:29.299883   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:29.342709   61989 cri.go:89] found id: ""
	I0924 01:08:29.342740   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.342749   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:29.342756   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:29.342816   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:29.381255   61989 cri.go:89] found id: ""
	I0924 01:08:29.381303   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.381312   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:29.381318   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:29.381375   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:29.414998   61989 cri.go:89] found id: ""
	I0924 01:08:29.415028   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.415036   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:29.415043   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:29.415101   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:29.448553   61989 cri.go:89] found id: ""
	I0924 01:08:29.448580   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.448589   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:29.448598   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:29.448608   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:29.534936   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:29.535001   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:29.573554   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:29.573584   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:29.623590   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:29.623626   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:29.636141   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:29.636167   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:29.700591   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
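Condensed, the probe cycle repeated in the log entries above amounts to the following shell checks (a sketch using only the commands, component names, and flags shown in this run; crictl is assumed to be installed on the node):

	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet kubernetes-dashboard; do
	  sudo crictl ps -a --quiet --name="$c"    # empty output = no container found for that component
	done
	sudo journalctl -u kubelet -n 400          # kubelet logs
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	sudo journalctl -u crio -n 400             # CRI-O logs
	sudo crictl ps -a || sudo docker ps -a     # container status
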
	I0924 01:08:32.201184   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:32.215034   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:32.215102   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:32.250990   61989 cri.go:89] found id: ""
	I0924 01:08:32.251016   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.251026   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:32.251033   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:32.251104   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:32.284448   61989 cri.go:89] found id: ""
	I0924 01:08:32.284483   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.284494   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:32.284504   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:32.284570   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:32.317979   61989 cri.go:89] found id: ""
	I0924 01:08:32.318004   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.318015   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:32.318022   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:32.318078   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:32.352057   61989 cri.go:89] found id: ""
	I0924 01:08:32.352082   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.352093   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:32.352101   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:32.352163   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:32.385459   61989 cri.go:89] found id: ""
	I0924 01:08:32.385482   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.385490   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:32.385496   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:32.385544   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:32.421189   61989 cri.go:89] found id: ""
	I0924 01:08:32.421217   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.421227   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:32.421235   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:32.421307   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:32.464375   61989 cri.go:89] found id: ""
	I0924 01:08:32.464399   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.464406   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:32.464412   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:32.464457   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:32.512716   61989 cri.go:89] found id: ""
	I0924 01:08:32.512742   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.512753   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:32.512763   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:32.512788   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:32.598271   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:32.598293   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:32.598305   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:32.674197   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:32.674233   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:32.715065   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:32.715092   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:32.767522   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:32.767565   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:35.281678   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:35.296302   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:35.296390   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:35.336341   61989 cri.go:89] found id: ""
	I0924 01:08:35.336370   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.336381   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:35.336397   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:35.336454   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:35.373090   61989 cri.go:89] found id: ""
	I0924 01:08:35.373118   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.373127   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:35.373135   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:35.373201   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:35.413628   61989 cri.go:89] found id: ""
	I0924 01:08:35.413660   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.413668   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:35.413674   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:35.413720   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:35.446564   61989 cri.go:89] found id: ""
	I0924 01:08:35.446592   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.446603   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:35.446610   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:35.446669   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:35.478389   61989 cri.go:89] found id: ""
	I0924 01:08:35.478424   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.478435   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:35.478444   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:35.478515   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:35.513992   61989 cri.go:89] found id: ""
	I0924 01:08:35.514015   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.514023   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:35.514029   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:35.514085   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:35.556442   61989 cri.go:89] found id: ""
	I0924 01:08:35.556471   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.556481   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:35.556489   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:35.556571   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:35.594205   61989 cri.go:89] found id: ""
	I0924 01:08:35.594228   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.594236   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:35.594244   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:35.594254   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:35.637601   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:35.637640   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:35.691674   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:35.691711   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:35.705223   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:35.705261   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:35.784000   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:35.784021   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:35.784036   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:38.370232   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:38.383287   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:38.383358   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:38.417528   61989 cri.go:89] found id: ""
	I0924 01:08:38.417556   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.417564   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:38.417571   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:38.417619   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:38.459788   61989 cri.go:89] found id: ""
	I0924 01:08:38.459814   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.459821   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:38.459828   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:38.459883   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:38.494017   61989 cri.go:89] found id: ""
	I0924 01:08:38.494050   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.494059   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:38.494065   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:38.494135   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:38.526894   61989 cri.go:89] found id: ""
	I0924 01:08:38.526924   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.526935   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:38.526942   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:38.527000   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:38.563831   61989 cri.go:89] found id: ""
	I0924 01:08:38.563859   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.563876   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:38.563884   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:38.563950   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:38.596066   61989 cri.go:89] found id: ""
	I0924 01:08:38.596095   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.596106   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:38.596114   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:38.596172   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:38.630123   61989 cri.go:89] found id: ""
	I0924 01:08:38.630147   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.630157   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:38.630165   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:38.630223   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:38.664714   61989 cri.go:89] found id: ""
	I0924 01:08:38.664743   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.664754   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:38.664765   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:38.664782   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:38.718770   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:38.718802   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:38.732878   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:38.732906   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:38.806441   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:38.806469   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:38.806485   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:38.884416   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:38.884456   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:41.423782   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:41.436827   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:41.436899   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:41.468283   61989 cri.go:89] found id: ""
	I0924 01:08:41.468316   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.468342   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:41.468353   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:41.468412   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:41.504348   61989 cri.go:89] found id: ""
	I0924 01:08:41.504380   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.504402   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:41.504410   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:41.504470   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:41.544785   61989 cri.go:89] found id: ""
	I0924 01:08:41.544809   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.544818   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:41.544825   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:41.544883   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:41.582924   61989 cri.go:89] found id: ""
	I0924 01:08:41.582954   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.582965   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:41.582973   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:41.583037   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:41.618220   61989 cri.go:89] found id: ""
	I0924 01:08:41.618243   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.618253   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:41.618260   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:41.618329   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:41.653369   61989 cri.go:89] found id: ""
	I0924 01:08:41.653392   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.653400   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:41.653416   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:41.653477   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:41.687036   61989 cri.go:89] found id: ""
	I0924 01:08:41.687058   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.687069   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:41.687077   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:41.687133   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:41.720701   61989 cri.go:89] found id: ""
	I0924 01:08:41.720732   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.720744   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:41.720756   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:41.720776   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:41.798436   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:41.798486   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:41.842639   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:41.842674   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:41.893053   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:41.893086   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:41.907757   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:41.907784   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:41.973466   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:44.474071   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:44.487057   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:44.487119   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:44.521772   61989 cri.go:89] found id: ""
	I0924 01:08:44.521813   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.521835   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:44.521843   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:44.521905   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:44.554928   61989 cri.go:89] found id: ""
	I0924 01:08:44.554956   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.554966   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:44.554977   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:44.555042   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:44.594246   61989 cri.go:89] found id: ""
	I0924 01:08:44.594279   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.594292   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:44.594298   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:44.594344   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:44.629779   61989 cri.go:89] found id: ""
	I0924 01:08:44.629807   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.629819   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:44.629827   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:44.629884   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:44.671671   61989 cri.go:89] found id: ""
	I0924 01:08:44.671694   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.671701   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:44.671707   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:44.671772   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:44.710875   61989 cri.go:89] found id: ""
	I0924 01:08:44.710910   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.710922   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:44.710931   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:44.711000   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:44.744345   61989 cri.go:89] found id: ""
	I0924 01:08:44.744381   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.744389   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:44.744395   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:44.744442   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:44.780771   61989 cri.go:89] found id: ""
	I0924 01:08:44.780797   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.780804   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:44.780812   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:44.780824   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:44.834902   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:44.834958   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:44.848503   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:44.848540   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:44.923117   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:44.923138   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:44.923150   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:45.003806   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:45.003840   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:47.541843   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:47.555428   61989 kubeadm.go:597] duration metric: took 4m2.297219084s to restartPrimaryControlPlane
	W0924 01:08:47.555528   61989 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0924 01:08:47.555560   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0924 01:08:49.123410   61989 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.567825503s)
	I0924 01:08:49.123501   61989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 01:08:49.142686   61989 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 01:08:49.154484   61989 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 01:08:49.166734   61989 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 01:08:49.166759   61989 kubeadm.go:157] found existing configuration files:
	
	I0924 01:08:49.166813   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 01:08:49.178374   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 01:08:49.178517   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 01:08:49.188871   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 01:08:49.200190   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 01:08:49.200258   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 01:08:49.212895   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 01:08:49.225205   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 01:08:49.225276   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 01:08:49.237828   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 01:08:49.249686   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 01:08:49.249751   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
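The four grep/rm pairs above can be condensed into a single loop (a sketch; the file names and control-plane endpoint are taken from this log):

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' /etc/kubernetes/$f.conf \
	    || sudo rm -f /etc/kubernetes/$f.conf   # drop configs that don't reference the expected endpoint
	done
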
	I0924 01:08:49.262505   61989 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 01:08:49.338624   61989 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0924 01:08:49.338712   61989 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 01:08:49.509271   61989 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 01:08:49.509489   61989 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 01:08:49.509636   61989 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0924 01:08:49.724434   61989 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 01:08:49.726458   61989 out.go:235]   - Generating certificates and keys ...
	I0924 01:08:49.726563   61989 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 01:08:49.726639   61989 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 01:08:49.726737   61989 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 01:08:49.726812   61989 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 01:08:49.727078   61989 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 01:08:49.727375   61989 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 01:08:49.728123   61989 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 01:08:49.729254   61989 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 01:08:49.730178   61989 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 01:08:49.732548   61989 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 01:08:49.732604   61989 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 01:08:49.732676   61989 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 01:08:49.938623   61989 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 01:08:50.774207   61989 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 01:08:51.022535   61989 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 01:08:51.148690   61989 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 01:08:51.168786   61989 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 01:08:51.170070   61989 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 01:08:51.170150   61989 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 01:08:51.342671   61989 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 01:08:51.344458   61989 out.go:235]   - Booting up control plane ...
	I0924 01:08:51.344607   61989 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 01:08:51.353468   61989 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 01:08:51.356949   61989 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 01:08:51.358082   61989 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 01:08:51.364468   61989 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0924 01:09:31.365725   61989 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0924 01:09:31.366444   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:09:31.366704   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:09:36.367209   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:09:36.367654   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:09:46.367945   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:09:46.368128   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:10:06.368912   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:10:06.369182   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:10:46.371109   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:10:46.371309   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:10:46.371318   61989 kubeadm.go:310] 
	I0924 01:10:46.371352   61989 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0924 01:10:46.371455   61989 kubeadm.go:310] 		timed out waiting for the condition
	I0924 01:10:46.371490   61989 kubeadm.go:310] 
	I0924 01:10:46.371546   61989 kubeadm.go:310] 	This error is likely caused by:
	I0924 01:10:46.371592   61989 kubeadm.go:310] 		- The kubelet is not running
	I0924 01:10:46.371734   61989 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0924 01:10:46.371751   61989 kubeadm.go:310] 
	I0924 01:10:46.371888   61989 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0924 01:10:46.371936   61989 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0924 01:10:46.371978   61989 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0924 01:10:46.371988   61989 kubeadm.go:310] 
	I0924 01:10:46.372124   61989 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0924 01:10:46.372253   61989 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0924 01:10:46.372262   61989 kubeadm.go:310] 
	I0924 01:10:46.372442   61989 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0924 01:10:46.372578   61989 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0924 01:10:46.372680   61989 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0924 01:10:46.372756   61989 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0924 01:10:46.372765   61989 kubeadm.go:310] 
	I0924 01:10:46.373578   61989 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 01:10:46.373675   61989 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0924 01:10:46.373790   61989 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0924 01:10:46.373938   61989 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0924 01:10:46.373987   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0924 01:10:46.834432   61989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 01:10:46.851214   61989 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 01:10:46.862648   61989 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 01:10:46.862675   61989 kubeadm.go:157] found existing configuration files:
	
	I0924 01:10:46.862733   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 01:10:46.873005   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 01:10:46.873073   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 01:10:46.884007   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 01:10:46.893944   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 01:10:46.894016   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 01:10:46.905036   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 01:10:46.914953   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 01:10:46.915024   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 01:10:46.924881   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 01:10:46.935132   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 01:10:46.935192   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 01:10:46.945837   61989 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 01:10:47.018713   61989 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0924 01:10:47.018861   61989 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 01:10:47.159920   61989 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 01:10:47.160042   61989 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 01:10:47.160168   61989 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0924 01:10:47.349360   61989 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 01:10:47.351645   61989 out.go:235]   - Generating certificates and keys ...
	I0924 01:10:47.351763   61989 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 01:10:47.351918   61989 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 01:10:47.352040   61989 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 01:10:47.352118   61989 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 01:10:47.352205   61989 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 01:10:47.352298   61989 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 01:10:47.352401   61989 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 01:10:47.352481   61989 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 01:10:47.352574   61989 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 01:10:47.352662   61989 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 01:10:47.352705   61989 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 01:10:47.352767   61989 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 01:10:47.467301   61989 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 01:10:47.622085   61989 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 01:10:47.726807   61989 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 01:10:47.951249   61989 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 01:10:47.973392   61989 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 01:10:47.974396   61989 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 01:10:47.974440   61989 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 01:10:48.127629   61989 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 01:10:48.129312   61989 out.go:235]   - Booting up control plane ...
	I0924 01:10:48.129446   61989 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 01:10:48.139821   61989 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 01:10:48.143120   61989 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 01:10:48.144038   61989 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 01:10:48.146275   61989 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0924 01:11:28.148929   61989 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0924 01:11:28.149086   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:11:28.149360   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:11:33.150102   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:11:33.150283   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:11:43.151281   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:11:43.151540   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:12:03.152338   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:12:03.152562   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:12:43.151221   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:12:43.151503   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:12:43.151532   61989 kubeadm.go:310] 
	I0924 01:12:43.151585   61989 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0924 01:12:43.151645   61989 kubeadm.go:310] 		timed out waiting for the condition
	I0924 01:12:43.151655   61989 kubeadm.go:310] 
	I0924 01:12:43.151729   61989 kubeadm.go:310] 	This error is likely caused by:
	I0924 01:12:43.151779   61989 kubeadm.go:310] 		- The kubelet is not running
	I0924 01:12:43.151940   61989 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0924 01:12:43.151954   61989 kubeadm.go:310] 
	I0924 01:12:43.152095   61989 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0924 01:12:43.152154   61989 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0924 01:12:43.152201   61989 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0924 01:12:43.152207   61989 kubeadm.go:310] 
	I0924 01:12:43.152294   61989 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0924 01:12:43.152411   61989 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0924 01:12:43.152424   61989 kubeadm.go:310] 
	I0924 01:12:43.152565   61989 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0924 01:12:43.152653   61989 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0924 01:12:43.152718   61989 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0924 01:12:43.152794   61989 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0924 01:12:43.152802   61989 kubeadm.go:310] 
	I0924 01:12:43.153600   61989 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 01:12:43.153699   61989 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0924 01:12:43.153757   61989 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0924 01:12:43.153808   61989 kubeadm.go:394] duration metric: took 7m57.944266289s to StartCluster
	I0924 01:12:43.153845   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:12:43.153894   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:12:43.199866   61989 cri.go:89] found id: ""
	I0924 01:12:43.199896   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.199908   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:12:43.199916   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:12:43.199975   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:12:43.235387   61989 cri.go:89] found id: ""
	I0924 01:12:43.235420   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.235432   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:12:43.235441   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:12:43.235513   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:12:43.271255   61989 cri.go:89] found id: ""
	I0924 01:12:43.271290   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.271303   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:12:43.271312   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:12:43.271380   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:12:43.305842   61989 cri.go:89] found id: ""
	I0924 01:12:43.305870   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.305882   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:12:43.305891   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:12:43.305952   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:12:43.341956   61989 cri.go:89] found id: ""
	I0924 01:12:43.341983   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.342005   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:12:43.342013   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:12:43.342093   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:12:43.376362   61989 cri.go:89] found id: ""
	I0924 01:12:43.376399   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.376421   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:12:43.376431   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:12:43.376487   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:12:43.409351   61989 cri.go:89] found id: ""
	I0924 01:12:43.409378   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.409387   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:12:43.409392   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:12:43.409459   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:12:43.442446   61989 cri.go:89] found id: ""
	I0924 01:12:43.442479   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.442487   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:12:43.442497   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:12:43.442507   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:12:43.498980   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:12:43.499020   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:12:43.520090   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:12:43.520120   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:12:43.612212   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:12:43.612242   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:12:43.612255   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:12:43.727355   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:12:43.727395   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0924 01:12:43.770163   61989 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0924 01:12:43.770217   61989 out.go:270] * 
	* 
	W0924 01:12:43.770282   61989 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0924 01:12:43.770297   61989 out.go:270] * 
	* 
	W0924 01:12:43.771298   61989 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 01:12:43.775708   61989 out.go:201] 
	W0924 01:12:43.777139   61989 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0924 01:12:43.777186   61989 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0924 01:12:43.777214   61989 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0924 01:12:43.779580   61989 out.go:201] 

                                                
                                                
** /stderr **
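The failure captured above is kubeadm timing out in the wait-control-plane phase because the kubelet never answered its health check on localhost:10248. The kubeadm output already names the useful checks (kubelet status, kubelet journal, CRI-O containers), the stderr warning notes that the kubelet service was not enabled, and minikube suggests retrying with the systemd cgroup driver. A minimal troubleshooting sketch based only on those suggestions follows; wrapping the node-side commands in "minikube ssh" is an assumption made here for convenience, and the --extra-config retry is minikube's own suggestion rather than a verified fix for this run:

	# Kubelet health on the VM, as the kubeadm output suggests
	minikube ssh -p old-k8s-version-171598 "sudo systemctl status kubelet"
	minikube ssh -p old-k8s-version-171598 "sudo journalctl -xeu kubelet"
	# The stderr warning said the kubelet service is not enabled
	minikube ssh -p old-k8s-version-171598 "sudo systemctl enable kubelet.service"
	# List kube containers via CRI-O to spot a crashed control-plane component
	minikube ssh -p old-k8s-version-171598 "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# Retry the same start with the cgroup-driver suggestion from the log
	out/minikube-linux-amd64 start -p old-k8s-version-171598 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd
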
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-171598 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-171598 -n old-k8s-version-171598
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-171598 -n old-k8s-version-171598: exit status 2 (237.897809ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-171598 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-171598 logs -n 25: (1.692487345s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p old-k8s-version-171598                              | old-k8s-version-171598       | jenkins | v1.34.0 | 24 Sep 24 00:54 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-075175                              | stopped-upgrade-075175       | jenkins | v1.34.0 | 24 Sep 24 00:54 UTC | 24 Sep 24 00:55 UTC |
	| start   | -p no-preload-674057                                   | no-preload-674057            | jenkins | v1.34.0 | 24 Sep 24 00:55 UTC | 24 Sep 24 00:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-619300                           | kubernetes-upgrade-619300    | jenkins | v1.34.0 | 24 Sep 24 00:55 UTC | 24 Sep 24 00:55 UTC |
	| start   | -p embed-certs-650507                                  | embed-certs-650507           | jenkins | v1.34.0 | 24 Sep 24 00:55 UTC | 24 Sep 24 00:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-811247                              | cert-expiration-811247       | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC | 24 Sep 24 00:56 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-674057             | no-preload-674057            | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC | 24 Sep 24 00:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-674057                                   | no-preload-674057            | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-811247                              | cert-expiration-811247       | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC | 24 Sep 24 00:56 UTC |
	| delete  | -p                                                     | disable-driver-mounts-319683 | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC | 24 Sep 24 00:56 UTC |
	|         | disable-driver-mounts-319683                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-465341 | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC | 24 Sep 24 00:57 UTC |
	|         | default-k8s-diff-port-465341                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-650507            | embed-certs-650507           | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC | 24 Sep 24 00:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-650507                                  | embed-certs-650507           | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-465341  | default-k8s-diff-port-465341 | jenkins | v1.34.0 | 24 Sep 24 00:57 UTC | 24 Sep 24 00:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-465341 | jenkins | v1.34.0 | 24 Sep 24 00:57 UTC |                     |
	|         | default-k8s-diff-port-465341                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-674057                  | no-preload-674057            | jenkins | v1.34.0 | 24 Sep 24 00:58 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-674057                                   | no-preload-674057            | jenkins | v1.34.0 | 24 Sep 24 00:58 UTC | 24 Sep 24 01:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-650507                 | embed-certs-650507           | jenkins | v1.34.0 | 24 Sep 24 00:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-171598        | old-k8s-version-171598       | jenkins | v1.34.0 | 24 Sep 24 00:59 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-650507                                  | embed-certs-650507           | jenkins | v1.34.0 | 24 Sep 24 00:59 UTC | 24 Sep 24 01:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-465341       | default-k8s-diff-port-465341 | jenkins | v1.34.0 | 24 Sep 24 00:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-465341 | jenkins | v1.34.0 | 24 Sep 24 01:00 UTC | 24 Sep 24 01:08 UTC |
	|         | default-k8s-diff-port-465341                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-171598                              | old-k8s-version-171598       | jenkins | v1.34.0 | 24 Sep 24 01:00 UTC | 24 Sep 24 01:00 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-171598             | old-k8s-version-171598       | jenkins | v1.34.0 | 24 Sep 24 01:00 UTC | 24 Sep 24 01:00 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-171598                              | old-k8s-version-171598       | jenkins | v1.34.0 | 24 Sep 24 01:00 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 01:00:40
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 01:00:40.983605   61989 out.go:345] Setting OutFile to fd 1 ...
	I0924 01:00:40.983716   61989 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 01:00:40.983722   61989 out.go:358] Setting ErrFile to fd 2...
	I0924 01:00:40.983728   61989 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 01:00:40.983918   61989 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7623/.minikube/bin
	I0924 01:00:40.984500   61989 out.go:352] Setting JSON to false
	I0924 01:00:40.985412   61989 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6185,"bootTime":1727133456,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0924 01:00:40.985513   61989 start.go:139] virtualization: kvm guest
	I0924 01:00:40.987848   61989 out.go:177] * [old-k8s-version-171598] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0924 01:00:40.989366   61989 out.go:177]   - MINIKUBE_LOCATION=19696
	I0924 01:00:40.989467   61989 notify.go:220] Checking for updates...
	I0924 01:00:40.992462   61989 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 01:00:40.994144   61989 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 01:00:40.995782   61989 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7623/.minikube
	I0924 01:00:40.997503   61989 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0924 01:00:40.999038   61989 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 01:00:41.000959   61989 config.go:182] Loaded profile config "old-k8s-version-171598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0924 01:00:41.001315   61989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:00:41.001388   61989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:00:41.017304   61989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41055
	I0924 01:00:41.017751   61989 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:00:41.018320   61989 main.go:141] libmachine: Using API Version  1
	I0924 01:00:41.018355   61989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:00:41.018708   61989 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:00:41.018964   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:00:41.021075   61989 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0924 01:00:41.022764   61989 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 01:00:41.023156   61989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:00:41.023204   61989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:00:41.038764   61989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40545
	I0924 01:00:41.039238   61989 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:00:41.039828   61989 main.go:141] libmachine: Using API Version  1
	I0924 01:00:41.039856   61989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:00:41.040272   61989 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:00:41.040569   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:00:41.078622   61989 out.go:177] * Using the kvm2 driver based on existing profile
	I0924 01:00:41.079930   61989 start.go:297] selected driver: kvm2
	I0924 01:00:41.079945   61989 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-171598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-171598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 01:00:41.080076   61989 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 01:00:41.080841   61989 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 01:00:41.080927   61989 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19696-7623/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0924 01:00:41.096851   61989 install.go:137] /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0924 01:00:41.097306   61989 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 01:00:41.097345   61989 cni.go:84] Creating CNI manager for ""
	I0924 01:00:41.097410   61989 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:00:41.097465   61989 start.go:340] cluster config:
	{Name:old-k8s-version-171598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-171598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 01:00:41.097610   61989 iso.go:125] acquiring lock: {Name:mk8a983e49e920eac32e0f79c34143c3d478115e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 01:00:41.099797   61989 out.go:177] * Starting "old-k8s-version-171598" primary control-plane node in "old-k8s-version-171598" cluster
	I0924 01:00:39.376584   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:00:41.101644   61989 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0924 01:00:41.101691   61989 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0924 01:00:41.101704   61989 cache.go:56] Caching tarball of preloaded images
	I0924 01:00:41.101801   61989 preload.go:172] Found /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0924 01:00:41.101816   61989 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0924 01:00:41.101922   61989 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/config.json ...
	I0924 01:00:41.102126   61989 start.go:360] acquireMachinesLock for old-k8s-version-171598: {Name:mkdd0eb053efc5a47fd01c8410d7f603dccb8d0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 01:00:45.456606   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:00:48.528618   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:00:54.608639   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:00:57.680645   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:03.760641   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:06.832676   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:12.912635   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:15.984629   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:22.064669   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:25.136609   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:31.216643   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:34.288667   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:40.368636   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:43.440700   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:49.520634   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:52.592658   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:58.672637   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:01.744679   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:07.824597   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:10.896693   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:16.976656   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:20.048675   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:26.128638   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:29.200595   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:35.280645   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:38.352665   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:44.432606   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:47.504721   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:53.584645   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:56.656617   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:03:02.736686   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:03:05.808671   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:03:11.888586   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:03:14.960688   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:03:21.040639   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:03:24.112705   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:03:30.192631   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:03:33.264655   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:03:36.269218   61323 start.go:364] duration metric: took 4m25.932369998s to acquireMachinesLock for "embed-certs-650507"
	I0924 01:03:36.269290   61323 start.go:96] Skipping create...Using existing machine configuration
	I0924 01:03:36.269298   61323 fix.go:54] fixHost starting: 
	I0924 01:03:36.269661   61323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:03:36.269714   61323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:03:36.285429   61323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45085
	I0924 01:03:36.285943   61323 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:03:36.286516   61323 main.go:141] libmachine: Using API Version  1
	I0924 01:03:36.286557   61323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:03:36.286885   61323 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:03:36.287078   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:03:36.287213   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetState
	I0924 01:03:36.288895   61323 fix.go:112] recreateIfNeeded on embed-certs-650507: state=Stopped err=<nil>
	I0924 01:03:36.288917   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	W0924 01:03:36.289113   61323 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 01:03:36.291435   61323 out.go:177] * Restarting existing kvm2 VM for "embed-certs-650507" ...
	I0924 01:03:36.266390   61070 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 01:03:36.266435   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetMachineName
	I0924 01:03:36.266788   61070 buildroot.go:166] provisioning hostname "no-preload-674057"
	I0924 01:03:36.266816   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetMachineName
	I0924 01:03:36.267022   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:03:36.269105   61070 machine.go:96] duration metric: took 4m37.426687547s to provisionDockerMachine
	I0924 01:03:36.269142   61070 fix.go:56] duration metric: took 4m37.448766856s for fixHost
	I0924 01:03:36.269148   61070 start.go:83] releasing machines lock for "no-preload-674057", held for 4m37.448847609s
	W0924 01:03:36.269167   61070 start.go:714] error starting host: provision: host is not running
	W0924 01:03:36.269264   61070 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0924 01:03:36.269274   61070 start.go:729] Will try again in 5 seconds ...
	I0924 01:03:36.293006   61323 main.go:141] libmachine: (embed-certs-650507) Calling .Start
	I0924 01:03:36.293199   61323 main.go:141] libmachine: (embed-certs-650507) Ensuring networks are active...
	I0924 01:03:36.294032   61323 main.go:141] libmachine: (embed-certs-650507) Ensuring network default is active
	I0924 01:03:36.294359   61323 main.go:141] libmachine: (embed-certs-650507) Ensuring network mk-embed-certs-650507 is active
	I0924 01:03:36.294718   61323 main.go:141] libmachine: (embed-certs-650507) Getting domain xml...
	I0924 01:03:36.295407   61323 main.go:141] libmachine: (embed-certs-650507) Creating domain...
	I0924 01:03:37.516049   61323 main.go:141] libmachine: (embed-certs-650507) Waiting to get IP...
	I0924 01:03:37.516959   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:37.517374   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:37.517443   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:37.517352   62594 retry.go:31] will retry after 278.072635ms: waiting for machine to come up
	I0924 01:03:37.796796   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:37.797276   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:37.797301   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:37.797242   62594 retry.go:31] will retry after 387.413297ms: waiting for machine to come up
	I0924 01:03:38.185869   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:38.186239   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:38.186258   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:38.186193   62594 retry.go:31] will retry after 363.798568ms: waiting for machine to come up
	I0924 01:03:38.551772   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:38.552181   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:38.552221   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:38.552122   62594 retry.go:31] will retry after 392.798012ms: waiting for machine to come up
	I0924 01:03:38.946523   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:38.947069   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:38.947097   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:38.947018   62594 retry.go:31] will retry after 541.413772ms: waiting for machine to come up
	I0924 01:03:39.489873   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:39.490278   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:39.490307   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:39.490226   62594 retry.go:31] will retry after 804.62107ms: waiting for machine to come up
	I0924 01:03:41.271024   61070 start.go:360] acquireMachinesLock for no-preload-674057: {Name:mkdd0eb053efc5a47fd01c8410d7f603dccb8d0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 01:03:40.296290   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:40.296775   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:40.296806   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:40.296726   62594 retry.go:31] will retry after 882.018637ms: waiting for machine to come up
	I0924 01:03:41.180799   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:41.181242   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:41.181263   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:41.181197   62594 retry.go:31] will retry after 961.194045ms: waiting for machine to come up
	I0924 01:03:42.143878   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:42.144354   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:42.144379   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:42.144270   62594 retry.go:31] will retry after 1.647837023s: waiting for machine to come up
	I0924 01:03:43.793458   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:43.793892   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:43.793933   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:43.793873   62594 retry.go:31] will retry after 1.751902059s: waiting for machine to come up
	I0924 01:03:45.547905   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:45.548356   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:45.548388   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:45.548313   62594 retry.go:31] will retry after 2.380106471s: waiting for machine to come up
	I0924 01:03:47.931021   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:47.931513   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:47.931537   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:47.931456   62594 retry.go:31] will retry after 2.395516641s: waiting for machine to come up
	I0924 01:03:50.328214   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:50.328766   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:50.328791   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:50.328729   62594 retry.go:31] will retry after 4.41219579s: waiting for machine to come up
	I0924 01:03:54.745159   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.745572   61323 main.go:141] libmachine: (embed-certs-650507) Found IP for machine: 192.168.39.104
	I0924 01:03:54.745606   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has current primary IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.745615   61323 main.go:141] libmachine: (embed-certs-650507) Reserving static IP address...
	I0924 01:03:54.746020   61323 main.go:141] libmachine: (embed-certs-650507) Reserved static IP address: 192.168.39.104
	I0924 01:03:54.746042   61323 main.go:141] libmachine: (embed-certs-650507) Waiting for SSH to be available...
	I0924 01:03:54.746067   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "embed-certs-650507", mac: "52:54:00:46:07:2d", ip: "192.168.39.104"} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:54.746134   61323 main.go:141] libmachine: (embed-certs-650507) DBG | skip adding static IP to network mk-embed-certs-650507 - found existing host DHCP lease matching {name: "embed-certs-650507", mac: "52:54:00:46:07:2d", ip: "192.168.39.104"}
	I0924 01:03:54.746159   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Getting to WaitForSSH function...
	I0924 01:03:54.748464   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.748871   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:54.748906   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.749083   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Using SSH client type: external
	I0924 01:03:54.749118   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Using SSH private key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa (-rw-------)
	I0924 01:03:54.749153   61323 main.go:141] libmachine: (embed-certs-650507) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 01:03:54.749165   61323 main.go:141] libmachine: (embed-certs-650507) DBG | About to run SSH command:
	I0924 01:03:54.749177   61323 main.go:141] libmachine: (embed-certs-650507) DBG | exit 0
	I0924 01:03:54.872532   61323 main.go:141] libmachine: (embed-certs-650507) DBG | SSH cmd err, output: <nil>: 
	I0924 01:03:54.872869   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetConfigRaw
	I0924 01:03:54.873480   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetIP
	I0924 01:03:54.876545   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.876922   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:54.876953   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.877204   61323 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/embed-certs-650507/config.json ...
	I0924 01:03:54.877443   61323 machine.go:93] provisionDockerMachine start ...
	I0924 01:03:54.877467   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:03:54.877683   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:54.879873   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.880200   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:54.880221   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.880375   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:03:54.880546   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:54.880681   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:54.880866   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:03:54.881002   61323 main.go:141] libmachine: Using SSH client type: native
	I0924 01:03:54.881194   61323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0924 01:03:54.881207   61323 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 01:03:54.984605   61323 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 01:03:54.984636   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetMachineName
	I0924 01:03:54.984922   61323 buildroot.go:166] provisioning hostname "embed-certs-650507"
	I0924 01:03:54.984948   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetMachineName
	I0924 01:03:54.985185   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:54.988284   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.988699   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:54.988725   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.988857   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:03:54.989069   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:54.989344   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:54.989529   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:03:54.989731   61323 main.go:141] libmachine: Using SSH client type: native
	I0924 01:03:54.989899   61323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0924 01:03:54.989913   61323 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-650507 && echo "embed-certs-650507" | sudo tee /etc/hostname
	I0924 01:03:55.106214   61323 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-650507
	
	I0924 01:03:55.106273   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:55.109000   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.109310   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.109334   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.109498   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:03:55.109646   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.109839   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.109989   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:03:55.110123   61323 main.go:141] libmachine: Using SSH client type: native
	I0924 01:03:55.110303   61323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0924 01:03:55.110318   61323 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-650507' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-650507/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-650507' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 01:03:55.220699   61323 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 01:03:55.220738   61323 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19696-7623/.minikube CaCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19696-7623/.minikube}
	I0924 01:03:55.220755   61323 buildroot.go:174] setting up certificates
	I0924 01:03:55.220763   61323 provision.go:84] configureAuth start
	I0924 01:03:55.220771   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetMachineName
	I0924 01:03:55.221112   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetIP
	I0924 01:03:55.224166   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.224603   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.224634   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.224839   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:55.226847   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.227167   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.227194   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.227308   61323 provision.go:143] copyHostCerts
	I0924 01:03:55.227386   61323 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem, removing ...
	I0924 01:03:55.227409   61323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0924 01:03:55.227490   61323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem (1082 bytes)
	I0924 01:03:55.227641   61323 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem, removing ...
	I0924 01:03:55.227653   61323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0924 01:03:55.227695   61323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem (1123 bytes)
	I0924 01:03:55.227781   61323 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem, removing ...
	I0924 01:03:55.227791   61323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0924 01:03:55.227826   61323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem (1679 bytes)
	I0924 01:03:55.227909   61323 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem org=jenkins.embed-certs-650507 san=[127.0.0.1 192.168.39.104 embed-certs-650507 localhost minikube]
	I0924 01:03:55.917061   61699 start.go:364] duration metric: took 3m46.693519233s to acquireMachinesLock for "default-k8s-diff-port-465341"
	I0924 01:03:55.917135   61699 start.go:96] Skipping create...Using existing machine configuration
	I0924 01:03:55.917144   61699 fix.go:54] fixHost starting: 
	I0924 01:03:55.917553   61699 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:03:55.917606   61699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:03:55.937566   61699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37613
	I0924 01:03:55.937971   61699 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:03:55.938529   61699 main.go:141] libmachine: Using API Version  1
	I0924 01:03:55.938556   61699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:03:55.938923   61699 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:03:55.939182   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:03:55.939365   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetState
	I0924 01:03:55.941155   61699 fix.go:112] recreateIfNeeded on default-k8s-diff-port-465341: state=Stopped err=<nil>
	I0924 01:03:55.941197   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	W0924 01:03:55.941417   61699 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 01:03:55.943640   61699 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-465341" ...
	I0924 01:03:55.309866   61323 provision.go:177] copyRemoteCerts
	I0924 01:03:55.309928   61323 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 01:03:55.309955   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:55.312946   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.313365   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.313388   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.313638   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:03:55.313889   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.314062   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:03:55.314206   61323 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa Username:docker}
	I0924 01:03:55.394427   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0924 01:03:55.420595   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0924 01:03:55.444377   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 01:03:55.467261   61323 provision.go:87] duration metric: took 246.485242ms to configureAuth
	I0924 01:03:55.467302   61323 buildroot.go:189] setting minikube options for container-runtime
	I0924 01:03:55.467483   61323 config.go:182] Loaded profile config "embed-certs-650507": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 01:03:55.467552   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:55.470146   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.470539   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.470572   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.470719   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:03:55.470961   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.471101   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.471299   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:03:55.471450   61323 main.go:141] libmachine: Using SSH client type: native
	I0924 01:03:55.471653   61323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0924 01:03:55.471676   61323 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 01:03:55.688189   61323 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 01:03:55.688218   61323 machine.go:96] duration metric: took 810.761675ms to provisionDockerMachine
	I0924 01:03:55.688230   61323 start.go:293] postStartSetup for "embed-certs-650507" (driver="kvm2")
	I0924 01:03:55.688244   61323 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 01:03:55.688266   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:03:55.688659   61323 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 01:03:55.688690   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:55.691375   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.691761   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.691791   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.691881   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:03:55.692105   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.692309   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:03:55.692453   61323 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa Username:docker}
	I0924 01:03:55.775412   61323 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 01:03:55.779423   61323 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 01:03:55.779448   61323 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/addons for local assets ...
	I0924 01:03:55.779536   61323 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/files for local assets ...
	I0924 01:03:55.779629   61323 filesync.go:149] local asset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> 147932.pem in /etc/ssl/certs
	I0924 01:03:55.779742   61323 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 01:03:55.788717   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /etc/ssl/certs/147932.pem (1708 bytes)
	I0924 01:03:55.811673   61323 start.go:296] duration metric: took 123.428914ms for postStartSetup
	I0924 01:03:55.811717   61323 fix.go:56] duration metric: took 19.542419045s for fixHost
	I0924 01:03:55.811743   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:55.814745   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.815034   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.815062   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.815247   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:03:55.815449   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.815634   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.815851   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:03:55.816012   61323 main.go:141] libmachine: Using SSH client type: native
	I0924 01:03:55.816168   61323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0924 01:03:55.816178   61323 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 01:03:55.916845   61323 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727139835.894204557
	
	I0924 01:03:55.916883   61323 fix.go:216] guest clock: 1727139835.894204557
	I0924 01:03:55.916896   61323 fix.go:229] Guest: 2024-09-24 01:03:55.894204557 +0000 UTC Remote: 2024-09-24 01:03:55.811721448 +0000 UTC m=+285.612741728 (delta=82.483109ms)
	I0924 01:03:55.916935   61323 fix.go:200] guest clock delta is within tolerance: 82.483109ms
	I0924 01:03:55.916945   61323 start.go:83] releasing machines lock for "embed-certs-650507", held for 19.6476761s
	I0924 01:03:55.916990   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:03:55.917314   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetIP
	I0924 01:03:55.920105   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.920550   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.920583   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.920832   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:03:55.921327   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:03:55.921510   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:03:55.921578   61323 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 01:03:55.921634   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:55.921747   61323 ssh_runner.go:195] Run: cat /version.json
	I0924 01:03:55.921771   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:55.924238   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.924430   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.924717   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.924741   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.924775   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.924792   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.924953   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:03:55.925061   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:03:55.925153   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.925277   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.925360   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:03:55.925439   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:03:55.925582   61323 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa Username:docker}
	I0924 01:03:55.925626   61323 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa Username:docker}
	I0924 01:03:56.005229   61323 ssh_runner.go:195] Run: systemctl --version
	I0924 01:03:56.046189   61323 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 01:03:56.187701   61323 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 01:03:56.193313   61323 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 01:03:56.193379   61323 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 01:03:56.209278   61323 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 01:03:56.209298   61323 start.go:495] detecting cgroup driver to use...
	I0924 01:03:56.209363   61323 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 01:03:56.226995   61323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 01:03:56.241102   61323 docker.go:217] disabling cri-docker service (if available) ...
	I0924 01:03:56.241160   61323 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 01:03:56.255002   61323 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 01:03:56.269805   61323 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 01:03:56.387382   61323 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 01:03:56.545138   61323 docker.go:233] disabling docker service ...
	I0924 01:03:56.545220   61323 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 01:03:56.559017   61323 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 01:03:56.571939   61323 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 01:03:56.694139   61323 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 01:03:56.811253   61323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 01:03:56.825480   61323 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 01:03:56.842777   61323 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 01:03:56.842830   61323 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:03:56.852387   61323 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 01:03:56.852447   61323 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:03:56.862702   61323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:03:56.872790   61323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:03:56.882864   61323 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 01:03:56.893029   61323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:03:56.903314   61323 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:03:56.923491   61323 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:03:56.933424   61323 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 01:03:56.944496   61323 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 01:03:56.944561   61323 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 01:03:56.957077   61323 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 01:03:56.968602   61323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:03:57.080955   61323 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 01:03:57.179826   61323 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 01:03:57.179900   61323 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 01:03:57.184652   61323 start.go:563] Will wait 60s for crictl version
	I0924 01:03:57.184716   61323 ssh_runner.go:195] Run: which crictl
	I0924 01:03:57.190300   61323 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 01:03:57.239310   61323 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 01:03:57.239371   61323 ssh_runner.go:195] Run: crio --version
	I0924 01:03:57.266833   61323 ssh_runner.go:195] Run: crio --version
	I0924 01:03:57.301876   61323 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 01:03:55.945290   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .Start
	I0924 01:03:55.945498   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Ensuring networks are active...
	I0924 01:03:55.946346   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Ensuring network default is active
	I0924 01:03:55.946726   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Ensuring network mk-default-k8s-diff-port-465341 is active
	I0924 01:03:55.947152   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Getting domain xml...
	I0924 01:03:55.947872   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Creating domain...
	I0924 01:03:57.236194   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting to get IP...
	I0924 01:03:57.237037   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:03:57.237445   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:03:57.237497   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:03:57.237413   62713 retry.go:31] will retry after 286.244795ms: waiting for machine to come up
	I0924 01:03:57.525009   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:03:57.525595   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:03:57.525621   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:03:57.525548   62713 retry.go:31] will retry after 273.807213ms: waiting for machine to come up
	I0924 01:03:57.801217   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:03:57.801734   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:03:57.801756   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:03:57.801701   62713 retry.go:31] will retry after 371.291567ms: waiting for machine to come up
	I0924 01:03:58.174283   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:03:58.174746   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:03:58.174781   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:03:58.174692   62713 retry.go:31] will retry after 595.157579ms: waiting for machine to come up
	I0924 01:03:58.771428   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:03:58.771900   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:03:58.771925   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:03:58.771862   62713 retry.go:31] will retry after 734.305784ms: waiting for machine to come up
	I0924 01:03:57.303135   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetIP
	I0924 01:03:57.306110   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:57.306598   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:57.306624   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:57.306783   61323 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0924 01:03:57.310829   61323 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:03:57.322605   61323 kubeadm.go:883] updating cluster {Name:embed-certs-650507 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-650507 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 01:03:57.322715   61323 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 01:03:57.322761   61323 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 01:03:57.358040   61323 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0924 01:03:57.358104   61323 ssh_runner.go:195] Run: which lz4
	I0924 01:03:57.361948   61323 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 01:03:57.365911   61323 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 01:03:57.365950   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0924 01:03:58.651636   61323 crio.go:462] duration metric: took 1.289721413s to copy over tarball
	I0924 01:03:58.651708   61323 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0924 01:03:59.507803   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:03:59.508308   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:03:59.508356   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:03:59.508237   62713 retry.go:31] will retry after 875.394603ms: waiting for machine to come up
	I0924 01:04:00.385279   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:00.385713   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:04:00.385748   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:04:00.385655   62713 retry.go:31] will retry after 885.980109ms: waiting for machine to come up
	I0924 01:04:01.273114   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:01.273545   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:04:01.273590   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:04:01.273535   62713 retry.go:31] will retry after 935.451975ms: waiting for machine to come up
	I0924 01:04:02.210920   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:02.211399   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:04:02.211423   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:04:02.211331   62713 retry.go:31] will retry after 1.254573538s: waiting for machine to come up
	I0924 01:04:03.467027   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:03.467593   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:04:03.467626   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:04:03.467488   62713 retry.go:31] will retry after 2.044247818s: waiting for machine to come up
	I0924 01:04:00.805580   61323 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.153837858s)
	I0924 01:04:00.805608   61323 crio.go:469] duration metric: took 2.153947595s to extract the tarball
	I0924 01:04:00.805617   61323 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0924 01:04:00.846074   61323 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 01:04:00.895803   61323 crio.go:514] all images are preloaded for cri-o runtime.
	I0924 01:04:00.895833   61323 cache_images.go:84] Images are preloaded, skipping loading
	I0924 01:04:00.895842   61323 kubeadm.go:934] updating node { 192.168.39.104 8443 v1.31.1 crio true true} ...
	I0924 01:04:00.895966   61323 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-650507 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-650507 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 01:04:00.896041   61323 ssh_runner.go:195] Run: crio config
	I0924 01:04:00.941958   61323 cni.go:84] Creating CNI manager for ""
	I0924 01:04:00.941985   61323 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:04:00.941998   61323 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 01:04:00.942029   61323 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.104 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-650507 NodeName:embed-certs-650507 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 01:04:00.942202   61323 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.104
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-650507"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.104
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.104"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 01:04:00.942292   61323 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 01:04:00.952748   61323 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 01:04:00.952853   61323 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 01:04:00.962984   61323 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0924 01:04:00.980030   61323 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 01:04:01.001571   61323 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0924 01:04:01.018760   61323 ssh_runner.go:195] Run: grep 192.168.39.104	control-plane.minikube.internal$ /etc/hosts
	I0924 01:04:01.022770   61323 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.104	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:04:01.034816   61323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:04:01.157888   61323 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 01:04:01.175883   61323 certs.go:68] Setting up /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/embed-certs-650507 for IP: 192.168.39.104
	I0924 01:04:01.175911   61323 certs.go:194] generating shared ca certs ...
	I0924 01:04:01.175937   61323 certs.go:226] acquiring lock for ca certs: {Name:mk91de42c259e96c17f08dd58c70c00821bd0735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:04:01.176134   61323 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key
	I0924 01:04:01.176198   61323 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key
	I0924 01:04:01.176211   61323 certs.go:256] generating profile certs ...
	I0924 01:04:01.176324   61323 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/embed-certs-650507/client.key
	I0924 01:04:01.176441   61323 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/embed-certs-650507/apiserver.key.86682f38
	I0924 01:04:01.176515   61323 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/embed-certs-650507/proxy-client.key
	I0924 01:04:01.176640   61323 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem (1338 bytes)
	W0924 01:04:01.176669   61323 certs.go:480] ignoring /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793_empty.pem, impossibly tiny 0 bytes
	I0924 01:04:01.176678   61323 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem (1675 bytes)
	I0924 01:04:01.176713   61323 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem (1082 bytes)
	I0924 01:04:01.176749   61323 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem (1123 bytes)
	I0924 01:04:01.176778   61323 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem (1679 bytes)
	I0924 01:04:01.176987   61323 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem (1708 bytes)
	I0924 01:04:01.177918   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 01:04:01.221682   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 01:04:01.266005   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 01:04:01.299467   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0924 01:04:01.324598   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/embed-certs-650507/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0924 01:04:01.349526   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/embed-certs-650507/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 01:04:01.385589   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/embed-certs-650507/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 01:04:01.409713   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/embed-certs-650507/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0924 01:04:01.433745   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem --> /usr/share/ca-certificates/14793.pem (1338 bytes)
	I0924 01:04:01.457493   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /usr/share/ca-certificates/147932.pem (1708 bytes)
	I0924 01:04:01.482197   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 01:04:01.505740   61323 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 01:04:01.524029   61323 ssh_runner.go:195] Run: openssl version
	I0924 01:04:01.530147   61323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 01:04:01.541117   61323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:01.545823   61323 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:01.545894   61323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:01.551638   61323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 01:04:01.562373   61323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14793.pem && ln -fs /usr/share/ca-certificates/14793.pem /etc/ssl/certs/14793.pem"
	I0924 01:04:01.573502   61323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14793.pem
	I0924 01:04:01.578561   61323 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 23:55 /usr/share/ca-certificates/14793.pem
	I0924 01:04:01.578634   61323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14793.pem
	I0924 01:04:01.584415   61323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14793.pem /etc/ssl/certs/51391683.0"
	I0924 01:04:01.595312   61323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147932.pem && ln -fs /usr/share/ca-certificates/147932.pem /etc/ssl/certs/147932.pem"
	I0924 01:04:01.606503   61323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147932.pem
	I0924 01:04:01.611530   61323 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 23:55 /usr/share/ca-certificates/147932.pem
	I0924 01:04:01.611602   61323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147932.pem
	I0924 01:04:01.618484   61323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147932.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 01:04:01.629332   61323 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 01:04:01.634238   61323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 01:04:01.640266   61323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 01:04:01.646306   61323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 01:04:01.652510   61323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 01:04:01.658237   61323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 01:04:01.663962   61323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0924 01:04:01.669998   61323 kubeadm.go:392] StartCluster: {Name:embed-certs-650507 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-650507 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 01:04:01.670105   61323 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 01:04:01.670162   61323 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 01:04:01.706478   61323 cri.go:89] found id: ""
	I0924 01:04:01.706555   61323 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 01:04:01.717106   61323 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 01:04:01.717127   61323 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 01:04:01.717188   61323 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 01:04:01.729966   61323 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 01:04:01.730947   61323 kubeconfig.go:125] found "embed-certs-650507" server: "https://192.168.39.104:8443"
	I0924 01:04:01.732933   61323 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 01:04:01.745538   61323 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.104
	I0924 01:04:01.745581   61323 kubeadm.go:1160] stopping kube-system containers ...
	I0924 01:04:01.745594   61323 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0924 01:04:01.745649   61323 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 01:04:01.783313   61323 cri.go:89] found id: ""
	I0924 01:04:01.783423   61323 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 01:04:01.801432   61323 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 01:04:01.811282   61323 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 01:04:01.811308   61323 kubeadm.go:157] found existing configuration files:
	
	I0924 01:04:01.811371   61323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 01:04:01.820717   61323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 01:04:01.820780   61323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 01:04:01.830289   61323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 01:04:01.839383   61323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 01:04:01.839449   61323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 01:04:01.848920   61323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 01:04:01.857986   61323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 01:04:01.858045   61323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 01:04:01.867465   61323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 01:04:01.876598   61323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 01:04:01.876680   61323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
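The stale-config cleanup above boils down to: for each kubeconfig under /etc/kubernetes, keep it only if it already references the expected control-plane endpoint, otherwise delete it so kubeadm regenerates it in the next phase. A minimal sketch of that loop, assuming it runs directly on the node rather than through minikube's ssh_runner (endpoint and file list taken from the log):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8443" // expected server URL from the log
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		// grep exits non-zero when the endpoint (or the file itself) is missing.
    		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
    			fmt.Printf("%s does not reference %s - removing\n", f, endpoint)
    			if err := exec.Command("sudo", "rm", "-f", f).Run(); err != nil {
    				fmt.Fprintln(os.Stderr, "rm failed:", err)
    			}
    		}
    	}
    }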
	I0924 01:04:01.886122   61323 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 01:04:01.896245   61323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:02.004839   61323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:03.077983   61323 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.073104284s)
	I0924 01:04:03.078020   61323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:03.295254   61323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:03.369968   61323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
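The restart path re-runs individual kubeadm init phases rather than a full init, in the order shown above: certs, kubeconfig, kubelet-start, control-plane, etcd. A sketch of that sequence, assuming kubeadm is on PATH locally (minikube actually invokes it over SSH with PATH pointing at /var/lib/minikube/binaries/v1.31.1):

    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	// Phases in the order they appear in the log above.
    	phases := [][]string{
    		{"init", "phase", "certs", "all"},
    		{"init", "phase", "kubeconfig", "all"},
    		{"init", "phase", "kubelet-start"},
    		{"init", "phase", "control-plane", "all"},
    		{"init", "phase", "etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append([]string{"kubeadm"}, p...)
    		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
    		out, err := exec.Command("sudo", args...).CombinedOutput()
    		if err != nil {
    			log.Fatalf("kubeadm %v failed: %v\n%s", p, err, out)
    		}
    	}
    }

Stopping after the first failure mirrors the log: each later phase depends on the artifacts written by the earlier ones.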
	I0924 01:04:03.458283   61323 api_server.go:52] waiting for apiserver process to appear ...
	I0924 01:04:03.458383   61323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:03.958648   61323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:04.459039   61323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:04.958614   61323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:04.994450   61323 api_server.go:72] duration metric: took 1.536167442s to wait for apiserver process to appear ...
	I0924 01:04:04.994485   61323 api_server.go:88] waiting for apiserver healthz status ...
	I0924 01:04:04.994530   61323 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0924 01:04:04.995139   61323 api_server.go:269] stopped: https://192.168.39.104:8443/healthz: Get "https://192.168.39.104:8443/healthz": dial tcp 192.168.39.104:8443: connect: connection refused
	I0924 01:04:05.513732   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:05.514247   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:04:05.514275   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:04:05.514201   62713 retry.go:31] will retry after 2.814717647s: waiting for machine to come up
	I0924 01:04:08.331550   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:08.331964   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:04:08.331983   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:04:08.331932   62713 retry.go:31] will retry after 2.942261445s: waiting for machine to come up
	I0924 01:04:05.495090   61323 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0924 01:04:07.946057   61323 api_server.go:279] https://192.168.39.104:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 01:04:07.946116   61323 api_server.go:103] status: https://192.168.39.104:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 01:04:07.946135   61323 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0924 01:04:08.018665   61323 api_server.go:279] https://192.168.39.104:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:08.018711   61323 api_server.go:103] status: https://192.168.39.104:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:08.018729   61323 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0924 01:04:08.027105   61323 api_server.go:279] https://192.168.39.104:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:08.027144   61323 api_server.go:103] status: https://192.168.39.104:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:08.494630   61323 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0924 01:04:08.500471   61323 api_server.go:279] https://192.168.39.104:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:08.500494   61323 api_server.go:103] status: https://192.168.39.104:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:08.995055   61323 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0924 01:04:09.017236   61323 api_server.go:279] https://192.168.39.104:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:09.017272   61323 api_server.go:103] status: https://192.168.39.104:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:09.494769   61323 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0924 01:04:09.500285   61323 api_server.go:279] https://192.168.39.104:8443/healthz returned 200:
	ok
	I0924 01:04:09.507440   61323 api_server.go:141] control plane version: v1.31.1
	I0924 01:04:09.507470   61323 api_server.go:131] duration metric: took 4.512953508s to wait for apiserver health ...
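The healthz wait treats both the 403 (anonymous user rejected) and the 500 (post-start hooks still failing) responses above as "not ready yet"; only a 200 ends the wait. A minimal poller along those lines, assuming plain HTTPS with certificate verification disabled for brevity (minikube authenticates with the cluster's client certificates instead):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	url := "https://192.168.39.104:8443/healthz" // endpoint from the log
    	client := &http.Client{
    		Timeout:   2 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			code := resp.StatusCode
    			resp.Body.Close()
    			if code == http.StatusOK {
    				fmt.Println("apiserver healthy")
    				return
    			}
    			fmt.Println("apiserver not ready yet, status", code) // 403/500 while booting
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for apiserver healthz")
    }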
	I0924 01:04:09.507478   61323 cni.go:84] Creating CNI manager for ""
	I0924 01:04:09.507485   61323 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:04:09.509661   61323 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 01:04:09.511104   61323 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 01:04:09.529080   61323 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0924 01:04:09.567695   61323 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 01:04:09.579425   61323 system_pods.go:59] 8 kube-system pods found
	I0924 01:04:09.579470   61323 system_pods.go:61] "coredns-7c65d6cfc9-xgs6g" [b975196f-e9e6-4e30-a49b-8d3031f73a21] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0924 01:04:09.579489   61323 system_pods.go:61] "etcd-embed-certs-650507" [c24d7e21-08a8-42bd-9def-1808d8a58e07] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0924 01:04:09.579501   61323 system_pods.go:61] "kube-apiserver-embed-certs-650507" [f1de6ed5-a87f-4d1d-8feb-d0f80851b5b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0924 01:04:09.579509   61323 system_pods.go:61] "kube-controller-manager-embed-certs-650507" [d0d454bf-b9d3-4dcb-957c-f1329e4e9e98] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0924 01:04:09.579516   61323 system_pods.go:61] "kube-proxy-qd4lg" [f06c009f-3c62-4e54-82fd-ca468fb05bbc] Running
	I0924 01:04:09.579523   61323 system_pods.go:61] "kube-scheduler-embed-certs-650507" [e4931370-821e-4289-9b2b-9b46d9f8394e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0924 01:04:09.579532   61323 system_pods.go:61] "metrics-server-6867b74b74-pc28v" [688d7bbe-9fee-450f-aecf-bbb3413a3633] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:04:09.579536   61323 system_pods.go:61] "storage-provisioner" [9e354a3c-e4f1-46e1-b5fb-de8243f41c29] Running
	I0924 01:04:09.579542   61323 system_pods.go:74] duration metric: took 11.824796ms to wait for pod list to return data ...
	I0924 01:04:09.579550   61323 node_conditions.go:102] verifying NodePressure condition ...
	I0924 01:04:09.584175   61323 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 01:04:09.584203   61323 node_conditions.go:123] node cpu capacity is 2
	I0924 01:04:09.584214   61323 node_conditions.go:105] duration metric: took 4.659859ms to run NodePressure ...
	I0924 01:04:09.584230   61323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:09.847130   61323 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0924 01:04:09.851985   61323 kubeadm.go:739] kubelet initialised
	I0924 01:04:09.852008   61323 kubeadm.go:740] duration metric: took 4.853319ms waiting for restarted kubelet to initialise ...
	I0924 01:04:09.852015   61323 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:04:09.857149   61323 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-xgs6g" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:11.275680   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:11.276135   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:04:11.276166   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:04:11.276102   62713 retry.go:31] will retry after 3.599939746s: waiting for machine to come up
	I0924 01:04:11.865712   61323 pod_ready.go:103] pod "coredns-7c65d6cfc9-xgs6g" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:13.864779   61323 pod_ready.go:93] pod "coredns-7c65d6cfc9-xgs6g" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:13.864801   61323 pod_ready.go:82] duration metric: took 4.007625744s for pod "coredns-7c65d6cfc9-xgs6g" in "kube-system" namespace to be "Ready" ...
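The pod_ready wait above amounts to polling the pod's Ready condition until it reports True. A rough stand-in using kubectl, with the context and pod name copied from the log (minikube queries the API directly rather than shelling out, so this is only an illustration):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func main() {
    	// jsonpath pulls just the Ready condition's status ("True"/"False").
    	args := []string{
    		"--context", "embed-certs-650507",
    		"-n", "kube-system",
    		"get", "pod", "coredns-7c65d6cfc9-xgs6g",
    		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`,
    	}
    	for i := 0; i < 120; i++ {
    		out, err := exec.Command("kubectl", args...).Output()
    		if err == nil && strings.TrimSpace(string(out)) == "True" {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("gave up waiting for pod to become Ready")
    }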
	I0924 01:04:13.864809   61323 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:16.233175   61989 start.go:364] duration metric: took 3m35.131018203s to acquireMachinesLock for "old-k8s-version-171598"
	I0924 01:04:16.233254   61989 start.go:96] Skipping create...Using existing machine configuration
	I0924 01:04:16.233262   61989 fix.go:54] fixHost starting: 
	I0924 01:04:16.233733   61989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:04:16.233787   61989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:04:16.255690   61989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42181
	I0924 01:04:16.256135   61989 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:04:16.256729   61989 main.go:141] libmachine: Using API Version  1
	I0924 01:04:16.256763   61989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:04:16.257122   61989 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:04:16.257365   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:04:16.257560   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetState
	I0924 01:04:16.259055   61989 fix.go:112] recreateIfNeeded on old-k8s-version-171598: state=Stopped err=<nil>
	I0924 01:04:16.259091   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	W0924 01:04:16.259266   61989 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 01:04:16.261327   61989 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-171598" ...
	I0924 01:04:14.879977   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:14.880533   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has current primary IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:14.880563   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Found IP for machine: 192.168.61.186
	I0924 01:04:14.880596   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Reserving static IP address...
	I0924 01:04:14.881148   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-465341", mac: "52:54:00:e4:1f:79", ip: "192.168.61.186"} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:14.881171   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | skip adding static IP to network mk-default-k8s-diff-port-465341 - found existing host DHCP lease matching {name: "default-k8s-diff-port-465341", mac: "52:54:00:e4:1f:79", ip: "192.168.61.186"}
	I0924 01:04:14.881188   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Reserved static IP address: 192.168.61.186
	I0924 01:04:14.881216   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for SSH to be available...
	I0924 01:04:14.881229   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | Getting to WaitForSSH function...
	I0924 01:04:14.883679   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:14.884060   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:14.884083   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:14.884214   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | Using SSH client type: external
	I0924 01:04:14.884248   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | Using SSH private key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa (-rw-------)
	I0924 01:04:14.884276   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.186 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 01:04:14.884287   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | About to run SSH command:
	I0924 01:04:14.884298   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | exit 0
	I0924 01:04:15.012764   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | SSH cmd err, output: <nil>: 
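WaitForSSH above simply retries a no-op command ("exit 0") over SSH until the guest accepts the connection. A stripped-down version, with the key path, user and IP copied from the log (the real code builds the full ssh argument list shown a few lines earlier):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	key := "/home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa"
    	target := "docker@192.168.61.186"
    	for attempt := 1; attempt <= 30; attempt++ {
    		cmd := exec.Command("ssh",
    			"-o", "StrictHostKeyChecking=no",
    			"-o", "UserKnownHostsFile=/dev/null",
    			"-o", "ConnectTimeout=10",
    			"-i", key, target, "exit 0")
    		if err := cmd.Run(); err == nil {
    			fmt.Println("SSH is available")
    			return
    		}
    		time.Sleep(5 * time.Second) // guest may still be booting
    	}
    	fmt.Println("SSH never became available")
    }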
	I0924 01:04:15.013163   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetConfigRaw
	I0924 01:04:15.013983   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetIP
	I0924 01:04:15.016664   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.017173   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:15.017207   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.017440   61699 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/config.json ...
	I0924 01:04:15.017668   61699 machine.go:93] provisionDockerMachine start ...
	I0924 01:04:15.017687   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:04:15.017915   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:15.020388   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.020816   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:15.020839   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.021074   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:15.021249   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.021513   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.021681   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:15.021850   61699 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:15.022031   61699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.186 22 <nil> <nil>}
	I0924 01:04:15.022041   61699 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 01:04:15.132672   61699 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 01:04:15.132706   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetMachineName
	I0924 01:04:15.132994   61699 buildroot.go:166] provisioning hostname "default-k8s-diff-port-465341"
	I0924 01:04:15.133025   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetMachineName
	I0924 01:04:15.133268   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:15.135929   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.136371   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:15.136399   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.136578   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:15.136850   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.137008   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.137193   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:15.137407   61699 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:15.137589   61699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.186 22 <nil> <nil>}
	I0924 01:04:15.137610   61699 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-465341 && echo "default-k8s-diff-port-465341" | sudo tee /etc/hostname
	I0924 01:04:15.262142   61699 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-465341
	
	I0924 01:04:15.262174   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:15.265359   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.265736   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:15.265761   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.265962   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:15.266176   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.266335   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.266510   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:15.266705   61699 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:15.266903   61699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.186 22 <nil> <nil>}
	I0924 01:04:15.266926   61699 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-465341' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-465341/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-465341' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 01:04:15.385085   61699 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 01:04:15.385122   61699 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19696-7623/.minikube CaCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19696-7623/.minikube}
	I0924 01:04:15.385158   61699 buildroot.go:174] setting up certificates
	I0924 01:04:15.385174   61699 provision.go:84] configureAuth start
	I0924 01:04:15.385186   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetMachineName
	I0924 01:04:15.385556   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetIP
	I0924 01:04:15.388350   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.388798   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:15.388828   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.388985   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:15.391478   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.391793   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:15.391823   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.391952   61699 provision.go:143] copyHostCerts
	I0924 01:04:15.392016   61699 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem, removing ...
	I0924 01:04:15.392045   61699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0924 01:04:15.392115   61699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem (1679 bytes)
	I0924 01:04:15.392259   61699 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem, removing ...
	I0924 01:04:15.392272   61699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0924 01:04:15.392306   61699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem (1082 bytes)
	I0924 01:04:15.392406   61699 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem, removing ...
	I0924 01:04:15.392415   61699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0924 01:04:15.392440   61699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem (1123 bytes)
	I0924 01:04:15.392503   61699 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-465341 san=[127.0.0.1 192.168.61.186 default-k8s-diff-port-465341 localhost minikube]
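The server certificate above is generated for a fixed SAN set (loopback, the machine IP, the profile name, localhost and minikube) and signed by the profile's CA. A self-contained sketch that builds a certificate with the same SANs, self-signed here for brevity instead of CA-signed:

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// SANs and org taken from the provisioning log above.
    	tmpl := x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-465341"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"default-k8s-diff-port-465341", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.186")},
    	}
    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		panic(err)
    	}
    	// Self-signed: the template doubles as the parent certificate.
    	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }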
	I0924 01:04:15.572588   61699 provision.go:177] copyRemoteCerts
	I0924 01:04:15.572682   61699 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 01:04:15.572718   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:15.575884   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.576356   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:15.576401   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.576627   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:15.576868   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.577099   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:15.577248   61699 sshutil.go:53] new ssh client: &{IP:192.168.61.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa Username:docker}
	I0924 01:04:15.662231   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0924 01:04:15.686800   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0924 01:04:15.709860   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 01:04:15.738063   61699 provision.go:87] duration metric: took 352.876914ms to configureAuth
	I0924 01:04:15.738105   61699 buildroot.go:189] setting minikube options for container-runtime
	I0924 01:04:15.738302   61699 config.go:182] Loaded profile config "default-k8s-diff-port-465341": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 01:04:15.738420   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:15.741231   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.741644   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:15.741693   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.741835   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:15.742036   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.742218   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.742359   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:15.742526   61699 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:15.742727   61699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.186 22 <nil> <nil>}
	I0924 01:04:15.742754   61699 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 01:04:15.986096   61699 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 01:04:15.986128   61699 machine.go:96] duration metric: took 968.446778ms to provisionDockerMachine
	I0924 01:04:15.986143   61699 start.go:293] postStartSetup for "default-k8s-diff-port-465341" (driver="kvm2")
	I0924 01:04:15.986156   61699 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 01:04:15.986183   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:04:15.986639   61699 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 01:04:15.986674   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:15.989692   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.990094   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:15.990124   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.990407   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:15.990643   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.990826   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:15.990958   61699 sshutil.go:53] new ssh client: &{IP:192.168.61.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa Username:docker}
	I0924 01:04:16.079174   61699 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 01:04:16.083139   61699 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 01:04:16.083168   61699 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/addons for local assets ...
	I0924 01:04:16.083251   61699 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/files for local assets ...
	I0924 01:04:16.083363   61699 filesync.go:149] local asset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> 147932.pem in /etc/ssl/certs
	I0924 01:04:16.083486   61699 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 01:04:16.094571   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /etc/ssl/certs/147932.pem (1708 bytes)
	I0924 01:04:16.117327   61699 start.go:296] duration metric: took 131.16913ms for postStartSetup
	I0924 01:04:16.117364   61699 fix.go:56] duration metric: took 20.200222398s for fixHost
	I0924 01:04:16.117384   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:16.120507   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:16.120857   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:16.120899   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:16.121059   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:16.121325   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:16.121511   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:16.121687   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:16.121901   61699 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:16.122100   61699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.186 22 <nil> <nil>}
	I0924 01:04:16.122113   61699 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 01:04:16.232986   61699 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727139856.205476339
	
	I0924 01:04:16.233013   61699 fix.go:216] guest clock: 1727139856.205476339
	I0924 01:04:16.233024   61699 fix.go:229] Guest: 2024-09-24 01:04:16.205476339 +0000 UTC Remote: 2024-09-24 01:04:16.117368802 +0000 UTC m=+247.038042336 (delta=88.107537ms)
	I0924 01:04:16.233086   61699 fix.go:200] guest clock delta is within tolerance: 88.107537ms
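The guest clock check parses the output of `date +%s.%N` on the VM and compares it against the host clock, accepting the restart only when the delta is within tolerance. A small sketch of that comparison, with the timestamp hard-coded from the log and an illustrative tolerance value (not minikube's exact threshold):

    package main

    import (
    	"fmt"
    	"strconv"
    	"time"
    )

    func main() {
    	guestRaw := "1727139856.205476339" // output of `date +%s.%N` from the log
    	secs, err := strconv.ParseFloat(guestRaw, 64)
    	if err != nil {
    		panic(err)
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	delta := time.Since(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	tolerance := 2 * time.Second // illustrative threshold
    	fmt.Printf("guest clock delta: %v (tolerance %v)\n", delta, tolerance)
    	if delta > tolerance {
    		fmt.Println("clock skew too large - time sync would be needed")
    	}
    }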
	I0924 01:04:16.233094   61699 start.go:83] releasing machines lock for "default-k8s-diff-port-465341", held for 20.315992151s
	I0924 01:04:16.233133   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:04:16.233491   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetIP
	I0924 01:04:16.236719   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:16.237104   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:16.237134   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:16.237290   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:04:16.237850   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:04:16.238019   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:04:16.238116   61699 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 01:04:16.238167   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:16.238227   61699 ssh_runner.go:195] Run: cat /version.json
	I0924 01:04:16.238260   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:16.241123   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:16.241448   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:16.241598   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:16.241627   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:16.241732   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:16.241757   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:16.241916   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:16.241982   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:16.242152   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:16.242225   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:16.242351   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:16.242479   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:16.242543   61699 sshutil.go:53] new ssh client: &{IP:192.168.61.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa Username:docker}
	I0924 01:04:16.242880   61699 sshutil.go:53] new ssh client: &{IP:192.168.61.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa Username:docker}
	I0924 01:04:16.368841   61699 ssh_runner.go:195] Run: systemctl --version
	I0924 01:04:16.374990   61699 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 01:04:16.521604   61699 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 01:04:16.527198   61699 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 01:04:16.527290   61699 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 01:04:16.543251   61699 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 01:04:16.543278   61699 start.go:495] detecting cgroup driver to use...
	I0924 01:04:16.543357   61699 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 01:04:16.561775   61699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 01:04:16.576028   61699 docker.go:217] disabling cri-docker service (if available) ...
	I0924 01:04:16.576097   61699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 01:04:16.591757   61699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 01:04:16.607927   61699 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 01:04:16.753944   61699 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 01:04:16.917338   61699 docker.go:233] disabling docker service ...
	I0924 01:04:16.917401   61699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 01:04:16.935104   61699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 01:04:16.949717   61699 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 01:04:17.088275   61699 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 01:04:17.222093   61699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 01:04:17.236370   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 01:04:17.256277   61699 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 01:04:17.256360   61699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:17.266516   61699 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 01:04:17.266600   61699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:17.276647   61699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:17.288283   61699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:17.299232   61699 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 01:04:17.311336   61699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:17.329416   61699 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:17.351465   61699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
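
Taken together, the sed edits above leave the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf containing roughly the following settings (surrounding stock content from the ISO image is omitted):

    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
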
	I0924 01:04:17.362248   61699 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 01:04:17.372102   61699 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 01:04:17.372154   61699 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 01:04:17.392055   61699 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
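
The sequence above falls back from reading the bridge-netfilter sysctl to loading br_netfilter, then enables IPv4 forwarding. A minimal Go sketch of that fallback, assuming it runs as root directly on the guest rather than over SSH as minikube does:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // ensureNetfilter mirrors the fallback seen in the log: if the bridge-netfilter
    // sysctl is missing, load the br_netfilter module, then enable IP forwarding.
    func ensureNetfilter() error {
    	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
    		// sysctl key absent: the br_netfilter module is not loaded yet.
    		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
    			return fmt.Errorf("modprobe br_netfilter: %w", err)
    		}
    	}
    	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644)
    }

    func main() {
    	if err := ensureNetfilter(); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
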
	I0924 01:04:17.413641   61699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:04:17.541224   61699 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 01:04:17.655205   61699 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 01:04:17.655281   61699 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 01:04:17.660096   61699 start.go:563] Will wait 60s for crictl version
	I0924 01:04:17.660163   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:04:17.663880   61699 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 01:04:17.706878   61699 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 01:04:17.706959   61699 ssh_runner.go:195] Run: crio --version
	I0924 01:04:17.735377   61699 ssh_runner.go:195] Run: crio --version
	I0924 01:04:17.766744   61699 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 01:04:17.768253   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetIP
	I0924 01:04:17.771534   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:17.771952   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:17.771983   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:17.772230   61699 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0924 01:04:17.776486   61699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:04:17.792599   61699 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-465341 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-465341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.186 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 01:04:17.792744   61699 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 01:04:17.792813   61699 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 01:04:17.831837   61699 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0924 01:04:17.831929   61699 ssh_runner.go:195] Run: which lz4
	I0924 01:04:17.836193   61699 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 01:04:17.840562   61699 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 01:04:17.840596   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
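
The preload handling above checks for /preloaded.tar.lz4 on the guest and only transfers the ~388 MB tarball when it is missing. A local-copy sketch of that check-then-copy step (a plain file copy stands in for the SCP transfer; paths are illustrative):

    package main

    import (
    	"fmt"
    	"io"
    	"os"
    )

    // ensurePreload copies the preloaded image tarball to the target path only if
    // it is not already there, mirroring the stat-then-scp sequence in the log.
    func ensurePreload(src, dst string) error {
    	if _, err := os.Stat(dst); err == nil {
    		return nil // already present, nothing to transfer
    	}
    	in, err := os.Open(src)
    	if err != nil {
    		return err
    	}
    	defer in.Close()
    	out, err := os.Create(dst)
    	if err != nil {
    		return err
    	}
    	defer out.Close()
    	n, err := io.Copy(out, in)
    	if err == nil {
    		fmt.Printf("copied %d bytes to %s\n", n, dst)
    	}
    	return err
    }

    func main() {
    	_ = ensurePreload("preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4", "/preloaded.tar.lz4")
    }
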
	I0924 01:04:15.871512   61323 pod_ready.go:93] pod "etcd-embed-certs-650507" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:15.871540   61323 pod_ready.go:82] duration metric: took 2.006723245s for pod "etcd-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:15.871552   61323 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:17.879872   61323 pod_ready.go:93] pod "kube-apiserver-embed-certs-650507" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:17.879899   61323 pod_ready.go:82] duration metric: took 2.008337801s for pod "kube-apiserver-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:17.879918   61323 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:19.888007   61323 pod_ready.go:93] pod "kube-controller-manager-embed-certs-650507" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:19.888041   61323 pod_ready.go:82] duration metric: took 2.008114424s for pod "kube-controller-manager-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:19.888056   61323 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-qd4lg" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:19.894805   61323 pod_ready.go:93] pod "kube-proxy-qd4lg" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:19.894844   61323 pod_ready.go:82] duration metric: took 6.779022ms for pod "kube-proxy-qd4lg" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:19.894862   61323 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:19.900353   61323 pod_ready.go:93] pod "kube-scheduler-embed-certs-650507" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:19.900387   61323 pod_ready.go:82] duration metric: took 5.513733ms for pod "kube-scheduler-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:19.900401   61323 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace to be "Ready" ...
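
The pod_ready.go waits above poll each control-plane pod until its PodReady condition is True, with a 4m0s ceiling. A client-go sketch of the same readiness test (polling interval and kubeconfig path are assumptions, not minikube's implementation):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isReady checks the PodReady condition, the signal the waits above rely on.
    func isReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    // waitForPod polls until the named pod reports Ready or the timeout expires.
    func waitForPod(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    		if err == nil && isReady(pod) {
    			return nil
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	fmt.Println(waitForPod(cs, "kube-system", "etcd-embed-certs-650507", 4*time.Minute))
    }
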
	I0924 01:04:16.262929   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .Start
	I0924 01:04:16.263123   61989 main.go:141] libmachine: (old-k8s-version-171598) Ensuring networks are active...
	I0924 01:04:16.264062   61989 main.go:141] libmachine: (old-k8s-version-171598) Ensuring network default is active
	I0924 01:04:16.264543   61989 main.go:141] libmachine: (old-k8s-version-171598) Ensuring network mk-old-k8s-version-171598 is active
	I0924 01:04:16.264954   61989 main.go:141] libmachine: (old-k8s-version-171598) Getting domain xml...
	I0924 01:04:16.265899   61989 main.go:141] libmachine: (old-k8s-version-171598) Creating domain...
	I0924 01:04:17.566157   61989 main.go:141] libmachine: (old-k8s-version-171598) Waiting to get IP...
	I0924 01:04:17.567223   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:17.567644   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:17.567724   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:17.567625   62886 retry.go:31] will retry after 301.652575ms: waiting for machine to come up
	I0924 01:04:17.871163   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:17.871700   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:17.871729   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:17.871645   62886 retry.go:31] will retry after 337.632324ms: waiting for machine to come up
	I0924 01:04:18.211081   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:18.211954   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:18.212013   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:18.211892   62886 retry.go:31] will retry after 431.70455ms: waiting for machine to come up
	I0924 01:04:18.645408   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:18.646017   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:18.646044   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:18.645958   62886 retry.go:31] will retry after 582.966569ms: waiting for machine to come up
	I0924 01:04:19.230457   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:19.230954   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:19.230980   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:19.230897   62886 retry.go:31] will retry after 720.62326ms: waiting for machine to come up
	I0924 01:04:19.953023   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:19.953570   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:19.953603   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:19.953512   62886 retry.go:31] will retry after 688.597177ms: waiting for machine to come up
	I0924 01:04:20.644150   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:20.644636   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:20.644672   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:20.644578   62886 retry.go:31] will retry after 1.084671138s: waiting for machine to come up
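
The retry.go lines above wait for the libvirt domain to obtain a DHCP lease, sleeping a little longer after each failed lookup. A generic retry-with-growing-delay sketch in that spirit (the delay policy here is illustrative, not libmachine's):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff retries fn with a randomized, growing delay, similar in
    // spirit to the waits in the log while the VM acquires an IP address.
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		// Grow the wait each attempt and add jitter so callers do not sync up.
    		wait := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
    		fmt.Printf("will retry after %v: %v\n", wait, err)
    		time.Sleep(wait)
    	}
    	return err
    }

    func main() {
    	tries := 0
    	err := retryWithBackoff(8, 300*time.Millisecond, func() error {
    		tries++
    		if tries < 4 {
    			return errors.New("waiting for machine to come up")
    		}
    		return nil // pretend the domain finally reported an IP
    	})
    	fmt.Println("done:", err)
    }
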
	I0924 01:04:19.165501   61699 crio.go:462] duration metric: took 1.329329949s to copy over tarball
	I0924 01:04:19.165575   61699 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0924 01:04:21.323478   61699 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.157877766s)
	I0924 01:04:21.323509   61699 crio.go:469] duration metric: took 2.157979404s to extract the tarball
	I0924 01:04:21.323516   61699 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0924 01:04:21.360397   61699 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 01:04:21.401282   61699 crio.go:514] all images are preloaded for cri-o runtime.
	I0924 01:04:21.401309   61699 cache_images.go:84] Images are preloaded, skipping loading
	I0924 01:04:21.401319   61699 kubeadm.go:934] updating node { 192.168.61.186 8444 v1.31.1 crio true true} ...
	I0924 01:04:21.401441   61699 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-465341 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.186
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-465341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 01:04:21.401524   61699 ssh_runner.go:195] Run: crio config
	I0924 01:04:21.447706   61699 cni.go:84] Creating CNI manager for ""
	I0924 01:04:21.447730   61699 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:04:21.447741   61699 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 01:04:21.447766   61699 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.186 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-465341 NodeName:default-k8s-diff-port-465341 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.186"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.186 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 01:04:21.447939   61699 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.186
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-465341"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.186
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.186"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 01:04:21.448022   61699 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 01:04:21.457882   61699 binaries.go:44] Found k8s binaries, skipping transfer
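
The kubeadm/kubelet/kube-proxy YAML shown above is rendered from the node options earlier in the log. A trimmed text/template sketch of how such an InitConfiguration stanza can be generated (template and struct fields are hypothetical, not minikube's kubeadm.go):

    package main

    import (
    	"os"
    	"text/template"
    )

    // A minimal template covering only the InitConfiguration fields visible above.
    const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
    `

    type nodeOpts struct {
    	NodeIP        string
    	APIServerPort int
    	CRISocket     string
    	NodeName      string
    }

    func main() {
    	opts := nodeOpts{
    		NodeIP:        "192.168.61.186",
    		APIServerPort: 8444,
    		CRISocket:     "unix:///var/run/crio/crio.sock",
    		NodeName:      "default-k8s-diff-port-465341",
    	}
    	template.Must(template.New("init").Parse(initTmpl)).Execute(os.Stdout, opts)
    }
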
	I0924 01:04:21.457967   61699 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 01:04:21.467329   61699 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0924 01:04:21.483464   61699 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 01:04:21.500880   61699 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0924 01:04:21.517179   61699 ssh_runner.go:195] Run: grep 192.168.61.186	control-plane.minikube.internal$ /etc/hosts
	I0924 01:04:21.521032   61699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.186	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:04:21.532339   61699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:04:21.655583   61699 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 01:04:21.671964   61699 certs.go:68] Setting up /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341 for IP: 192.168.61.186
	I0924 01:04:21.672019   61699 certs.go:194] generating shared ca certs ...
	I0924 01:04:21.672044   61699 certs.go:226] acquiring lock for ca certs: {Name:mk91de42c259e96c17f08dd58c70c00821bd0735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:04:21.672273   61699 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key
	I0924 01:04:21.672390   61699 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key
	I0924 01:04:21.672409   61699 certs.go:256] generating profile certs ...
	I0924 01:04:21.672536   61699 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/client.key
	I0924 01:04:21.672629   61699 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/apiserver.key.b6f5ff18
	I0924 01:04:21.672696   61699 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/proxy-client.key
	I0924 01:04:21.672940   61699 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem (1338 bytes)
	W0924 01:04:21.672987   61699 certs.go:480] ignoring /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793_empty.pem, impossibly tiny 0 bytes
	I0924 01:04:21.672999   61699 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem (1675 bytes)
	I0924 01:04:21.673029   61699 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem (1082 bytes)
	I0924 01:04:21.673060   61699 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem (1123 bytes)
	I0924 01:04:21.673091   61699 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem (1679 bytes)
	I0924 01:04:21.673133   61699 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem (1708 bytes)
	I0924 01:04:21.673884   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 01:04:21.706165   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 01:04:21.735352   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 01:04:21.763358   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0924 01:04:21.786284   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0924 01:04:21.814844   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 01:04:21.839773   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 01:04:21.866549   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0924 01:04:21.889901   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 01:04:21.914875   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem --> /usr/share/ca-certificates/14793.pem (1338 bytes)
	I0924 01:04:21.939116   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /usr/share/ca-certificates/147932.pem (1708 bytes)
	I0924 01:04:21.963264   61699 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 01:04:21.980912   61699 ssh_runner.go:195] Run: openssl version
	I0924 01:04:21.986725   61699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 01:04:21.998128   61699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:22.002832   61699 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:22.002903   61699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:22.008847   61699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 01:04:22.019274   61699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14793.pem && ln -fs /usr/share/ca-certificates/14793.pem /etc/ssl/certs/14793.pem"
	I0924 01:04:22.030110   61699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14793.pem
	I0924 01:04:22.035920   61699 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 23:55 /usr/share/ca-certificates/14793.pem
	I0924 01:04:22.035996   61699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14793.pem
	I0924 01:04:22.043505   61699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14793.pem /etc/ssl/certs/51391683.0"
	I0924 01:04:22.057224   61699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147932.pem && ln -fs /usr/share/ca-certificates/147932.pem /etc/ssl/certs/147932.pem"
	I0924 01:04:22.067596   61699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147932.pem
	I0924 01:04:22.071957   61699 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 23:55 /usr/share/ca-certificates/147932.pem
	I0924 01:04:22.072029   61699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147932.pem
	I0924 01:04:22.077495   61699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147932.pem /etc/ssl/certs/3ec20f2e.0"
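
Each CA above is installed by symlinking it under its OpenSSL subject hash (e.g. b5213941.0) in /etc/ssl/certs so TLS libraries can locate it. A Go sketch of that hash-and-link convention, shelling out to openssl as the log does (paths are illustrative; minikube runs the equivalent shell over SSH):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkCertByHash computes the OpenSSL subject hash of a CA certificate and
    // exposes it in the certs directory as <hash>.0.
    func linkCertByHash(certPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return fmt.Errorf("openssl hash: %w", err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(certsDir, hash+".0")
    	_ = os.Remove(link) // ln -fs semantics: replace any stale link
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
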
	I0924 01:04:22.087627   61699 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 01:04:22.092049   61699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 01:04:22.097908   61699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 01:04:22.103716   61699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 01:04:22.109871   61699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 01:04:22.116088   61699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 01:04:22.121760   61699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0924 01:04:22.127473   61699 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-465341 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-465341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.186 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 01:04:22.127563   61699 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 01:04:22.127613   61699 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 01:04:22.167951   61699 cri.go:89] found id: ""
	I0924 01:04:22.168054   61699 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 01:04:22.177878   61699 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 01:04:22.177898   61699 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 01:04:22.177949   61699 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 01:04:22.187116   61699 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 01:04:22.188577   61699 kubeconfig.go:125] found "default-k8s-diff-port-465341" server: "https://192.168.61.186:8444"
	I0924 01:04:22.191744   61699 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 01:04:22.200936   61699 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.186
	I0924 01:04:22.200967   61699 kubeadm.go:1160] stopping kube-system containers ...
	I0924 01:04:22.200979   61699 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0924 01:04:22.201039   61699 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 01:04:22.247804   61699 cri.go:89] found id: ""
	I0924 01:04:22.247888   61699 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 01:04:22.263853   61699 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 01:04:22.273254   61699 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 01:04:22.273271   61699 kubeadm.go:157] found existing configuration files:
	
	I0924 01:04:22.273327   61699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0924 01:04:22.281724   61699 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 01:04:22.281790   61699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 01:04:22.290823   61699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0924 01:04:22.299422   61699 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 01:04:22.299482   61699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 01:04:22.308961   61699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0924 01:04:22.317922   61699 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 01:04:22.318010   61699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 01:04:22.326980   61699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0924 01:04:22.335995   61699 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 01:04:22.336084   61699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
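
The grep-then-rm loop above removes kubeconfig files that no longer reference the expected control-plane endpoint before they are regenerated. A compact sketch of the same cleanup (file list and endpoint taken from the log, error handling simplified):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    // pruneStaleConfigs removes kubeconfig-style files that do not point at the
    // expected control-plane endpoint.
    func pruneStaleConfigs(endpoint string, files []string) {
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
    			fmt.Printf("%q not found in %s - removing\n", endpoint, f)
    			_ = os.Remove(f) // ignore "file does not exist", like rm -f
    		}
    	}
    }

    func main() {
    	pruneStaleConfigs("https://control-plane.minikube.internal:8444", []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	})
    }
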
	I0924 01:04:22.345002   61699 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 01:04:22.354302   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:22.462157   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:23.380163   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:23.610795   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:23.679134   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:23.747119   61699 api_server.go:52] waiting for apiserver process to appear ...
	I0924 01:04:23.747191   61699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:21.909834   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:24.104163   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:21.730823   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:21.731385   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:21.731411   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:21.731351   62886 retry.go:31] will retry after 1.051424847s: waiting for machine to come up
	I0924 01:04:22.784644   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:22.785194   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:22.785223   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:22.785138   62886 retry.go:31] will retry after 1.750498954s: waiting for machine to come up
	I0924 01:04:24.537680   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:24.538085   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:24.538109   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:24.538039   62886 retry.go:31] will retry after 2.015183238s: waiting for machine to come up
	I0924 01:04:24.247859   61699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:24.748076   61699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:25.248220   61699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:25.747481   61699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:25.774137   61699 api_server.go:72] duration metric: took 2.027016323s to wait for apiserver process to appear ...
	I0924 01:04:25.774167   61699 api_server.go:88] waiting for apiserver healthz status ...
	I0924 01:04:25.774194   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:25.774901   61699 api_server.go:269] stopped: https://192.168.61.186:8444/healthz: Get "https://192.168.61.186:8444/healthz": dial tcp 192.168.61.186:8444: connect: connection refused
	I0924 01:04:26.275226   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:28.290581   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 01:04:28.290621   61699 api_server.go:103] status: https://192.168.61.186:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 01:04:28.290637   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:28.321353   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 01:04:28.321386   61699 api_server.go:103] status: https://192.168.61.186:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 01:04:28.775068   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:28.779873   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:28.779896   61699 api_server.go:103] status: https://192.168.61.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
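
The healthz progression above is typical for a restarting apiserver: first connection refused, then 403 for the anonymous probe until RBAC bootstrap completes, then 500 while post-start hooks finish, and finally 200. A sketch of a poller that treats everything but 200 as retryable (TLS verification is skipped here only because the sketch does not load the cluster CA; minikube's api_server.go does more):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitHealthz polls the apiserver /healthz endpoint until it returns 200 or
    // the timeout expires, logging intermediate 403/500 responses.
    func waitHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver healthz not OK within %v", timeout)
    }

    func main() {
    	fmt.Println(waitHealthz("https://192.168.61.186:8444/healthz", 4*time.Minute))
    }
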
	I0924 01:04:26.408349   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:28.409816   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:26.555221   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:26.555674   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:26.555695   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:26.555634   62886 retry.go:31] will retry after 2.568414115s: waiting for machine to come up
	I0924 01:04:29.127625   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:29.128130   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:29.128149   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:29.128108   62886 retry.go:31] will retry after 2.207252231s: waiting for machine to come up
	I0924 01:04:29.275326   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:29.284304   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:29.284360   61699 api_server.go:103] status: https://192.168.61.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:29.774975   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:29.779470   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:29.779503   61699 api_server.go:103] status: https://192.168.61.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:30.275137   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:30.279256   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:30.279287   61699 api_server.go:103] status: https://192.168.61.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:30.774874   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:30.779081   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:30.779110   61699 api_server.go:103] status: https://192.168.61.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:31.275163   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:31.279417   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:31.279446   61699 api_server.go:103] status: https://192.168.61.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:31.775022   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:31.780092   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 200:
	ok
	I0924 01:04:31.787643   61699 api_server.go:141] control plane version: v1.31.1
	I0924 01:04:31.787672   61699 api_server.go:131] duration metric: took 6.013498176s to wait for apiserver health ...
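
The repeated 500s above are the apiserver's aggregated /healthz report; the restart only proceeds once the endpoint answers 200 "ok" instead of listing failed post-start hooks such as rbac/bootstrap-roles and apiservice-discovery-controller. Below is a minimal sketch of that polling loop; the URL comes from the log, but the timeout and client settings are invented rather than copied from api_server.go:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver's /healthz endpoint until it answers
	// 200, printing the [+]/[-] check report on every 500 in the meantime.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver presents a self-signed certificate here, so this
			// illustrative check skips verification.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("%s returned %d:\n%s\n", url, resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond) // the log shows roughly 500ms between checks
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		fmt.Println(waitForHealthz("https://192.168.61.186:8444/healthz", time.Minute))
	}
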
	I0924 01:04:31.787680   61699 cni.go:84] Creating CNI manager for ""
	I0924 01:04:31.787686   61699 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:04:31.789733   61699 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 01:04:31.791140   61699 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 01:04:31.801441   61699 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0924 01:04:31.819890   61699 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 01:04:31.828128   61699 system_pods.go:59] 8 kube-system pods found
	I0924 01:04:31.828160   61699 system_pods.go:61] "coredns-7c65d6cfc9-xxdh2" [297fe292-94bf-468d-9e34-089c4a87429b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0924 01:04:31.828168   61699 system_pods.go:61] "etcd-default-k8s-diff-port-465341" [3bd68a1c-e928-40f0-927f-3cde2198cace] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0924 01:04:31.828177   61699 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-465341" [0a195b76-82ba-4d99-b5a3-ba918ab0b83d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0924 01:04:31.828186   61699 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-465341" [9d445611-60f3-4113-bc92-ea8df37ca2f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0924 01:04:31.828191   61699 system_pods.go:61] "kube-proxy-nf8mp" [cdef3aea-b1a8-438b-994f-c3212def9aea] Running
	I0924 01:04:31.828196   61699 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-465341" [4ff703b1-44cd-421a-891c-9f1e5d799026] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0924 01:04:31.828200   61699 system_pods.go:61] "metrics-server-6867b74b74-jtx6r" [d83599a7-f77d-4fbb-b76f-67d33c60b4a6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:04:31.828203   61699 system_pods.go:61] "storage-provisioner" [b09ad6ef-7517-4de2-a70c-83876efd804e] Running
	I0924 01:04:31.828209   61699 system_pods.go:74] duration metric: took 8.300337ms to wait for pod list to return data ...
	I0924 01:04:31.828215   61699 node_conditions.go:102] verifying NodePressure condition ...
	I0924 01:04:31.831528   61699 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 01:04:31.831550   61699 node_conditions.go:123] node cpu capacity is 2
	I0924 01:04:31.831561   61699 node_conditions.go:105] duration metric: took 3.341719ms to run NodePressure ...
	I0924 01:04:31.831576   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:32.101590   61699 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0924 01:04:32.105656   61699 kubeadm.go:739] kubelet initialised
	I0924 01:04:32.105679   61699 kubeadm.go:740] duration metric: took 4.062709ms waiting for restarted kubelet to initialise ...
	I0924 01:04:32.105691   61699 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:04:32.110237   61699 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-xxdh2" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:32.115057   61699 pod_ready.go:98] node "default-k8s-diff-port-465341" hosting pod "coredns-7c65d6cfc9-xxdh2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.115090   61699 pod_ready.go:82] duration metric: took 4.825694ms for pod "coredns-7c65d6cfc9-xxdh2" in "kube-system" namespace to be "Ready" ...
	E0924 01:04:32.115102   61699 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-465341" hosting pod "coredns-7c65d6cfc9-xxdh2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.115110   61699 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:32.119506   61699 pod_ready.go:98] node "default-k8s-diff-port-465341" hosting pod "etcd-default-k8s-diff-port-465341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.119534   61699 pod_ready.go:82] duration metric: took 4.415876ms for pod "etcd-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	E0924 01:04:32.119546   61699 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-465341" hosting pod "etcd-default-k8s-diff-port-465341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.119558   61699 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:32.124199   61699 pod_ready.go:98] node "default-k8s-diff-port-465341" hosting pod "kube-apiserver-default-k8s-diff-port-465341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.124248   61699 pod_ready.go:82] duration metric: took 4.660764ms for pod "kube-apiserver-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	E0924 01:04:32.124266   61699 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-465341" hosting pod "kube-apiserver-default-k8s-diff-port-465341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.124285   61699 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:32.223553   61699 pod_ready.go:98] node "default-k8s-diff-port-465341" hosting pod "kube-controller-manager-default-k8s-diff-port-465341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.223596   61699 pod_ready.go:82] duration metric: took 99.284751ms for pod "kube-controller-manager-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	E0924 01:04:32.223606   61699 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-465341" hosting pod "kube-controller-manager-default-k8s-diff-port-465341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.223613   61699 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-nf8mp" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:32.622500   61699 pod_ready.go:98] node "default-k8s-diff-port-465341" hosting pod "kube-proxy-nf8mp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.622527   61699 pod_ready.go:82] duration metric: took 398.907418ms for pod "kube-proxy-nf8mp" in "kube-system" namespace to be "Ready" ...
	E0924 01:04:32.622538   61699 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-465341" hosting pod "kube-proxy-nf8mp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.622545   61699 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:33.023370   61699 pod_ready.go:98] node "default-k8s-diff-port-465341" hosting pod "kube-scheduler-default-k8s-diff-port-465341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:33.023430   61699 pod_ready.go:82] duration metric: took 400.874003ms for pod "kube-scheduler-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	E0924 01:04:33.023458   61699 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-465341" hosting pod "kube-scheduler-default-k8s-diff-port-465341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:33.023472   61699 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:33.422810   61699 pod_ready.go:98] node "default-k8s-diff-port-465341" hosting pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:33.422841   61699 pod_ready.go:82] duration metric: took 399.35051ms for pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace to be "Ready" ...
	E0924 01:04:33.422851   61699 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-465341" hosting pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:33.422859   61699 pod_ready.go:39] duration metric: took 1.317159668s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
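
Each pod_ready wait above is skipped while the node itself still reports "Ready":"False", and otherwise polls until the pod's own Ready condition becomes True. The following is a hedged client-go sketch of just that condition check, not minikube's pod_ready.go; the kubeconfig path and pod name are copied from the log purely for illustration:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether a pod's PodReady condition is True -- the
	// condition each wait above is ultimately polling for.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Kubeconfig path and pod name are taken from the log for illustration only.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19696-7623/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(), "coredns-7c65d6cfc9-xxdh2", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("ready:", isPodReady(pod))
	}
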
	I0924 01:04:33.422874   61699 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 01:04:33.434449   61699 ops.go:34] apiserver oom_adj: -16
	I0924 01:04:33.434473   61699 kubeadm.go:597] duration metric: took 11.256568213s to restartPrimaryControlPlane
	I0924 01:04:33.434481   61699 kubeadm.go:394] duration metric: took 11.307014166s to StartCluster
	I0924 01:04:33.434501   61699 settings.go:142] acquiring lock: {Name:mk2498060bfcc1414033a486e76a223de88ed325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:04:33.434571   61699 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 01:04:33.436172   61699 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/kubeconfig: {Name:mk3b29105ccc2e600682c419fcfd45ce5d22b5fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:04:33.436515   61699 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.186 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 01:04:33.436732   61699 config.go:182] Loaded profile config "default-k8s-diff-port-465341": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 01:04:33.436686   61699 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0924 01:04:33.436809   61699 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-465341"
	I0924 01:04:33.436815   61699 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-465341"
	I0924 01:04:33.436830   61699 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-465341"
	I0924 01:04:33.436832   61699 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-465341"
	I0924 01:04:33.436864   61699 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-465341"
	W0924 01:04:33.436877   61699 addons.go:243] addon metrics-server should already be in state true
	I0924 01:04:33.436908   61699 host.go:66] Checking if "default-k8s-diff-port-465341" exists ...
	W0924 01:04:33.436842   61699 addons.go:243] addon storage-provisioner should already be in state true
	I0924 01:04:33.436935   61699 host.go:66] Checking if "default-k8s-diff-port-465341" exists ...
	I0924 01:04:33.436831   61699 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-465341"
	I0924 01:04:33.437322   61699 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:04:33.437370   61699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:04:33.437377   61699 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:04:33.437412   61699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:04:33.437458   61699 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:04:33.437483   61699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:04:33.438259   61699 out.go:177] * Verifying Kubernetes components...
	I0924 01:04:33.439923   61699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:04:33.453108   61699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37623
	I0924 01:04:33.453545   61699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38225
	I0924 01:04:33.453608   61699 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:04:33.453916   61699 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:04:33.454125   61699 main.go:141] libmachine: Using API Version  1
	I0924 01:04:33.454152   61699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:04:33.454461   61699 main.go:141] libmachine: Using API Version  1
	I0924 01:04:33.454486   61699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:04:33.454494   61699 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:04:33.454806   61699 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:04:33.455065   61699 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:04:33.455111   61699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:04:33.455360   61699 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:04:33.455404   61699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:04:33.456716   61699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41127
	I0924 01:04:33.457163   61699 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:04:33.457688   61699 main.go:141] libmachine: Using API Version  1
	I0924 01:04:33.457727   61699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:04:33.458031   61699 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:04:33.458242   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetState
	I0924 01:04:33.461814   61699 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-465341"
	W0924 01:04:33.461835   61699 addons.go:243] addon default-storageclass should already be in state true
	I0924 01:04:33.461864   61699 host.go:66] Checking if "default-k8s-diff-port-465341" exists ...
	I0924 01:04:33.462230   61699 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:04:33.462273   61699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:04:33.471783   61699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44977
	I0924 01:04:33.472043   61699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33459
	I0924 01:04:33.472300   61699 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:04:33.472550   61699 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:04:33.472858   61699 main.go:141] libmachine: Using API Version  1
	I0924 01:04:33.472875   61699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:04:33.472994   61699 main.go:141] libmachine: Using API Version  1
	I0924 01:04:33.473003   61699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:04:33.473234   61699 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:04:33.473366   61699 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:04:33.473413   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetState
	I0924 01:04:33.473503   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetState
	I0924 01:04:33.475140   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:04:33.475553   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:04:33.477287   61699 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0924 01:04:33.477293   61699 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:04:33.478708   61699 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0924 01:04:33.478720   61699 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0924 01:04:33.478737   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:33.478836   61699 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 01:04:33.478863   61699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 01:04:33.478889   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:33.478971   61699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45635
	I0924 01:04:33.479636   61699 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:04:33.480029   61699 main.go:141] libmachine: Using API Version  1
	I0924 01:04:33.480041   61699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:04:33.480396   61699 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:04:33.482306   61699 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:04:33.482343   61699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:04:33.483280   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:33.483373   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:33.483732   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:33.483769   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:33.483873   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:33.483892   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:33.483958   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:33.484111   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:33.484236   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:33.484255   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:33.484413   61699 sshutil.go:53] new ssh client: &{IP:192.168.61.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa Username:docker}
	I0924 01:04:33.484472   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:33.484738   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:33.484866   61699 sshutil.go:53] new ssh client: &{IP:192.168.61.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa Username:docker}
	I0924 01:04:33.519981   61699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37109
	I0924 01:04:33.520440   61699 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:04:33.520996   61699 main.go:141] libmachine: Using API Version  1
	I0924 01:04:33.521028   61699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:04:33.521497   61699 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:04:33.521701   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetState
	I0924 01:04:33.523331   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:04:33.523576   61699 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 01:04:33.523591   61699 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 01:04:33.523625   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:33.526668   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:33.527211   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:33.527244   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:33.527471   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:33.527702   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:33.527889   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:33.528059   61699 sshutil.go:53] new ssh client: &{IP:192.168.61.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa Username:docker}
	I0924 01:04:33.645903   61699 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 01:04:33.663805   61699 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-465341" to be "Ready" ...
	I0924 01:04:33.749720   61699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 01:04:33.751631   61699 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0924 01:04:33.751649   61699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0924 01:04:33.755330   61699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 01:04:33.812231   61699 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0924 01:04:33.812257   61699 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0924 01:04:33.847216   61699 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 01:04:33.847240   61699 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0924 01:04:33.932057   61699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 01:04:34.781871   61699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.026510893s)
	I0924 01:04:34.781939   61699 main.go:141] libmachine: Making call to close driver server
	I0924 01:04:34.781950   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .Close
	I0924 01:04:34.781887   61699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.032127769s)
	I0924 01:04:34.782009   61699 main.go:141] libmachine: Making call to close driver server
	I0924 01:04:34.782023   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .Close
	I0924 01:04:34.782293   61699 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:04:34.782309   61699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:04:34.782318   61699 main.go:141] libmachine: Making call to close driver server
	I0924 01:04:34.782326   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .Close
	I0924 01:04:34.782361   61699 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:04:34.782369   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | Closing plugin on server side
	I0924 01:04:34.782375   61699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:04:34.782389   61699 main.go:141] libmachine: Making call to close driver server
	I0924 01:04:34.782404   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .Close
	I0924 01:04:34.782629   61699 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:04:34.782643   61699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:04:34.782645   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | Closing plugin on server side
	I0924 01:04:34.782673   61699 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:04:34.782683   61699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:04:34.790740   61699 main.go:141] libmachine: Making call to close driver server
	I0924 01:04:34.790757   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .Close
	I0924 01:04:34.790990   61699 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:04:34.791010   61699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:04:34.791013   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | Closing plugin on server side
	I0924 01:04:34.871488   61699 main.go:141] libmachine: Making call to close driver server
	I0924 01:04:34.871516   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .Close
	I0924 01:04:34.871809   61699 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:04:34.871826   61699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:04:34.871834   61699 main.go:141] libmachine: Making call to close driver server
	I0924 01:04:34.871841   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .Close
	I0924 01:04:34.872103   61699 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:04:34.872125   61699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:04:34.872117   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | Closing plugin on server side
	I0924 01:04:34.872136   61699 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-465341"
	I0924 01:04:34.874133   61699 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
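
The addon step above copies each manifest into /etc/kubernetes/addons on the guest and applies them with the bundled kubectl via ssh_runner. As a simplified local stand-in for that apply call (a hypothetical helper, not minikube's addons code; the file paths are the ones shown in the log):

	package main

	import (
		"os"
		"os/exec"
	)

	// applyManifests shells out to kubectl, roughly what the logged
	// "kubectl apply -f ... -f ..." invocation does on the guest.
	func applyManifests(kubeconfig string, manifests ...string) error {
		args := []string{"--kubeconfig", kubeconfig, "apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		cmd := exec.Command("kubectl", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}

	func main() {
		_ = applyManifests("/var/lib/minikube/kubeconfig",
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml")
	}
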
	I0924 01:04:30.907606   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:33.406280   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:31.337368   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:31.338025   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:31.338128   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:31.338011   62886 retry.go:31] will retry after 4.137847727s: waiting for machine to come up
	I0924 01:04:35.478410   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.478991   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has current primary IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.479016   61989 main.go:141] libmachine: (old-k8s-version-171598) Found IP for machine: 192.168.83.3
	I0924 01:04:35.479029   61989 main.go:141] libmachine: (old-k8s-version-171598) Reserving static IP address...
	I0924 01:04:35.479586   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "old-k8s-version-171598", mac: "52:54:00:20:3c:a7", ip: "192.168.83.3"} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:35.479607   61989 main.go:141] libmachine: (old-k8s-version-171598) Reserved static IP address: 192.168.83.3
	I0924 01:04:35.479626   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | skip adding static IP to network mk-old-k8s-version-171598 - found existing host DHCP lease matching {name: "old-k8s-version-171598", mac: "52:54:00:20:3c:a7", ip: "192.168.83.3"}
	I0924 01:04:35.479643   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | Getting to WaitForSSH function...
	I0924 01:04:35.479659   61989 main.go:141] libmachine: (old-k8s-version-171598) Waiting for SSH to be available...
	I0924 01:04:35.482028   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.482377   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:35.482419   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.482499   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | Using SSH client type: external
	I0924 01:04:35.482550   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | Using SSH private key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa (-rw-------)
	I0924 01:04:35.482585   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 01:04:35.482600   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | About to run SSH command:
	I0924 01:04:35.482614   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | exit 0
	I0924 01:04:35.613364   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | SSH cmd err, output: <nil>: 
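
WaitForSSH above repeatedly runs "exit 0" through the external ssh client until it succeeds. At the connectivity level that amounts to waiting for the guest to accept connections on port 22; the sketch below checks only that TCP reachability, with invented timeouts, and is not libmachine's actual implementation:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// waitForSSH dials the guest's SSH port until the TCP handshake succeeds
	// or the deadline passes; the real check additionally runs "exit 0" over SSH.
	func waitForSSH(addr string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("ssh not reachable on %s within %s", addr, timeout)
	}

	func main() {
		// The guest IP is the one reported for old-k8s-version-171598 above.
		fmt.Println(waitForSSH("192.168.83.3:22", time.Minute))
	}
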
	I0924 01:04:35.613847   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetConfigRaw
	I0924 01:04:35.614543   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetIP
	I0924 01:04:35.617366   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.617742   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:35.617774   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.618068   61989 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/config.json ...
	I0924 01:04:35.618260   61989 machine.go:93] provisionDockerMachine start ...
	I0924 01:04:35.618279   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:04:35.618489   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:35.621130   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.621472   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:35.621497   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.621722   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:35.621914   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:35.622091   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:35.622354   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:35.622558   61989 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:35.622749   61989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.83.3 22 <nil> <nil>}
	I0924 01:04:35.622760   61989 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 01:04:35.736637   61989 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 01:04:35.736661   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetMachineName
	I0924 01:04:35.736943   61989 buildroot.go:166] provisioning hostname "old-k8s-version-171598"
	I0924 01:04:35.736973   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetMachineName
	I0924 01:04:35.737151   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:35.739921   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.740304   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:35.740362   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.740502   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:35.740678   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:35.740851   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:35.740994   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:35.741218   61989 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:35.741409   61989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.83.3 22 <nil> <nil>}
	I0924 01:04:35.741423   61989 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-171598 && echo "old-k8s-version-171598" | sudo tee /etc/hostname
	I0924 01:04:35.866963   61989 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-171598
	
	I0924 01:04:35.866994   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:35.870342   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.870860   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:35.870893   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.871145   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:35.871406   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:35.871638   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:35.871850   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:35.872050   61989 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:35.872253   61989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.83.3 22 <nil> <nil>}
	I0924 01:04:35.872276   61989 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-171598' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-171598/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-171598' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 01:04:36.717274   61070 start.go:364] duration metric: took 55.446152288s to acquireMachinesLock for "no-preload-674057"
	I0924 01:04:36.717335   61070 start.go:96] Skipping create...Using existing machine configuration
	I0924 01:04:36.717344   61070 fix.go:54] fixHost starting: 
	I0924 01:04:36.717781   61070 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:04:36.717821   61070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:04:36.739062   61070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46693
	I0924 01:04:36.739602   61070 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:04:36.740307   61070 main.go:141] libmachine: Using API Version  1
	I0924 01:04:36.740366   61070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:04:36.740767   61070 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:04:36.741058   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:04:36.741223   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetState
	I0924 01:04:36.743313   61070 fix.go:112] recreateIfNeeded on no-preload-674057: state=Stopped err=<nil>
	I0924 01:04:36.743339   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	W0924 01:04:36.743512   61070 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 01:04:36.745694   61070 out.go:177] * Restarting existing kvm2 VM for "no-preload-674057" ...
	I0924 01:04:35.998933   61989 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 01:04:35.998962   61989 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19696-7623/.minikube CaCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19696-7623/.minikube}
	I0924 01:04:35.998983   61989 buildroot.go:174] setting up certificates
	I0924 01:04:35.998994   61989 provision.go:84] configureAuth start
	I0924 01:04:35.999005   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetMachineName
	I0924 01:04:35.999359   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetIP
	I0924 01:04:36.002499   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.003027   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.003052   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.003167   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:36.005508   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.005773   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.005796   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.005909   61989 provision.go:143] copyHostCerts
	I0924 01:04:36.005967   61989 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem, removing ...
	I0924 01:04:36.005986   61989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0924 01:04:36.006037   61989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem (1082 bytes)
	I0924 01:04:36.006129   61989 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem, removing ...
	I0924 01:04:36.006137   61989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0924 01:04:36.006156   61989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem (1123 bytes)
	I0924 01:04:36.006209   61989 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem, removing ...
	I0924 01:04:36.006216   61989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0924 01:04:36.006237   61989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem (1679 bytes)
	I0924 01:04:36.006310   61989 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-171598 san=[127.0.0.1 192.168.83.3 localhost minikube old-k8s-version-171598]
	I0924 01:04:36.084609   61989 provision.go:177] copyRemoteCerts
	I0924 01:04:36.084671   61989 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 01:04:36.084698   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:36.087740   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.088046   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.088075   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.088278   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:36.088523   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.088716   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:36.088854   61989 sshutil.go:53] new ssh client: &{IP:192.168.83.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa Username:docker}
	I0924 01:04:36.178597   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0924 01:04:36.202768   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0924 01:04:36.225933   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0924 01:04:36.250014   61989 provision.go:87] duration metric: took 251.005829ms to configureAuth
	I0924 01:04:36.250046   61989 buildroot.go:189] setting minikube options for container-runtime
	I0924 01:04:36.250369   61989 config.go:182] Loaded profile config "old-k8s-version-171598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0924 01:04:36.250453   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:36.253290   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.253912   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.253943   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.254242   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:36.254474   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.254650   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.254764   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:36.254958   61989 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:36.255124   61989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.83.3 22 <nil> <nil>}
	I0924 01:04:36.255138   61989 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 01:04:36.472324   61989 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 01:04:36.472381   61989 machine.go:96] duration metric: took 854.106776ms to provisionDockerMachine
	I0924 01:04:36.472401   61989 start.go:293] postStartSetup for "old-k8s-version-171598" (driver="kvm2")
	I0924 01:04:36.472419   61989 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 01:04:36.472451   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:04:36.472814   61989 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 01:04:36.472849   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:36.475567   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.475941   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.475969   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.476125   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:36.476403   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.476614   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:36.476831   61989 sshutil.go:53] new ssh client: &{IP:192.168.83.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa Username:docker}
	I0924 01:04:36.562688   61989 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 01:04:36.566476   61989 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 01:04:36.566501   61989 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/addons for local assets ...
	I0924 01:04:36.566561   61989 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/files for local assets ...
	I0924 01:04:36.566635   61989 filesync.go:149] local asset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> 147932.pem in /etc/ssl/certs
	I0924 01:04:36.566724   61989 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 01:04:36.576132   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /etc/ssl/certs/147932.pem (1708 bytes)
	I0924 01:04:36.599696   61989 start.go:296] duration metric: took 127.276787ms for postStartSetup
	I0924 01:04:36.599738   61989 fix.go:56] duration metric: took 20.366477202s for fixHost
	I0924 01:04:36.599763   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:36.603462   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.603836   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.603867   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.604057   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:36.604500   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.604721   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.604878   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:36.605041   61989 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:36.605285   61989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.83.3 22 <nil> <nil>}
	I0924 01:04:36.605303   61989 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 01:04:36.717061   61989 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727139876.688490589
	
	I0924 01:04:36.717091   61989 fix.go:216] guest clock: 1727139876.688490589
	I0924 01:04:36.717102   61989 fix.go:229] Guest: 2024-09-24 01:04:36.688490589 +0000 UTC Remote: 2024-09-24 01:04:36.599742488 +0000 UTC m=+235.652611441 (delta=88.748101ms)
	I0924 01:04:36.717157   61989 fix.go:200] guest clock delta is within tolerance: 88.748101ms
	I0924 01:04:36.717165   61989 start.go:83] releasing machines lock for "old-k8s-version-171598", held for 20.483937438s
	I0924 01:04:36.717199   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:04:36.717499   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetIP
	I0924 01:04:36.720466   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.720959   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.720986   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.721189   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:04:36.721763   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:04:36.721965   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:04:36.722073   61989 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 01:04:36.722118   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:36.722187   61989 ssh_runner.go:195] Run: cat /version.json
	I0924 01:04:36.722215   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:36.725171   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.725384   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.725669   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.725694   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.725858   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:36.725970   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.726016   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.726065   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.726249   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:36.726254   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:36.726494   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.726513   61989 sshutil.go:53] new ssh client: &{IP:192.168.83.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa Username:docker}
	I0924 01:04:36.726657   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:36.727049   61989 sshutil.go:53] new ssh client: &{IP:192.168.83.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa Username:docker}
	I0924 01:04:36.845385   61989 ssh_runner.go:195] Run: systemctl --version
	I0924 01:04:36.853307   61989 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 01:04:37.001850   61989 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 01:04:37.009873   61989 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 01:04:37.009948   61989 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 01:04:37.032269   61989 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 01:04:37.032299   61989 start.go:495] detecting cgroup driver to use...
	I0924 01:04:37.032403   61989 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 01:04:37.056250   61989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 01:04:37.072827   61989 docker.go:217] disabling cri-docker service (if available) ...
	I0924 01:04:37.072903   61989 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 01:04:37.090639   61989 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 01:04:37.107525   61989 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 01:04:37.235495   61989 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 01:04:37.410971   61989 docker.go:233] disabling docker service ...
	I0924 01:04:37.411034   61989 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 01:04:37.427815   61989 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 01:04:37.444121   61989 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 01:04:37.568933   61989 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 01:04:37.700008   61989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 01:04:37.715529   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 01:04:37.736908   61989 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0924 01:04:37.736980   61989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:37.748540   61989 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 01:04:37.748590   61989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:37.759301   61989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:37.771008   61989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:37.782080   61989 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 01:04:37.793756   61989 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 01:04:37.803444   61989 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 01:04:37.803525   61989 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 01:04:37.818012   61989 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 01:04:37.829019   61989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:04:37.978885   61989 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 01:04:38.086263   61989 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 01:04:38.086353   61989 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 01:04:38.093479   61989 start.go:563] Will wait 60s for crictl version
	I0924 01:04:38.093573   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:38.097486   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 01:04:38.138781   61989 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 01:04:38.138872   61989 ssh_runner.go:195] Run: crio --version
	I0924 01:04:38.166832   61989 ssh_runner.go:195] Run: crio --version
	I0924 01:04:38.199764   61989 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0924 01:04:36.747491   61070 main.go:141] libmachine: (no-preload-674057) Calling .Start
	I0924 01:04:36.747705   61070 main.go:141] libmachine: (no-preload-674057) Ensuring networks are active...
	I0924 01:04:36.748694   61070 main.go:141] libmachine: (no-preload-674057) Ensuring network default is active
	I0924 01:04:36.749079   61070 main.go:141] libmachine: (no-preload-674057) Ensuring network mk-no-preload-674057 is active
	I0924 01:04:36.749656   61070 main.go:141] libmachine: (no-preload-674057) Getting domain xml...
	I0924 01:04:36.750535   61070 main.go:141] libmachine: (no-preload-674057) Creating domain...
	I0924 01:04:38.122450   61070 main.go:141] libmachine: (no-preload-674057) Waiting to get IP...
	I0924 01:04:38.123578   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:38.124107   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:38.124173   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:38.124079   63121 retry.go:31] will retry after 227.552582ms: waiting for machine to come up
	I0924 01:04:38.353724   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:38.354145   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:38.354169   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:38.354102   63121 retry.go:31] will retry after 322.483933ms: waiting for machine to come up
	I0924 01:04:38.678600   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:38.679091   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:38.679120   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:38.679041   63121 retry.go:31] will retry after 301.71366ms: waiting for machine to come up
	I0924 01:04:34.875511   61699 addons.go:510] duration metric: took 1.43884954s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0924 01:04:35.671396   61699 node_ready.go:53] node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:38.169131   61699 node_ready.go:53] node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:35.907681   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:38.408396   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:38.201359   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetIP
	I0924 01:04:38.204699   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:38.205122   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:38.205152   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:38.205408   61989 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0924 01:04:38.209456   61989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:04:38.222128   61989 kubeadm.go:883] updating cluster {Name:old-k8s-version-171598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-171598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 01:04:38.222254   61989 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0924 01:04:38.222300   61989 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 01:04:38.276802   61989 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0924 01:04:38.276864   61989 ssh_runner.go:195] Run: which lz4
	I0924 01:04:38.280989   61989 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 01:04:38.285108   61989 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 01:04:38.285138   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0924 01:04:39.903777   61989 crio.go:462] duration metric: took 1.62282331s to copy over tarball
	I0924 01:04:39.903900   61989 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0924 01:04:38.982586   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:38.983239   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:38.983283   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:38.983219   63121 retry.go:31] will retry after 402.217062ms: waiting for machine to come up
	I0924 01:04:39.386903   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:39.387550   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:39.387578   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:39.387483   63121 retry.go:31] will retry after 734.565994ms: waiting for machine to come up
	I0924 01:04:40.123444   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:40.123910   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:40.123940   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:40.123870   63121 retry.go:31] will retry after 704.281941ms: waiting for machine to come up
	I0924 01:04:40.829666   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:40.830217   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:40.830275   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:40.830209   63121 retry.go:31] will retry after 1.068502434s: waiting for machine to come up
	I0924 01:04:41.900192   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:41.900739   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:41.900765   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:41.900691   63121 retry.go:31] will retry after 1.087234201s: waiting for machine to come up
	I0924 01:04:42.989622   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:42.990089   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:42.990117   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:42.990036   63121 retry.go:31] will retry after 1.269273138s: waiting for machine to come up
	I0924 01:04:39.168613   61699 node_ready.go:49] node "default-k8s-diff-port-465341" has status "Ready":"True"
	I0924 01:04:39.168638   61699 node_ready.go:38] duration metric: took 5.504799687s for node "default-k8s-diff-port-465341" to be "Ready" ...
	I0924 01:04:39.168650   61699 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:04:39.175830   61699 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xxdh2" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:39.182016   61699 pod_ready.go:93] pod "coredns-7c65d6cfc9-xxdh2" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:39.182040   61699 pod_ready.go:82] duration metric: took 6.182193ms for pod "coredns-7c65d6cfc9-xxdh2" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:39.182052   61699 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:39.188162   61699 pod_ready.go:93] pod "etcd-default-k8s-diff-port-465341" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:39.188191   61699 pod_ready.go:82] duration metric: took 6.130794ms for pod "etcd-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:39.188201   61699 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:39.196197   61699 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-465341" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:39.196225   61699 pod_ready.go:82] duration metric: took 8.016123ms for pod "kube-apiserver-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:39.196238   61699 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:40.703747   61699 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-465341" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:40.703776   61699 pod_ready.go:82] duration metric: took 1.507528182s for pod "kube-controller-manager-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:40.703791   61699 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nf8mp" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:40.771262   61699 pod_ready.go:93] pod "kube-proxy-nf8mp" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:40.771293   61699 pod_ready.go:82] duration metric: took 67.494606ms for pod "kube-proxy-nf8mp" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:40.771307   61699 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:42.778933   61699 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-465341" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:40.908876   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:43.409650   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:42.944929   61989 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.040984911s)
	I0924 01:04:42.944969   61989 crio.go:469] duration metric: took 3.041152253s to extract the tarball
	I0924 01:04:42.944981   61989 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0924 01:04:42.988315   61989 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 01:04:43.036011   61989 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0924 01:04:43.036045   61989 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0924 01:04:43.036151   61989 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:04:43.036194   61989 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0924 01:04:43.036211   61989 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0924 01:04:43.036281   61989 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 01:04:43.036301   61989 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0924 01:04:43.036344   61989 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0924 01:04:43.036310   61989 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0924 01:04:43.036577   61989 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0924 01:04:43.038440   61989 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0924 01:04:43.038458   61989 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0924 01:04:43.038482   61989 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0924 01:04:43.038502   61989 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0924 01:04:43.038554   61989 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0924 01:04:43.038588   61989 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 01:04:43.038600   61989 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0924 01:04:43.038816   61989 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:04:43.306768   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0924 01:04:43.309660   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0924 01:04:43.312684   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 01:04:43.314551   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0924 01:04:43.317719   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0924 01:04:43.326063   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0924 01:04:43.378736   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0924 01:04:43.405508   61989 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0924 01:04:43.405585   61989 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0924 01:04:43.405648   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:43.452908   61989 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0924 01:04:43.452954   61989 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0924 01:04:43.453006   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:43.471293   61989 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0924 01:04:43.471341   61989 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0924 01:04:43.471347   61989 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0924 01:04:43.471370   61989 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0924 01:04:43.471297   61989 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0924 01:04:43.471406   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:43.471421   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:43.471423   61989 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 01:04:43.471462   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:43.494687   61989 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0924 01:04:43.494735   61989 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0924 01:04:43.494782   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:43.508206   61989 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0924 01:04:43.508253   61989 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0924 01:04:43.508278   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0924 01:04:43.508298   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:43.508363   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0924 01:04:43.508419   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0924 01:04:43.508451   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0924 01:04:43.508487   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 01:04:43.508547   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0924 01:04:43.645995   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0924 01:04:43.646039   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0924 01:04:43.646098   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0924 01:04:43.646152   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0924 01:04:43.646261   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0924 01:04:43.646337   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0924 01:04:43.646413   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 01:04:43.817326   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0924 01:04:43.817416   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0924 01:04:43.817381   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0924 01:04:43.817508   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 01:04:43.817449   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0924 01:04:43.817597   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0924 01:04:43.817686   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0924 01:04:43.972782   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0924 01:04:43.972792   61989 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0924 01:04:43.972869   61989 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0924 01:04:43.972838   61989 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0924 01:04:43.972928   61989 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0924 01:04:43.972944   61989 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0924 01:04:43.973027   61989 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0924 01:04:44.008191   61989 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0924 01:04:44.220628   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:04:44.364297   61989 cache_images.go:92] duration metric: took 1.328227964s to LoadCachedImages
	W0924 01:04:44.364505   61989 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0924 01:04:44.364539   61989 kubeadm.go:934] updating node { 192.168.83.3 8443 v1.20.0 crio true true} ...
	I0924 01:04:44.364681   61989 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-171598 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-171598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 01:04:44.364824   61989 ssh_runner.go:195] Run: crio config
	I0924 01:04:44.423360   61989 cni.go:84] Creating CNI manager for ""
	I0924 01:04:44.423382   61989 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:04:44.423393   61989 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 01:04:44.423412   61989 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.3 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-171598 NodeName:old-k8s-version-171598 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0924 01:04:44.423593   61989 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-171598"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
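The kubeadm config above is rendered as one multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) and, a few lines below, is copied to the node as /var/tmp/minikube/kubeadm.yaml.new. As a minimal illustration only (not minikube code), a Go sketch that checks a rendered file contains all four expected documents could look like this; the file path is taken from the log and the check itself is an assumption about what one might want to verify:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// Illustrative sanity check only: confirm the rendered kubeadm.yaml holds the
	// four config documents shown in the log above. Not minikube code.
	func main() {
		data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new") // path from the log
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		for _, kind := range []string{
			"kind: InitConfiguration",
			"kind: ClusterConfiguration",
			"kind: KubeletConfiguration",
			"kind: KubeProxyConfiguration",
		} {
			if !strings.Contains(string(data), kind) {
				fmt.Printf("missing document: %s\n", kind)
			}
		}
		fmt.Printf("%d YAML documents found\n", len(strings.Split(string(data), "\n---\n")))
	}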
	I0924 01:04:44.423671   61989 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0924 01:04:44.434069   61989 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 01:04:44.434143   61989 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 01:04:44.443807   61989 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (428 bytes)
	I0924 01:04:44.463473   61989 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 01:04:44.480449   61989 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2117 bytes)
	I0924 01:04:44.498520   61989 ssh_runner.go:195] Run: grep 192.168.83.3	control-plane.minikube.internal$ /etc/hosts
	I0924 01:04:44.503034   61989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.3	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:04:44.516699   61989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:04:44.643090   61989 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 01:04:44.660194   61989 certs.go:68] Setting up /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598 for IP: 192.168.83.3
	I0924 01:04:44.660216   61989 certs.go:194] generating shared ca certs ...
	I0924 01:04:44.660234   61989 certs.go:226] acquiring lock for ca certs: {Name:mk91de42c259e96c17f08dd58c70c00821bd0735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:04:44.660454   61989 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key
	I0924 01:04:44.660542   61989 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key
	I0924 01:04:44.660559   61989 certs.go:256] generating profile certs ...
	I0924 01:04:44.660682   61989 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/client.key
	I0924 01:04:44.660755   61989 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/apiserver.key.577554d3
	I0924 01:04:44.660816   61989 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/proxy-client.key
	I0924 01:04:44.660976   61989 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem (1338 bytes)
	W0924 01:04:44.661014   61989 certs.go:480] ignoring /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793_empty.pem, impossibly tiny 0 bytes
	I0924 01:04:44.661026   61989 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem (1675 bytes)
	I0924 01:04:44.661071   61989 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem (1082 bytes)
	I0924 01:04:44.661104   61989 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem (1123 bytes)
	I0924 01:04:44.661133   61989 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem (1679 bytes)
	I0924 01:04:44.661211   61989 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem (1708 bytes)
	I0924 01:04:44.662130   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 01:04:44.710279   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 01:04:44.736824   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 01:04:44.773120   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0924 01:04:44.801137   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0924 01:04:44.844946   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 01:04:44.880871   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 01:04:44.908630   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0924 01:04:44.947148   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 01:04:44.971925   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem --> /usr/share/ca-certificates/14793.pem (1338 bytes)
	I0924 01:04:45.000519   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /usr/share/ca-certificates/147932.pem (1708 bytes)
	I0924 01:04:45.034167   61989 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 01:04:45.054932   61989 ssh_runner.go:195] Run: openssl version
	I0924 01:04:45.062733   61989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14793.pem && ln -fs /usr/share/ca-certificates/14793.pem /etc/ssl/certs/14793.pem"
	I0924 01:04:45.076993   61989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14793.pem
	I0924 01:04:45.082104   61989 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 23:55 /usr/share/ca-certificates/14793.pem
	I0924 01:04:45.082175   61989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14793.pem
	I0924 01:04:45.088219   61989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14793.pem /etc/ssl/certs/51391683.0"
	I0924 01:04:45.099211   61989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147932.pem && ln -fs /usr/share/ca-certificates/147932.pem /etc/ssl/certs/147932.pem"
	I0924 01:04:45.111178   61989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147932.pem
	I0924 01:04:45.116551   61989 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 23:55 /usr/share/ca-certificates/147932.pem
	I0924 01:04:45.116624   61989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147932.pem
	I0924 01:04:45.122353   61989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147932.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 01:04:45.133490   61989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 01:04:45.144123   61989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:45.150437   61989 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:45.150498   61989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:45.157127   61989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 01:04:45.168217   61989 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 01:04:45.172865   61989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 01:04:45.179177   61989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 01:04:45.184987   61989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 01:04:45.190927   61989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 01:04:45.197134   61989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 01:04:45.203170   61989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
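The six `openssl x509 ... -checkend 86400` runs above each check that a control-plane certificate will still be valid 86400 seconds (24 hours) from now; openssl exits non-zero if the certificate expires within that window. A rough Go equivalent of one such check, shown only as an illustration (the certificate path is copied from the log):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// Illustrative equivalent of `openssl x509 -noout -checkend 86400`:
	// exit non-zero if the certificate expires within the next 24 hours.
	func main() {
		raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt") // path from the log
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate will expire within 24h")
			os.Exit(1)
		}
		fmt.Println("certificate valid for at least another 24h")
	}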
	I0924 01:04:45.209550   61989 kubeadm.go:392] StartCluster: {Name:old-k8s-version-171598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-171598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 01:04:45.209721   61989 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 01:04:45.209778   61989 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 01:04:45.247564   61989 cri.go:89] found id: ""
	I0924 01:04:45.247635   61989 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 01:04:45.258171   61989 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 01:04:45.258195   61989 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 01:04:45.258269   61989 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 01:04:45.268247   61989 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 01:04:45.269656   61989 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-171598" does not appear in /home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 01:04:45.270486   61989 kubeconfig.go:62] /home/jenkins/minikube-integration/19696-7623/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-171598" cluster setting kubeconfig missing "old-k8s-version-171598" context setting]
	I0924 01:04:45.271918   61989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/kubeconfig: {Name:mk3b29105ccc2e600682c419fcfd45ce5d22b5fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:04:45.277260   61989 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 01:04:45.287239   61989 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.83.3
	I0924 01:04:45.287271   61989 kubeadm.go:1160] stopping kube-system containers ...
	I0924 01:04:45.287281   61989 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0924 01:04:45.287325   61989 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 01:04:45.327991   61989 cri.go:89] found id: ""
	I0924 01:04:45.328071   61989 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 01:04:45.344693   61989 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 01:04:45.354414   61989 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 01:04:45.354439   61989 kubeadm.go:157] found existing configuration files:
	
	I0924 01:04:45.354499   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 01:04:45.363765   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 01:04:45.363838   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 01:04:45.373569   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 01:04:45.382401   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 01:04:45.382464   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 01:04:45.392710   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 01:04:45.402855   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 01:04:45.402919   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 01:04:45.413651   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 01:04:45.423818   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 01:04:45.423873   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 01:04:45.434138   61989 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 01:04:45.444119   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:45.582409   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:44.261681   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:44.262330   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:44.262360   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:44.262274   63121 retry.go:31] will retry after 1.755704993s: waiting for machine to come up
	I0924 01:04:46.019761   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:46.020213   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:46.020242   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:46.020155   63121 retry.go:31] will retry after 2.038509067s: waiting for machine to come up
	I0924 01:04:48.060649   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:48.061170   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:48.061201   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:48.061122   63121 retry.go:31] will retry after 2.834284151s: waiting for machine to come up
	I0924 01:04:45.021172   61699 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-465341" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:45.021200   61699 pod_ready.go:82] duration metric: took 4.249884358s for pod "kube-scheduler-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:45.021213   61699 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:47.028860   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:45.908530   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:48.407714   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:46.245754   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:46.511218   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:46.608877   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:46.722521   61989 api_server.go:52] waiting for apiserver process to appear ...
	I0924 01:04:46.722607   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:47.222945   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:47.723437   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:48.223704   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:48.723517   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:49.223744   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:49.722691   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:50.222927   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:50.723331   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:50.897541   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:50.898047   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:50.898093   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:50.898018   63121 retry.go:31] will retry after 4.166792416s: waiting for machine to come up
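The `retry.go:31` lines above show libmachine polling for the VM's DHCP lease with a delay that grows on each attempt (roughly 1.75s, 2.0s, 2.8s, 4.2s). The sketch below is only an illustration of that retry-with-growing-backoff pattern, not the actual retry.go implementation:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// Illustrative retry loop with a growing, jittered delay, similar in spirit
	// to the "will retry after ..." messages above. Not the actual retry.go code.
	func retry(attempts int, initial time.Duration, fn func() error) error {
		delay := initial
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			jitter := time.Duration(rand.Int63n(int64(delay) / 2))
			fmt.Printf("will retry after %v: %v\n", delay+jitter, err)
			time.Sleep(delay + jitter)
			delay = delay * 3 / 2 // grow the base delay each attempt
		}
		return err
	}

	func main() {
		err := retry(5, time.Second, func() error {
			return errors.New("waiting for machine to come up") // placeholder condition
		})
		fmt.Println("gave up:", err)
	}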
	I0924 01:04:49.530215   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:52.027812   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:50.907425   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:52.907568   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:54.908623   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:51.223525   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:51.722715   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:52.223281   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:52.723378   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:53.222798   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:53.722883   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:54.223279   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:54.723155   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:55.222994   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:55.723628   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:55.068642   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.069305   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has current primary IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.069330   61070 main.go:141] libmachine: (no-preload-674057) Found IP for machine: 192.168.50.161
	I0924 01:04:55.069339   61070 main.go:141] libmachine: (no-preload-674057) Reserving static IP address...
	I0924 01:04:55.070035   61070 main.go:141] libmachine: (no-preload-674057) Reserved static IP address: 192.168.50.161
	I0924 01:04:55.070065   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "no-preload-674057", mac: "52:54:00:01:7a:1a", ip: "192.168.50.161"} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.070073   61070 main.go:141] libmachine: (no-preload-674057) Waiting for SSH to be available...
	I0924 01:04:55.070090   61070 main.go:141] libmachine: (no-preload-674057) DBG | skip adding static IP to network mk-no-preload-674057 - found existing host DHCP lease matching {name: "no-preload-674057", mac: "52:54:00:01:7a:1a", ip: "192.168.50.161"}
	I0924 01:04:55.070095   61070 main.go:141] libmachine: (no-preload-674057) DBG | Getting to WaitForSSH function...
	I0924 01:04:55.072715   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.073106   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.073140   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.073351   61070 main.go:141] libmachine: (no-preload-674057) DBG | Using SSH client type: external
	I0924 01:04:55.073379   61070 main.go:141] libmachine: (no-preload-674057) DBG | Using SSH private key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/no-preload-674057/id_rsa (-rw-------)
	I0924 01:04:55.073405   61070 main.go:141] libmachine: (no-preload-674057) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.161 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19696-7623/.minikube/machines/no-preload-674057/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 01:04:55.073444   61070 main.go:141] libmachine: (no-preload-674057) DBG | About to run SSH command:
	I0924 01:04:55.073462   61070 main.go:141] libmachine: (no-preload-674057) DBG | exit 0
	I0924 01:04:55.200585   61070 main.go:141] libmachine: (no-preload-674057) DBG | SSH cmd err, output: <nil>: 
	I0924 01:04:55.200980   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetConfigRaw
	I0924 01:04:55.201650   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetIP
	I0924 01:04:55.204919   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.205340   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.205360   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.205638   61070 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/config.json ...
	I0924 01:04:55.205881   61070 machine.go:93] provisionDockerMachine start ...
	I0924 01:04:55.205903   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:04:55.206124   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:55.208572   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.209012   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.209037   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.209218   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:04:55.209499   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:55.209693   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:55.209832   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:04:55.210010   61070 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:55.210249   61070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0924 01:04:55.210263   61070 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 01:04:55.317027   61070 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 01:04:55.317067   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetMachineName
	I0924 01:04:55.317403   61070 buildroot.go:166] provisioning hostname "no-preload-674057"
	I0924 01:04:55.317441   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetMachineName
	I0924 01:04:55.317700   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:55.320886   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.321301   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.321330   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.321443   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:04:55.321643   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:55.321853   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:55.322010   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:04:55.322169   61070 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:55.322343   61070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0924 01:04:55.322360   61070 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-674057 && echo "no-preload-674057" | sudo tee /etc/hostname
	I0924 01:04:55.439098   61070 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-674057
	
	I0924 01:04:55.439134   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:55.441909   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.442212   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.442256   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.442430   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:04:55.442667   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:55.442890   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:55.443078   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:04:55.443301   61070 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:55.443460   61070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0924 01:04:55.443474   61070 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-674057' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-674057/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-674057' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 01:04:55.558172   61070 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 01:04:55.558204   61070 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19696-7623/.minikube CaCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19696-7623/.minikube}
	I0924 01:04:55.558225   61070 buildroot.go:174] setting up certificates
	I0924 01:04:55.558236   61070 provision.go:84] configureAuth start
	I0924 01:04:55.558248   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetMachineName
	I0924 01:04:55.558574   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetIP
	I0924 01:04:55.561503   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.561891   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.561917   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.562089   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:55.564426   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.564800   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.564825   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.564958   61070 provision.go:143] copyHostCerts
	I0924 01:04:55.565009   61070 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem, removing ...
	I0924 01:04:55.565018   61070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0924 01:04:55.565074   61070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem (1082 bytes)
	I0924 01:04:55.565167   61070 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem, removing ...
	I0924 01:04:55.565175   61070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0924 01:04:55.565194   61070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem (1123 bytes)
	I0924 01:04:55.565253   61070 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem, removing ...
	I0924 01:04:55.565263   61070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0924 01:04:55.565285   61070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem (1679 bytes)
	I0924 01:04:55.565372   61070 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem org=jenkins.no-preload-674057 san=[127.0.0.1 192.168.50.161 localhost minikube no-preload-674057]
	I0924 01:04:55.649690   61070 provision.go:177] copyRemoteCerts
	I0924 01:04:55.649750   61070 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 01:04:55.649774   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:55.652790   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.653249   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.653278   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.653567   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:04:55.653772   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:55.653936   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:04:55.654059   61070 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/no-preload-674057/id_rsa Username:docker}
	I0924 01:04:55.738522   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0924 01:04:55.764045   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0924 01:04:55.788225   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0924 01:04:55.811207   61070 provision.go:87] duration metric: took 252.958643ms to configureAuth
	I0924 01:04:55.811233   61070 buildroot.go:189] setting minikube options for container-runtime
	I0924 01:04:55.811415   61070 config.go:182] Loaded profile config "no-preload-674057": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 01:04:55.811503   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:55.814921   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.815366   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.815400   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.815597   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:04:55.815826   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:55.816039   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:55.816212   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:04:55.816496   61070 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:55.816740   61070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0924 01:04:55.816756   61070 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 01:04:56.045600   61070 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 01:04:56.045632   61070 machine.go:96] duration metric: took 839.736907ms to provisionDockerMachine
	I0924 01:04:56.045646   61070 start.go:293] postStartSetup for "no-preload-674057" (driver="kvm2")
	I0924 01:04:56.045660   61070 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 01:04:56.045679   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:04:56.045997   61070 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 01:04:56.046027   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:56.049081   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.049522   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:56.049559   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.049743   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:04:56.049960   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:56.050105   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:04:56.050245   61070 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/no-preload-674057/id_rsa Username:docker}
	I0924 01:04:56.136652   61070 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 01:04:56.140894   61070 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 01:04:56.140920   61070 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/addons for local assets ...
	I0924 01:04:56.140987   61070 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/files for local assets ...
	I0924 01:04:56.141071   61070 filesync.go:149] local asset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> 147932.pem in /etc/ssl/certs
	I0924 01:04:56.141161   61070 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 01:04:56.151170   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /etc/ssl/certs/147932.pem (1708 bytes)
	I0924 01:04:56.179268   61070 start.go:296] duration metric: took 133.605527ms for postStartSetup
	I0924 01:04:56.179318   61070 fix.go:56] duration metric: took 19.461975001s for fixHost
	I0924 01:04:56.179344   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:56.182567   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.182902   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:56.182927   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.183091   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:04:56.183320   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:56.183562   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:56.183720   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:04:56.183865   61070 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:56.184036   61070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0924 01:04:56.184045   61070 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 01:04:56.289079   61070 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727139896.261476318
	
	I0924 01:04:56.289113   61070 fix.go:216] guest clock: 1727139896.261476318
	I0924 01:04:56.289121   61070 fix.go:229] Guest: 2024-09-24 01:04:56.261476318 +0000 UTC Remote: 2024-09-24 01:04:56.17932382 +0000 UTC m=+357.500342999 (delta=82.152498ms)
	I0924 01:04:56.289141   61070 fix.go:200] guest clock delta is within tolerance: 82.152498ms
	I0924 01:04:56.289156   61070 start.go:83] releasing machines lock for "no-preload-674057", held for 19.57184993s
	I0924 01:04:56.289175   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:04:56.289441   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetIP
	I0924 01:04:56.292799   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.293122   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:56.293148   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.293327   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:04:56.293832   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:04:56.293990   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:04:56.294073   61070 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 01:04:56.294108   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:56.294271   61070 ssh_runner.go:195] Run: cat /version.json
	I0924 01:04:56.294299   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:56.296962   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.297113   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.297300   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:56.297325   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.297473   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:56.297504   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.297526   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:04:56.297665   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:04:56.297737   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:56.297858   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:56.297926   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:04:56.297968   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:04:56.298044   61070 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/no-preload-674057/id_rsa Username:docker}
	I0924 01:04:56.298139   61070 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/no-preload-674057/id_rsa Username:docker}
	I0924 01:04:56.373014   61070 ssh_runner.go:195] Run: systemctl --version
	I0924 01:04:56.412487   61070 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 01:04:56.558755   61070 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 01:04:56.565187   61070 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 01:04:56.565245   61070 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 01:04:56.582073   61070 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 01:04:56.582102   61070 start.go:495] detecting cgroup driver to use...
	I0924 01:04:56.582167   61070 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 01:04:56.597553   61070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 01:04:56.612515   61070 docker.go:217] disabling cri-docker service (if available) ...
	I0924 01:04:56.612564   61070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 01:04:56.627596   61070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 01:04:56.641619   61070 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 01:04:56.762636   61070 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 01:04:56.917742   61070 docker.go:233] disabling docker service ...
	I0924 01:04:56.917821   61070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 01:04:56.934585   61070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 01:04:56.949194   61070 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 01:04:57.085465   61070 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 01:04:57.230529   61070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 01:04:57.245369   61070 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 01:04:57.265137   61070 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 01:04:57.265196   61070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:57.276878   61070 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 01:04:57.276936   61070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:57.288934   61070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:57.300690   61070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:57.312392   61070 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 01:04:57.324491   61070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:57.335619   61070 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:57.352868   61070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
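
The sed commands above pin the pause image, switch CRI-O to the "cgroupfs" cgroup driver, set conmon_cgroup, and open unprivileged ports via default_sysctls, all by editing /etc/crio/crio.conf.d/02-crio.conf in place. As a minimal Go sketch of the first two rewrites (illustrative only, assuming the default 02-crio.conf layout; minikube itself shells out to sed over ssh as logged):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// rewriteCrioConf swaps the pause_image and cgroup_manager lines in a CRI-O
// drop-in config, roughly mirroring the sed rewrites in the log above.
func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
	return os.WriteFile(path, data, 0o644)
}

func main() {
	if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.10", "cgroupfs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
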
	I0924 01:04:57.363280   61070 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 01:04:57.372811   61070 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 01:04:57.372866   61070 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 01:04:57.385797   61070 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 01:04:57.395936   61070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:04:57.532086   61070 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 01:04:57.628275   61070 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 01:04:57.628370   61070 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 01:04:57.633679   61070 start.go:563] Will wait 60s for crictl version
	I0924 01:04:57.633761   61070 ssh_runner.go:195] Run: which crictl
	I0924 01:04:57.637574   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 01:04:57.679667   61070 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 01:04:57.679756   61070 ssh_runner.go:195] Run: crio --version
	I0924 01:04:57.707710   61070 ssh_runner.go:195] Run: crio --version
	I0924 01:04:57.738651   61070 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 01:04:57.740120   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetIP
	I0924 01:04:57.743379   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:57.743783   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:57.743814   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:57.744048   61070 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0924 01:04:57.748516   61070 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:04:57.762723   61070 kubeadm.go:883] updating cluster {Name:no-preload-674057 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-674057 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.161 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 01:04:57.762864   61070 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 01:04:57.762906   61070 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 01:04:57.798232   61070 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0924 01:04:57.798260   61070 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0924 01:04:57.798334   61070 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:04:57.798357   61070 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0924 01:04:57.798377   61070 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0924 01:04:57.798340   61070 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0924 01:04:57.798397   61070 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 01:04:57.798381   61070 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0924 01:04:57.798491   61070 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0924 01:04:57.798491   61070 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0924 01:04:57.799811   61070 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:04:57.799819   61070 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0924 01:04:57.799826   61070 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0924 01:04:57.799811   61070 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 01:04:57.799840   61070 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0924 01:04:57.799893   61070 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0924 01:04:57.799902   61070 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0924 01:04:57.799903   61070 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0924 01:04:58.027261   61070 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0924 01:04:58.028437   61070 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0924 01:04:58.051940   61070 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0924 01:04:58.082860   61070 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0924 01:04:58.088073   61070 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0924 01:04:58.088121   61070 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0924 01:04:58.088190   61070 ssh_runner.go:195] Run: which crictl
	I0924 01:04:58.095081   61070 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 01:04:58.098388   61070 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0924 01:04:58.152389   61070 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0924 01:04:58.190893   61070 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0924 01:04:58.190920   61070 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0924 01:04:58.190934   61070 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0924 01:04:58.190944   61070 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0924 01:04:58.190984   61070 ssh_runner.go:195] Run: which crictl
	I0924 01:04:58.191029   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0924 01:04:58.190988   61070 ssh_runner.go:195] Run: which crictl
	I0924 01:04:58.191080   61070 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0924 01:04:58.191109   61070 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 01:04:58.191134   61070 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0924 01:04:58.191144   61070 ssh_runner.go:195] Run: which crictl
	I0924 01:04:58.191157   61070 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0924 01:04:58.191185   61070 ssh_runner.go:195] Run: which crictl
	I0924 01:04:58.219642   61070 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0924 01:04:58.219689   61070 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0924 01:04:58.219703   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 01:04:58.219729   61070 ssh_runner.go:195] Run: which crictl
	I0924 01:04:58.219741   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0924 01:04:58.219745   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0924 01:04:58.250341   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0924 01:04:58.250394   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0924 01:04:58.320188   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0924 01:04:58.320222   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 01:04:58.320308   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0924 01:04:58.320394   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0924 01:04:58.383126   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0924 01:04:58.383327   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0924 01:04:58.453833   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 01:04:58.453918   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0924 01:04:58.453878   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0924 01:04:58.453923   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0924 01:04:58.499994   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0924 01:04:58.500027   61070 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0924 01:04:58.500119   61070 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0924 01:04:58.583372   61070 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0924 01:04:58.583491   61070 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0924 01:04:58.586213   61070 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0924 01:04:58.586281   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0924 01:04:58.586325   61070 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0924 01:04:58.586328   61070 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0924 01:04:58.586405   61070 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0924 01:04:58.616022   61070 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0924 01:04:58.616061   61070 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0924 01:04:58.616082   61070 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0924 01:04:58.616118   61070 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0924 01:04:58.616131   61070 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0924 01:04:58.616180   61070 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0924 01:04:58.616128   61070 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0924 01:04:58.647507   61070 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0924 01:04:58.647576   61070 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0924 01:04:58.647620   61070 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0924 01:04:58.647659   61070 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0924 01:04:54.527399   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:57.028355   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:57.407381   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:59.908596   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:56.222908   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:56.722701   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:57.222762   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:57.722814   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:58.222671   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:58.722746   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:59.222961   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:59.723335   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:00.223393   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:00.722739   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:59.003431   61070 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:05:00.815541   61070 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.199297236s)
	I0924 01:05:00.815566   61070 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (2.167859705s)
	I0924 01:05:00.815579   61070 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0924 01:05:00.815599   61070 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0924 01:05:00.815619   61070 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0924 01:05:00.815625   61070 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.812143064s)
	I0924 01:05:00.815674   61070 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0924 01:05:00.815687   61070 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0924 01:05:00.815710   61070 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:05:00.815750   61070 ssh_runner.go:195] Run: which crictl
	I0924 01:05:02.782328   61070 ssh_runner.go:235] Completed: which crictl: (1.966554191s)
	I0924 01:05:02.782392   61070 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.966688239s)
	I0924 01:05:02.782421   61070 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0924 01:05:02.782445   61070 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0924 01:05:02.782497   61070 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0924 01:05:02.782404   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:04:59.529167   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:01.531324   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:04.028305   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:02.407051   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:04.475255   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:01.222765   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:01.722729   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:02.223407   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:02.722799   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:03.223381   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:03.723427   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:04.223157   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:04.723069   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:05.223400   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:05.723739   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:04.773493   61070 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.990910382s)
	I0924 01:05:04.773540   61070 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.99101415s)
	I0924 01:05:04.773560   61070 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0924 01:05:04.773577   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:05:04.773584   61070 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0924 01:05:04.773615   61070 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0924 01:05:08.061466   61070 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.287832238s)
	I0924 01:05:08.061499   61070 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0924 01:05:08.061510   61070 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.287911454s)
	I0924 01:05:08.061595   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:05:08.061520   61070 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0924 01:05:08.061690   61070 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0924 01:05:06.029255   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:08.527617   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:06.907268   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:08.907464   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:06.223395   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:06.723345   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:07.222965   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:07.722795   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:08.222933   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:08.723687   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:09.223526   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:09.723684   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:10.223275   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:10.723534   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:10.041517   61070 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.979809714s)
	I0924 01:05:10.041549   61070 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0924 01:05:10.041577   61070 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.979956931s)
	I0924 01:05:10.041625   61070 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0924 01:05:10.041582   61070 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0924 01:05:10.041714   61070 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0924 01:05:10.041727   61070 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0924 01:05:12.005649   61070 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.963906504s)
	I0924 01:05:12.005689   61070 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0924 01:05:12.005696   61070 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.963951454s)
	I0924 01:05:12.005720   61070 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0924 01:05:12.005727   61070 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0924 01:05:12.005768   61070 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0924 01:05:12.960728   61070 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0924 01:05:12.960771   61070 cache_images.go:123] Successfully loaded all cached images
	I0924 01:05:12.960778   61070 cache_images.go:92] duration metric: took 15.162496206s to LoadCachedImages
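
The image-loading pass that finishes here follows the same pattern per image: inspect the runtime for the tag, stat the cached tarball under /var/lib/minikube/images (skipping the copy when it already exists), remove the stale tag with crictl, then `sudo podman load -i` the tarball so CRI-O can see it. A minimal sketch of the load step, assuming a local tarball path (minikube itself drives all of this through ssh_runner):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// loadCachedImage verifies the cached image tarball is present and loads it
// into the node's image store with podman, mirroring the per-image step above.
func loadCachedImage(tarball string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("cached image %s not present: %w", tarball, err)
	}
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v: %s", tarball, err, out)
	}
	return nil
}

func main() {
	if err := loadCachedImage("/var/lib/minikube/images/kube-apiserver_v1.31.1"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
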
	I0924 01:05:12.960791   61070 kubeadm.go:934] updating node { 192.168.50.161 8443 v1.31.1 crio true true} ...
	I0924 01:05:12.960931   61070 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-674057 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.161
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-674057 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 01:05:12.961013   61070 ssh_runner.go:195] Run: crio config
	I0924 01:05:13.006511   61070 cni.go:84] Creating CNI manager for ""
	I0924 01:05:13.006535   61070 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:05:13.006551   61070 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 01:05:13.006579   61070 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.161 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-674057 NodeName:no-preload-674057 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.161"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.161 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 01:05:13.006729   61070 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.161
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-674057"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.161
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.161"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 01:05:13.006799   61070 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 01:05:13.017598   61070 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 01:05:13.017672   61070 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 01:05:13.027414   61070 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0924 01:05:13.044688   61070 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 01:05:13.061646   61070 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
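
One detail worth noting in the kubeadm.yaml staged above: the kubelet's cgroupDriver ("cgroupfs") has to agree with the cgroup_manager configured for CRI-O earlier in this run, or pods will fail to start. A small, hypothetical checker (not part of minikube; uses gopkg.in/yaml.v3) that decodes the multi-document file and prints the kubelet setting:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

// Scans the generated kubeadm.yaml and prints the KubeletConfiguration's
// cgroupDriver so it can be compared against CRI-O's cgroup_manager.
func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		if doc["kind"] == "KubeletConfiguration" {
			fmt.Println("kubelet cgroupDriver:", doc["cgroupDriver"])
		}
	}
}
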
	I0924 01:05:13.079552   61070 ssh_runner.go:195] Run: grep 192.168.50.161	control-plane.minikube.internal$ /etc/hosts
	I0924 01:05:13.083172   61070 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.161	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:05:13.095232   61070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:05:13.207184   61070 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 01:05:13.222851   61070 certs.go:68] Setting up /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057 for IP: 192.168.50.161
	I0924 01:05:13.222880   61070 certs.go:194] generating shared ca certs ...
	I0924 01:05:13.222901   61070 certs.go:226] acquiring lock for ca certs: {Name:mk91de42c259e96c17f08dd58c70c00821bd0735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:05:13.223084   61070 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key
	I0924 01:05:13.223184   61070 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key
	I0924 01:05:13.223195   61070 certs.go:256] generating profile certs ...
	I0924 01:05:13.223314   61070 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/client.key
	I0924 01:05:13.223394   61070 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/apiserver.key.8fa8fb95
	I0924 01:05:13.223445   61070 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/proxy-client.key
	I0924 01:05:13.223614   61070 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem (1338 bytes)
	W0924 01:05:13.223654   61070 certs.go:480] ignoring /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793_empty.pem, impossibly tiny 0 bytes
	I0924 01:05:13.223710   61070 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem (1675 bytes)
	I0924 01:05:13.223756   61070 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem (1082 bytes)
	I0924 01:05:13.223785   61070 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem (1123 bytes)
	I0924 01:05:13.223818   61070 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem (1679 bytes)
	I0924 01:05:13.223862   61070 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem (1708 bytes)
	I0924 01:05:13.224549   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 01:05:13.273224   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 01:05:13.311069   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 01:05:13.342314   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0924 01:05:13.369345   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0924 01:05:13.395466   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 01:05:13.424307   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 01:05:13.448531   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0924 01:05:13.472491   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /usr/share/ca-certificates/147932.pem (1708 bytes)
	I0924 01:05:13.496060   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 01:05:13.521182   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem --> /usr/share/ca-certificates/14793.pem (1338 bytes)
	I0924 01:05:13.548194   61070 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 01:05:13.566423   61070 ssh_runner.go:195] Run: openssl version
	I0924 01:05:13.572605   61070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 01:05:13.583991   61070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:05:13.588705   61070 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:05:13.588771   61070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:05:13.594828   61070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 01:05:13.606168   61070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14793.pem && ln -fs /usr/share/ca-certificates/14793.pem /etc/ssl/certs/14793.pem"
	I0924 01:05:13.617723   61070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14793.pem
	I0924 01:05:13.622697   61070 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 23:55 /usr/share/ca-certificates/14793.pem
	I0924 01:05:13.622762   61070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14793.pem
	I0924 01:05:13.628486   61070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14793.pem /etc/ssl/certs/51391683.0"
	I0924 01:05:13.639176   61070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147932.pem && ln -fs /usr/share/ca-certificates/147932.pem /etc/ssl/certs/147932.pem"
	I0924 01:05:13.650161   61070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147932.pem
	I0924 01:05:13.654546   61070 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 23:55 /usr/share/ca-certificates/147932.pem
	I0924 01:05:13.654625   61070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147932.pem
	I0924 01:05:13.660382   61070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147932.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 01:05:13.671487   61070 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 01:05:13.676226   61070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 01:05:13.682591   61070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 01:05:13.688492   61070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 01:05:13.694726   61070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 01:05:13.700432   61070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 01:05:13.706080   61070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
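
The openssl invocations above use `-checkend 86400`, i.e. they only verify that each control-plane certificate remains valid for at least the next 24 hours. The same check expressed in native Go, as a sketch (minikube shells out to openssl as logged):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// certExpiringSoon reports whether the certificate at path expires within the
// next 24h, mirroring `openssl x509 -checkend 86400` from the log above.
func certExpiringSoon(path string) (bool, error) {
	pemBytes, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(24 * time.Hour).After(cert.NotAfter), nil
}

func main() {
	soon, err := certExpiringSoon("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
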
	I0924 01:05:13.712226   61070 kubeadm.go:392] StartCluster: {Name:no-preload-674057 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-674057 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.161 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 01:05:13.712323   61070 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 01:05:13.712421   61070 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 01:05:11.028779   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:13.527996   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:10.908227   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:13.408515   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:11.223272   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:11.723442   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:12.223301   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:12.723151   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:13.223174   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:13.722780   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:14.222777   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:14.722987   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:15.223654   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:15.723449   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:13.757518   61070 cri.go:89] found id: ""
	I0924 01:05:13.757597   61070 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 01:05:13.768318   61070 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 01:05:13.768367   61070 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 01:05:13.768416   61070 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 01:05:13.778918   61070 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 01:05:13.780385   61070 kubeconfig.go:125] found "no-preload-674057" server: "https://192.168.50.161:8443"
	I0924 01:05:13.783392   61070 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 01:05:13.794016   61070 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.161
	I0924 01:05:13.794050   61070 kubeadm.go:1160] stopping kube-system containers ...
	I0924 01:05:13.794085   61070 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0924 01:05:13.794150   61070 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 01:05:13.833511   61070 cri.go:89] found id: ""
	I0924 01:05:13.833596   61070 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 01:05:13.851608   61070 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 01:05:13.861469   61070 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 01:05:13.861510   61070 kubeadm.go:157] found existing configuration files:
	
	I0924 01:05:13.861552   61070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 01:05:13.870700   61070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 01:05:13.870770   61070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 01:05:13.880613   61070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 01:05:13.890336   61070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 01:05:13.890404   61070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 01:05:13.900172   61070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 01:05:13.910408   61070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 01:05:13.910475   61070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 01:05:13.919980   61070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 01:05:13.929398   61070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 01:05:13.929495   61070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 01:05:13.938894   61070 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 01:05:13.948749   61070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:05:14.056463   61070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:05:15.345268   61070 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.288763261s)
	I0924 01:05:15.345317   61070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:05:15.555986   61070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:05:15.626986   61070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:05:15.697665   61070 api_server.go:52] waiting for apiserver process to appear ...
	I0924 01:05:15.697761   61070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:16.198410   61070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:16.698860   61070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:16.715727   61070 api_server.go:72] duration metric: took 1.018058771s to wait for apiserver process to appear ...
	I0924 01:05:16.715756   61070 api_server.go:88] waiting for apiserver healthz status ...
	I0924 01:05:16.715779   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
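
The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` lines (here and throughout the interleaved 61989 run) are a roughly 500ms poll for the kube-apiserver process; once pgrep exits 0, the code moves on to the healthz probe. A minimal sketch of that wait loop, with an assumed 60s deadline:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep until a kube-apiserver process started
// by minikube is found or the deadline expires, mirroring the Run lines above.
func waitForAPIServerProcess(deadline time.Duration) error {
	end := time.Now().Add(deadline)
	for time.Now().Before(end) {
		// pgrep exits 0 when at least one matching process exists.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", deadline)
}

func main() {
	if err := waitForAPIServerProcess(60 * time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
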
	I0924 01:05:15.528157   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:17.528680   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:15.906930   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:17.907223   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:16.223623   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:16.723625   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:17.223541   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:17.722702   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:18.222919   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:18.722982   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:19.222978   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:19.723547   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:20.223112   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:20.723562   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:21.716809   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 01:05:21.716852   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:19.528769   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:22.028695   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:20.406693   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:22.407036   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:24.906735   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:21.223058   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:21.722680   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:22.223693   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:22.722716   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:23.223387   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:23.722910   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:24.223608   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:24.723144   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:25.223442   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:25.723025   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:26.717768   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 01:05:26.717811   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:24.527568   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:26.527806   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:29.028455   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:27.406994   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:29.906590   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:26.222782   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:26.723271   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:27.223163   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:27.723283   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:28.222782   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:28.723174   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:29.222803   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:29.723029   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:30.223679   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:30.723058   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:31.718277   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 01:05:31.718317   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:31.028690   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:33.527675   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:31.906723   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:34.406306   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:31.223465   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:31.723438   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:32.223673   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:32.722674   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:33.223289   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:33.723651   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:34.223014   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:34.723518   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:35.222860   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:35.723642   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:36.718676   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 01:05:36.718716   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:37.146737   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": read tcp 192.168.50.1:59880->192.168.50.161:8443: read: connection reset by peer
	I0924 01:05:37.215865   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:37.216506   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": dial tcp 192.168.50.161:8443: connect: connection refused
	I0924 01:05:37.716052   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:37.716731   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": dial tcp 192.168.50.161:8443: connect: connection refused
	I0924 01:05:38.216296   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:36.028537   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:38.032544   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:36.406928   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:38.407201   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:36.222680   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:36.723015   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:37.222736   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:37.723185   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:38.223070   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:38.723237   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:39.223640   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:39.723622   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:40.222705   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:40.722909   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:43.217518   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 01:05:43.217557   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:40.527577   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:43.027715   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:40.906522   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:42.906906   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:44.907623   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:41.223105   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:41.723166   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:42.223286   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:42.723048   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:43.223278   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:43.723301   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:44.222712   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:44.723191   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:45.223720   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:45.723044   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:48.217915   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 01:05:48.217982   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:45.028780   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:47.028883   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:47.406680   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:49.907776   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:46.223270   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:46.722902   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:05:46.722980   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:05:46.781519   61989 cri.go:89] found id: ""
	I0924 01:05:46.781551   61989 logs.go:276] 0 containers: []
	W0924 01:05:46.781565   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:05:46.781574   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:05:46.781630   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:05:46.815990   61989 cri.go:89] found id: ""
	I0924 01:05:46.816021   61989 logs.go:276] 0 containers: []
	W0924 01:05:46.816030   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:05:46.816035   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:05:46.816082   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:05:46.848951   61989 cri.go:89] found id: ""
	I0924 01:05:46.848980   61989 logs.go:276] 0 containers: []
	W0924 01:05:46.848989   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:05:46.848995   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:05:46.849062   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:05:46.880731   61989 cri.go:89] found id: ""
	I0924 01:05:46.880756   61989 logs.go:276] 0 containers: []
	W0924 01:05:46.880764   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:05:46.880770   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:05:46.880832   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:05:46.915975   61989 cri.go:89] found id: ""
	I0924 01:05:46.916004   61989 logs.go:276] 0 containers: []
	W0924 01:05:46.916014   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:05:46.916036   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:05:46.916105   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:05:46.954124   61989 cri.go:89] found id: ""
	I0924 01:05:46.954154   61989 logs.go:276] 0 containers: []
	W0924 01:05:46.954162   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:05:46.954168   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:05:46.954233   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:05:46.990454   61989 cri.go:89] found id: ""
	I0924 01:05:46.990489   61989 logs.go:276] 0 containers: []
	W0924 01:05:46.990498   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:05:46.990504   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:05:46.990573   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:05:47.024099   61989 cri.go:89] found id: ""
	I0924 01:05:47.024137   61989 logs.go:276] 0 containers: []
	W0924 01:05:47.024150   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:05:47.024161   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:05:47.024176   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:05:47.153050   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:05:47.153076   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:05:47.153109   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:05:47.223472   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:05:47.223511   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:05:47.267699   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:05:47.267729   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:05:47.314741   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:05:47.314773   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:05:49.828972   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:49.842301   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:05:49.842378   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:05:49.874632   61989 cri.go:89] found id: ""
	I0924 01:05:49.874659   61989 logs.go:276] 0 containers: []
	W0924 01:05:49.874669   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:05:49.874676   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:05:49.874734   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:05:49.912500   61989 cri.go:89] found id: ""
	I0924 01:05:49.912524   61989 logs.go:276] 0 containers: []
	W0924 01:05:49.912532   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:05:49.912543   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:05:49.912592   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:05:49.947297   61989 cri.go:89] found id: ""
	I0924 01:05:49.947320   61989 logs.go:276] 0 containers: []
	W0924 01:05:49.947328   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:05:49.947334   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:05:49.947395   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:05:49.983863   61989 cri.go:89] found id: ""
	I0924 01:05:49.983892   61989 logs.go:276] 0 containers: []
	W0924 01:05:49.983905   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:05:49.983915   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:05:49.983977   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:05:50.022997   61989 cri.go:89] found id: ""
	I0924 01:05:50.023031   61989 logs.go:276] 0 containers: []
	W0924 01:05:50.023044   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:05:50.023053   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:05:50.023109   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:05:50.057829   61989 cri.go:89] found id: ""
	I0924 01:05:50.057863   61989 logs.go:276] 0 containers: []
	W0924 01:05:50.057875   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:05:50.057882   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:05:50.057929   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:05:50.114599   61989 cri.go:89] found id: ""
	I0924 01:05:50.114620   61989 logs.go:276] 0 containers: []
	W0924 01:05:50.114628   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:05:50.114633   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:05:50.114677   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:05:50.147294   61989 cri.go:89] found id: ""
	I0924 01:05:50.147326   61989 logs.go:276] 0 containers: []
	W0924 01:05:50.147334   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:05:50.147345   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:05:50.147378   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:05:50.198362   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:05:50.198402   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:05:50.212381   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:05:50.212415   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:05:50.286216   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:05:50.286261   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:05:50.286279   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:05:50.366794   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:05:50.366827   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:05:53.218617   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 01:05:53.218653   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:49.527980   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:52.027425   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:54.027780   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:51.908078   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:54.406891   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:52.908167   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:52.922279   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:05:52.922353   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:05:52.956677   61989 cri.go:89] found id: ""
	I0924 01:05:52.956708   61989 logs.go:276] 0 containers: []
	W0924 01:05:52.956720   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:05:52.956727   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:05:52.956778   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:05:52.990933   61989 cri.go:89] found id: ""
	I0924 01:05:52.990956   61989 logs.go:276] 0 containers: []
	W0924 01:05:52.990964   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:05:52.990970   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:05:52.991019   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:05:53.025729   61989 cri.go:89] found id: ""
	I0924 01:05:53.025758   61989 logs.go:276] 0 containers: []
	W0924 01:05:53.025768   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:05:53.025778   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:05:53.025838   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:05:53.060238   61989 cri.go:89] found id: ""
	I0924 01:05:53.060269   61989 logs.go:276] 0 containers: []
	W0924 01:05:53.060279   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:05:53.060287   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:05:53.060366   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:05:53.094166   61989 cri.go:89] found id: ""
	I0924 01:05:53.094200   61989 logs.go:276] 0 containers: []
	W0924 01:05:53.094212   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:05:53.094220   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:05:53.094289   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:05:53.129857   61989 cri.go:89] found id: ""
	I0924 01:05:53.129884   61989 logs.go:276] 0 containers: []
	W0924 01:05:53.129892   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:05:53.129898   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:05:53.129955   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:05:53.165857   61989 cri.go:89] found id: ""
	I0924 01:05:53.165890   61989 logs.go:276] 0 containers: []
	W0924 01:05:53.165898   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:05:53.165909   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:05:53.165970   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:05:53.203884   61989 cri.go:89] found id: ""
	I0924 01:05:53.203909   61989 logs.go:276] 0 containers: []
	W0924 01:05:53.203917   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:05:53.203926   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:05:53.203937   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:05:53.258001   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:05:53.258035   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:05:53.271584   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:05:53.271620   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:05:53.341791   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:05:53.341811   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:05:53.341824   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:05:53.424126   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:05:53.424170   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:05:55.962067   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:55.977964   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:05:55.978042   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:05:56.277329   61070 api_server.go:279] https://192.168.50.161:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 01:05:56.277366   61070 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 01:05:56.277385   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:56.302576   61070 api_server.go:279] https://192.168.50.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:05:56.302628   61070 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:05:56.715873   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:56.722458   61070 api_server.go:279] https://192.168.50.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:05:56.722487   61070 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:05:57.216714   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:57.224426   61070 api_server.go:279] https://192.168.50.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:05:57.224474   61070 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:05:57.715976   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:57.725067   61070 api_server.go:279] https://192.168.50.161:8443/healthz returned 200:
	ok
	I0924 01:05:57.734749   61070 api_server.go:141] control plane version: v1.31.1
	I0924 01:05:57.734782   61070 api_server.go:131] duration metric: took 41.019017744s to wait for apiserver health ...
	I0924 01:05:57.734793   61070 cni.go:84] Creating CNI manager for ""
	I0924 01:05:57.734801   61070 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:05:57.736798   61070 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 01:05:57.738285   61070 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 01:05:57.750654   61070 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0924 01:05:57.778587   61070 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 01:05:57.804858   61070 system_pods.go:59] 8 kube-system pods found
	I0924 01:05:57.804907   61070 system_pods.go:61] "coredns-7c65d6cfc9-kshwz" [4393c6ec-abd9-42ce-af67-9e8b768bd49b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0924 01:05:57.804917   61070 system_pods.go:61] "etcd-no-preload-674057" [65cf3acb-8ffa-4f83-8ab9-86ddefc5d829] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0924 01:05:57.804932   61070 system_pods.go:61] "kube-apiserver-no-preload-674057" [7d26a065-faa1-4ba2-96b7-6c9b1ccb5386] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0924 01:05:57.804940   61070 system_pods.go:61] "kube-controller-manager-no-preload-674057" [7c5c6602-1749-4f34-bb63-08161baac6db] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0924 01:05:57.804949   61070 system_pods.go:61] "kube-proxy-fgmwc" [a81419dc-54f5-4bdd-ac2d-f3f7c85b8f50] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0924 01:05:57.804955   61070 system_pods.go:61] "kube-scheduler-no-preload-674057" [d02c8d9a-1897-4506-8029-9608f11520de] Running
	I0924 01:05:57.804965   61070 system_pods.go:61] "metrics-server-6867b74b74-7gbnr" [6ffa0eb7-21d8-4741-9eae-ce7bb9604dec] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:05:57.804975   61070 system_pods.go:61] "storage-provisioner" [a7f99914-8945-4614-afef-d553ea932edf] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0924 01:05:57.804984   61070 system_pods.go:74] duration metric: took 26.369156ms to wait for pod list to return data ...
	I0924 01:05:57.804996   61070 node_conditions.go:102] verifying NodePressure condition ...
	I0924 01:05:57.809068   61070 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 01:05:57.809103   61070 node_conditions.go:123] node cpu capacity is 2
	I0924 01:05:57.809119   61070 node_conditions.go:105] duration metric: took 4.115654ms to run NodePressure ...
	I0924 01:05:57.809137   61070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:05:58.173276   61070 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0924 01:05:58.178398   61070 kubeadm.go:739] kubelet initialised
	I0924 01:05:58.178422   61070 kubeadm.go:740] duration metric: took 5.118555ms waiting for restarted kubelet to initialise ...
	I0924 01:05:58.178429   61070 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:05:58.183646   61070 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-kshwz" in "kube-system" namespace to be "Ready" ...
	I0924 01:05:56.029030   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:58.029256   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:56.407889   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:58.907744   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:56.014681   61989 cri.go:89] found id: ""
	I0924 01:05:56.014716   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.014728   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:05:56.014736   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:05:56.014799   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:05:56.062547   61989 cri.go:89] found id: ""
	I0924 01:05:56.062576   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.062587   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:05:56.062606   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:05:56.062665   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:05:56.100938   61989 cri.go:89] found id: ""
	I0924 01:05:56.100960   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.100969   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:05:56.100974   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:05:56.101039   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:05:56.137694   61989 cri.go:89] found id: ""
	I0924 01:05:56.137722   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.137737   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:05:56.137744   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:05:56.137803   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:05:56.174876   61989 cri.go:89] found id: ""
	I0924 01:05:56.174911   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.174923   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:05:56.174931   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:05:56.174990   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:05:56.208870   61989 cri.go:89] found id: ""
	I0924 01:05:56.208895   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.208905   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:05:56.208913   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:05:56.208971   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:05:56.242476   61989 cri.go:89] found id: ""
	I0924 01:05:56.242508   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.242520   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:05:56.242528   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:05:56.242590   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:05:56.276185   61989 cri.go:89] found id: ""
	I0924 01:05:56.276214   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.276255   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:05:56.276267   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:05:56.276284   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:05:56.332755   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:05:56.332792   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:05:56.346279   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:05:56.346312   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:05:56.419725   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:05:56.419751   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:05:56.419766   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:05:56.500173   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:05:56.500208   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:05:59.083761   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:59.097184   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:05:59.097247   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:05:59.131734   61989 cri.go:89] found id: ""
	I0924 01:05:59.131764   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.131775   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:05:59.131782   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:05:59.131842   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:05:59.169402   61989 cri.go:89] found id: ""
	I0924 01:05:59.169429   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.169439   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:05:59.169446   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:05:59.169521   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:05:59.208235   61989 cri.go:89] found id: ""
	I0924 01:05:59.208260   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.208290   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:05:59.208298   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:05:59.208372   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:05:59.242314   61989 cri.go:89] found id: ""
	I0924 01:05:59.242345   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.242358   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:05:59.242367   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:05:59.242433   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:05:59.281300   61989 cri.go:89] found id: ""
	I0924 01:05:59.281327   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.281337   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:05:59.281344   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:05:59.281407   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:05:59.315336   61989 cri.go:89] found id: ""
	I0924 01:05:59.315369   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.315377   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:05:59.315386   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:05:59.315445   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:05:59.347678   61989 cri.go:89] found id: ""
	I0924 01:05:59.347708   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.347718   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:05:59.347726   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:05:59.347786   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:05:59.381296   61989 cri.go:89] found id: ""
	I0924 01:05:59.381328   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.381340   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:05:59.381352   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:05:59.381369   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:05:59.462939   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:05:59.462971   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:05:59.462990   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:05:59.544967   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:05:59.545004   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:05:59.585079   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:05:59.585106   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:05:59.637897   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:05:59.637940   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:00.190924   61070 pod_ready.go:103] pod "coredns-7c65d6cfc9-kshwz" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:02.192627   61070 pod_ready.go:93] pod "coredns-7c65d6cfc9-kshwz" in "kube-system" namespace has status "Ready":"True"
	I0924 01:06:02.192648   61070 pod_ready.go:82] duration metric: took 4.008971718s for pod "coredns-7c65d6cfc9-kshwz" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:02.192658   61070 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:02.198586   61070 pod_ready.go:93] pod "etcd-no-preload-674057" in "kube-system" namespace has status "Ready":"True"
	I0924 01:06:02.198614   61070 pod_ready.go:82] duration metric: took 5.949433ms for pod "etcd-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:02.198627   61070 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:03.205306   61070 pod_ready.go:93] pod "kube-apiserver-no-preload-674057" in "kube-system" namespace has status "Ready":"True"
	I0924 01:06:03.205331   61070 pod_ready.go:82] duration metric: took 1.006696778s for pod "kube-apiserver-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:03.205342   61070 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:00.528770   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:02.529473   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:01.406620   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:03.407024   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:02.153289   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:02.170582   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:02.170679   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:02.216700   61989 cri.go:89] found id: ""
	I0924 01:06:02.216722   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.216730   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:02.216736   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:02.216793   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:02.292664   61989 cri.go:89] found id: ""
	I0924 01:06:02.292695   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.292706   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:02.292714   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:02.292780   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:02.349447   61989 cri.go:89] found id: ""
	I0924 01:06:02.349470   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.349481   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:02.349487   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:02.349557   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:02.390491   61989 cri.go:89] found id: ""
	I0924 01:06:02.390514   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.390535   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:02.390543   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:02.390597   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:02.439330   61989 cri.go:89] found id: ""
	I0924 01:06:02.439355   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.439366   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:02.439373   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:02.439432   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:02.476400   61989 cri.go:89] found id: ""
	I0924 01:06:02.476431   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.476439   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:02.476445   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:02.476501   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:02.511946   61989 cri.go:89] found id: ""
	I0924 01:06:02.511975   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.511983   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:02.511989   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:02.512036   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:02.547526   61989 cri.go:89] found id: ""
	I0924 01:06:02.547554   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.547561   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:02.547570   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:02.547580   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:02.619784   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:02.619805   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:02.619816   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:02.698597   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:02.698636   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:02.741381   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:02.741419   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:02.797965   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:02.798023   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
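The cycle above is minikube's fallback diagnostic pass on the v1.20.0 node: with no kube-apiserver container present, it probes each control-plane component via crictl and then gathers kubelet, dmesg, CRI-O and container-status logs. A minimal sketch of the same sequence for reproducing it by hand on the node (illustrative only; it assumes shell access to the minikube VM and simply replays the commands visible in the log):

	# is any apiserver process running at all?
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# list containers (running or exited) for each control-plane component
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	  echo "== $c =="; sudo crictl ps -a --quiet --name="$c"
	done
	# the logs minikube falls back to while the apiserver is unreachable
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig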
	I0924 01:06:05.312059   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:05.326556   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:05.326614   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:05.360973   61989 cri.go:89] found id: ""
	I0924 01:06:05.360999   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.361011   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:05.361018   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:05.361101   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:05.394720   61989 cri.go:89] found id: ""
	I0924 01:06:05.394750   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.394760   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:05.394767   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:05.394831   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:05.432564   61989 cri.go:89] found id: ""
	I0924 01:06:05.432592   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.432603   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:05.432611   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:05.432673   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:05.465424   61989 cri.go:89] found id: ""
	I0924 01:06:05.465467   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.465478   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:05.465484   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:05.465555   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:05.503656   61989 cri.go:89] found id: ""
	I0924 01:06:05.503684   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.503693   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:05.503699   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:05.503752   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:05.538128   61989 cri.go:89] found id: ""
	I0924 01:06:05.538160   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.538171   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:05.538179   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:05.538248   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:05.571310   61989 cri.go:89] found id: ""
	I0924 01:06:05.571336   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.571346   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:05.571353   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:05.571416   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:05.604038   61989 cri.go:89] found id: ""
	I0924 01:06:05.604062   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.604070   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:05.604079   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:05.604094   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:05.657025   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:05.657068   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:05.671457   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:05.671483   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:05.747671   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:05.747701   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:05.747718   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:05.833248   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:05.833285   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:05.212622   61070 pod_ready.go:103] pod "kube-controller-manager-no-preload-674057" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:07.711612   61070 pod_ready.go:103] pod "kube-controller-manager-no-preload-674057" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:05.028130   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:07.527525   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:05.407057   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:07.407341   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:09.906549   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
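The interleaved pod_ready lines come from the parallel StartStop profiles polling each metrics-server pod's Ready condition during their 4m0s waits. The same check can be reproduced directly with kubectl (a sketch; <profile> stands for the test's context name, which is not shown in this excerpt, and the pod name is taken from the log):

	# one-shot read of the Ready condition
	kubectl --context <profile> -n kube-system get pod metrics-server-6867b74b74-pc28v \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# or block until it flips, mirroring the wait in the log
	kubectl --context <profile> -n kube-system wait pod/metrics-server-6867b74b74-pc28v \
	  --for=condition=Ready --timeout=4m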
	I0924 01:06:08.372029   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:08.386497   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:08.386564   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:08.422998   61989 cri.go:89] found id: ""
	I0924 01:06:08.423029   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.423039   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:08.423047   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:08.423095   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:08.457009   61989 cri.go:89] found id: ""
	I0924 01:06:08.457037   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.457047   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:08.457052   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:08.457104   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:08.489694   61989 cri.go:89] found id: ""
	I0924 01:06:08.489728   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.489740   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:08.489750   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:08.489804   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:08.521819   61989 cri.go:89] found id: ""
	I0924 01:06:08.521845   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.521856   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:08.521864   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:08.521922   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:08.556422   61989 cri.go:89] found id: ""
	I0924 01:06:08.556453   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.556465   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:08.556472   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:08.556567   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:08.593802   61989 cri.go:89] found id: ""
	I0924 01:06:08.593828   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.593836   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:08.593842   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:08.593932   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:08.627569   61989 cri.go:89] found id: ""
	I0924 01:06:08.627592   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.627600   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:08.627605   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:08.627653   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:08.664728   61989 cri.go:89] found id: ""
	I0924 01:06:08.664758   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.664769   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:08.664780   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:08.664794   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:08.703546   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:08.703577   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:08.755612   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:08.755649   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:08.769957   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:08.769989   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:08.842732   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:08.842762   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:08.842789   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:10.211942   61070 pod_ready.go:93] pod "kube-controller-manager-no-preload-674057" in "kube-system" namespace has status "Ready":"True"
	I0924 01:06:10.211973   61070 pod_ready.go:82] duration metric: took 7.006623705s for pod "kube-controller-manager-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:10.211986   61070 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-fgmwc" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:10.217219   61070 pod_ready.go:93] pod "kube-proxy-fgmwc" in "kube-system" namespace has status "Ready":"True"
	I0924 01:06:10.217247   61070 pod_ready.go:82] duration metric: took 5.254551ms for pod "kube-proxy-fgmwc" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:10.217260   61070 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:10.221959   61070 pod_ready.go:93] pod "kube-scheduler-no-preload-674057" in "kube-system" namespace has status "Ready":"True"
	I0924 01:06:10.221983   61070 pod_ready.go:82] duration metric: took 4.71607ms for pod "kube-scheduler-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:10.221996   61070 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:12.227911   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:09.527831   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:11.527917   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:14.028599   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:11.907394   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:14.407242   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:11.427424   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:11.440709   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:11.440773   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:11.475537   61989 cri.go:89] found id: ""
	I0924 01:06:11.475564   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.475572   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:11.475577   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:11.475633   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:11.512231   61989 cri.go:89] found id: ""
	I0924 01:06:11.512276   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.512285   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:11.512292   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:11.512365   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:11.549809   61989 cri.go:89] found id: ""
	I0924 01:06:11.549840   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.549852   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:11.549858   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:11.549924   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:11.587451   61989 cri.go:89] found id: ""
	I0924 01:06:11.587481   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.587493   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:11.587500   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:11.587558   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:11.625109   61989 cri.go:89] found id: ""
	I0924 01:06:11.625135   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.625146   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:11.625154   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:11.625213   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:11.660577   61989 cri.go:89] found id: ""
	I0924 01:06:11.660604   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.660616   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:11.660624   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:11.660683   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:11.703527   61989 cri.go:89] found id: ""
	I0924 01:06:11.703557   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.703569   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:11.703577   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:11.703646   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:11.740766   61989 cri.go:89] found id: ""
	I0924 01:06:11.740798   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.740810   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:11.740820   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:11.740836   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:11.803402   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:11.803448   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:11.819144   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:11.819178   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:11.896152   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:11.896173   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:11.896187   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:11.986284   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:11.986340   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:14.523669   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:14.537923   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:14.537990   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:14.576092   61989 cri.go:89] found id: ""
	I0924 01:06:14.576128   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.576140   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:14.576148   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:14.576213   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:14.611985   61989 cri.go:89] found id: ""
	I0924 01:06:14.612020   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.612032   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:14.612039   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:14.612098   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:14.647640   61989 cri.go:89] found id: ""
	I0924 01:06:14.647667   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.647675   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:14.647682   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:14.647746   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:14.685089   61989 cri.go:89] found id: ""
	I0924 01:06:14.685128   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.685141   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:14.685150   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:14.685217   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:14.718694   61989 cri.go:89] found id: ""
	I0924 01:06:14.718729   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.718738   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:14.718745   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:14.718810   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:14.754874   61989 cri.go:89] found id: ""
	I0924 01:06:14.754916   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.754928   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:14.754936   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:14.754993   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:14.789580   61989 cri.go:89] found id: ""
	I0924 01:06:14.789608   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.789617   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:14.789625   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:14.789677   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:14.823173   61989 cri.go:89] found id: ""
	I0924 01:06:14.823201   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.823213   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:14.823224   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:14.823238   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:14.878398   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:14.878431   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:14.892466   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:14.892502   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:14.965978   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:14.966010   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:14.966065   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:15.050557   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:15.050600   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:14.231644   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:16.728219   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:16.029325   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:18.527156   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:16.907014   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:19.406893   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:17.596915   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:17.609585   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:17.609643   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:17.648275   61989 cri.go:89] found id: ""
	I0924 01:06:17.648305   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.648313   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:17.648319   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:17.648447   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:17.681447   61989 cri.go:89] found id: ""
	I0924 01:06:17.681473   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.681484   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:17.681491   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:17.681552   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:17.719202   61989 cri.go:89] found id: ""
	I0924 01:06:17.719226   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.719234   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:17.719240   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:17.719296   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:17.752601   61989 cri.go:89] found id: ""
	I0924 01:06:17.752629   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.752641   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:17.752649   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:17.752700   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:17.789905   61989 cri.go:89] found id: ""
	I0924 01:06:17.789934   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.789945   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:17.789952   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:17.790015   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:17.824174   61989 cri.go:89] found id: ""
	I0924 01:06:17.824205   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.824217   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:17.824237   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:17.824296   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:17.860647   61989 cri.go:89] found id: ""
	I0924 01:06:17.860674   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.860684   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:17.860691   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:17.860750   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:17.896392   61989 cri.go:89] found id: ""
	I0924 01:06:17.896414   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.896423   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:17.896437   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:17.896450   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:17.949230   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:17.949272   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:17.963125   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:17.963183   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:18.035092   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:18.035117   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:18.035134   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:18.117973   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:18.118011   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:20.657044   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:20.669862   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:20.669936   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:20.704672   61989 cri.go:89] found id: ""
	I0924 01:06:20.704703   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.704714   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:20.704722   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:20.704785   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:20.745777   61989 cri.go:89] found id: ""
	I0924 01:06:20.745801   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.745811   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:20.745818   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:20.745879   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:20.779673   61989 cri.go:89] found id: ""
	I0924 01:06:20.779704   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.779740   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:20.779749   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:20.779809   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:20.815959   61989 cri.go:89] found id: ""
	I0924 01:06:20.815983   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.815992   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:20.815998   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:20.816055   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:20.849203   61989 cri.go:89] found id: ""
	I0924 01:06:20.849232   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.849243   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:20.849251   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:20.849319   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:20.884303   61989 cri.go:89] found id: ""
	I0924 01:06:20.884353   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.884365   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:20.884373   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:20.884436   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:20.921217   61989 cri.go:89] found id: ""
	I0924 01:06:20.921242   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.921249   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:20.921255   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:20.921302   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:20.957555   61989 cri.go:89] found id: ""
	I0924 01:06:20.957590   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.957601   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:20.957613   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:20.957628   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:20.972591   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:20.972630   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 01:06:18.728553   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:20.730046   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:23.228040   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:20.527573   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:22.527695   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:21.406963   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:23.907730   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	W0924 01:06:21.046506   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:21.046532   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:21.046547   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:21.129415   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:21.129453   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:21.168899   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:21.168924   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:23.720925   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:23.736893   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:23.736965   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:23.771874   61989 cri.go:89] found id: ""
	I0924 01:06:23.771901   61989 logs.go:276] 0 containers: []
	W0924 01:06:23.771909   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:23.771915   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:23.771976   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:23.806892   61989 cri.go:89] found id: ""
	I0924 01:06:23.806924   61989 logs.go:276] 0 containers: []
	W0924 01:06:23.806936   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:23.806943   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:23.806999   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:23.843661   61989 cri.go:89] found id: ""
	I0924 01:06:23.843686   61989 logs.go:276] 0 containers: []
	W0924 01:06:23.843694   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:23.843700   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:23.843753   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:23.878979   61989 cri.go:89] found id: ""
	I0924 01:06:23.879007   61989 logs.go:276] 0 containers: []
	W0924 01:06:23.879019   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:23.879027   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:23.879086   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:23.913893   61989 cri.go:89] found id: ""
	I0924 01:06:23.913916   61989 logs.go:276] 0 containers: []
	W0924 01:06:23.913925   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:23.913937   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:23.913982   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:23.947932   61989 cri.go:89] found id: ""
	I0924 01:06:23.947961   61989 logs.go:276] 0 containers: []
	W0924 01:06:23.947972   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:23.947980   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:23.948045   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:23.981366   61989 cri.go:89] found id: ""
	I0924 01:06:23.981391   61989 logs.go:276] 0 containers: []
	W0924 01:06:23.981402   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:23.981409   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:23.981467   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:24.014428   61989 cri.go:89] found id: ""
	I0924 01:06:24.014455   61989 logs.go:276] 0 containers: []
	W0924 01:06:24.014463   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:24.014471   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:24.014485   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:24.029585   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:24.029621   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:24.095926   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:24.095955   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:24.095975   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:24.174594   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:24.174635   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:24.213286   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:24.213311   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:25.229785   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:27.729021   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:25.027783   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:27.030450   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:26.406776   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:28.907135   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:26.764740   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:26.777184   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:26.777279   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:26.812704   61989 cri.go:89] found id: ""
	I0924 01:06:26.812735   61989 logs.go:276] 0 containers: []
	W0924 01:06:26.812746   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:26.812753   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:26.812811   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:26.849867   61989 cri.go:89] found id: ""
	I0924 01:06:26.849895   61989 logs.go:276] 0 containers: []
	W0924 01:06:26.849904   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:26.849909   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:26.849958   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:26.882856   61989 cri.go:89] found id: ""
	I0924 01:06:26.882878   61989 logs.go:276] 0 containers: []
	W0924 01:06:26.882885   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:26.882891   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:26.882936   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:26.921063   61989 cri.go:89] found id: ""
	I0924 01:06:26.921085   61989 logs.go:276] 0 containers: []
	W0924 01:06:26.921094   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:26.921100   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:26.921156   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:26.961154   61989 cri.go:89] found id: ""
	I0924 01:06:26.961182   61989 logs.go:276] 0 containers: []
	W0924 01:06:26.961194   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:26.961200   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:26.961257   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:26.994560   61989 cri.go:89] found id: ""
	I0924 01:06:26.994593   61989 logs.go:276] 0 containers: []
	W0924 01:06:26.994603   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:26.994612   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:26.994673   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:27.027967   61989 cri.go:89] found id: ""
	I0924 01:06:27.028013   61989 logs.go:276] 0 containers: []
	W0924 01:06:27.028026   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:27.028033   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:27.028096   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:27.063099   61989 cri.go:89] found id: ""
	I0924 01:06:27.063130   61989 logs.go:276] 0 containers: []
	W0924 01:06:27.063142   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:27.063153   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:27.063166   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:27.116237   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:27.116279   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:27.130785   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:27.130815   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:27.201931   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:27.201954   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:27.201970   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:27.282182   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:27.282217   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:29.825403   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:29.838890   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:29.838989   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:29.873651   61989 cri.go:89] found id: ""
	I0924 01:06:29.873678   61989 logs.go:276] 0 containers: []
	W0924 01:06:29.873690   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:29.873698   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:29.873758   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:29.909894   61989 cri.go:89] found id: ""
	I0924 01:06:29.909916   61989 logs.go:276] 0 containers: []
	W0924 01:06:29.909923   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:29.909929   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:29.909978   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:29.944850   61989 cri.go:89] found id: ""
	I0924 01:06:29.944878   61989 logs.go:276] 0 containers: []
	W0924 01:06:29.944886   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:29.944892   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:29.944945   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:29.981486   61989 cri.go:89] found id: ""
	I0924 01:06:29.981515   61989 logs.go:276] 0 containers: []
	W0924 01:06:29.981524   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:29.981532   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:29.981592   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:30.015138   61989 cri.go:89] found id: ""
	I0924 01:06:30.015165   61989 logs.go:276] 0 containers: []
	W0924 01:06:30.015176   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:30.015184   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:30.015256   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:30.051777   61989 cri.go:89] found id: ""
	I0924 01:06:30.051814   61989 logs.go:276] 0 containers: []
	W0924 01:06:30.051825   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:30.051834   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:30.051898   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:30.085573   61989 cri.go:89] found id: ""
	I0924 01:06:30.085598   61989 logs.go:276] 0 containers: []
	W0924 01:06:30.085607   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:30.085612   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:30.085661   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:30.122518   61989 cri.go:89] found id: ""
	I0924 01:06:30.122551   61989 logs.go:276] 0 containers: []
	W0924 01:06:30.122561   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:30.122570   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:30.122585   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:30.199075   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:30.199118   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:30.238259   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:30.238293   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:30.292145   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:30.292185   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:30.306404   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:30.306431   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:30.373959   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
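Every describe-nodes attempt in this block fails the same way because no kube-apiserver container ever started, so nothing answers on localhost:8443. A quick manual confirmation could look like the following (a sketch; it assumes ss and curl are available on the node, which the log does not show):

	# confirm no apiserver container exists, running or exited
	sudo crictl ps -a --name=kube-apiserver
	# confirm nothing is listening on the apiserver port
	sudo ss -ltnp | grep 8443 || echo "nothing listening on :8443"
	# once an apiserver is up, its health endpoint answers here
	curl -k https://localhost:8443/healthz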
	I0924 01:06:29.729379   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:32.228691   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:29.527089   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:31.527523   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:34.027357   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:30.907575   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:33.407615   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:32.875041   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:32.888358   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:32.888435   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:32.924466   61989 cri.go:89] found id: ""
	I0924 01:06:32.924499   61989 logs.go:276] 0 containers: []
	W0924 01:06:32.924519   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:32.924528   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:32.924584   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:32.960188   61989 cri.go:89] found id: ""
	I0924 01:06:32.960216   61989 logs.go:276] 0 containers: []
	W0924 01:06:32.960224   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:32.960231   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:32.960282   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:32.997612   61989 cri.go:89] found id: ""
	I0924 01:06:32.997641   61989 logs.go:276] 0 containers: []
	W0924 01:06:32.997649   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:32.997655   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:32.997704   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:33.034282   61989 cri.go:89] found id: ""
	I0924 01:06:33.034310   61989 logs.go:276] 0 containers: []
	W0924 01:06:33.034317   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:33.034325   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:33.034381   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:33.073832   61989 cri.go:89] found id: ""
	I0924 01:06:33.073861   61989 logs.go:276] 0 containers: []
	W0924 01:06:33.073870   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:33.073875   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:33.073959   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:33.107276   61989 cri.go:89] found id: ""
	I0924 01:06:33.107303   61989 logs.go:276] 0 containers: []
	W0924 01:06:33.107314   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:33.107323   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:33.107373   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:33.141062   61989 cri.go:89] found id: ""
	I0924 01:06:33.141091   61989 logs.go:276] 0 containers: []
	W0924 01:06:33.141104   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:33.141112   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:33.141174   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:33.177874   61989 cri.go:89] found id: ""
	I0924 01:06:33.177899   61989 logs.go:276] 0 containers: []
	W0924 01:06:33.177908   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:33.177916   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:33.177927   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:33.228324   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:33.228373   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:33.241324   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:33.241350   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:33.313115   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:33.313139   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:33.313151   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:33.392458   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:33.392512   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:35.932822   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:35.945918   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:35.945987   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:34.727948   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:36.728560   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:36.028536   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:38.527308   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:35.906501   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:37.907165   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:35.984400   61989 cri.go:89] found id: ""
	I0924 01:06:35.984438   61989 logs.go:276] 0 containers: []
	W0924 01:06:35.984448   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:35.984456   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:35.984528   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:36.022208   61989 cri.go:89] found id: ""
	I0924 01:06:36.022235   61989 logs.go:276] 0 containers: []
	W0924 01:06:36.022244   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:36.022252   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:36.022336   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:36.059153   61989 cri.go:89] found id: ""
	I0924 01:06:36.059176   61989 logs.go:276] 0 containers: []
	W0924 01:06:36.059184   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:36.059190   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:36.059247   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:36.094375   61989 cri.go:89] found id: ""
	I0924 01:06:36.094413   61989 logs.go:276] 0 containers: []
	W0924 01:06:36.094425   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:36.094434   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:36.094490   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:36.128662   61989 cri.go:89] found id: ""
	I0924 01:06:36.128691   61989 logs.go:276] 0 containers: []
	W0924 01:06:36.128702   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:36.128710   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:36.128762   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:36.160898   61989 cri.go:89] found id: ""
	I0924 01:06:36.160925   61989 logs.go:276] 0 containers: []
	W0924 01:06:36.160937   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:36.160945   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:36.161010   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:36.194421   61989 cri.go:89] found id: ""
	I0924 01:06:36.194448   61989 logs.go:276] 0 containers: []
	W0924 01:06:36.194460   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:36.194468   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:36.194537   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:36.230448   61989 cri.go:89] found id: ""
	I0924 01:06:36.230477   61989 logs.go:276] 0 containers: []
	W0924 01:06:36.230487   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:36.230498   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:36.230511   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:36.303029   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:36.303053   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:36.303067   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:36.406305   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:36.406338   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:36.444044   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:36.444084   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:36.494829   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:36.494873   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:39.009579   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:39.023867   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:39.023943   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:39.057426   61989 cri.go:89] found id: ""
	I0924 01:06:39.057458   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.057469   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:39.057477   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:39.057539   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:39.091421   61989 cri.go:89] found id: ""
	I0924 01:06:39.091444   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.091453   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:39.091459   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:39.091518   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:39.125407   61989 cri.go:89] found id: ""
	I0924 01:06:39.125437   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.125448   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:39.125455   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:39.125525   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:39.157146   61989 cri.go:89] found id: ""
	I0924 01:06:39.157170   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.157181   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:39.157189   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:39.157248   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:39.189474   61989 cri.go:89] found id: ""
	I0924 01:06:39.189501   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.189511   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:39.189518   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:39.189577   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:39.228034   61989 cri.go:89] found id: ""
	I0924 01:06:39.228063   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.228084   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:39.228099   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:39.228158   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:39.268289   61989 cri.go:89] found id: ""
	I0924 01:06:39.268317   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.268345   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:39.268354   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:39.268431   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:39.304964   61989 cri.go:89] found id: ""
	I0924 01:06:39.304988   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.304996   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:39.305005   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:39.305017   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:39.356193   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:39.356234   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:39.370782   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:39.370807   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:39.442395   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:39.442418   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:39.442429   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:39.518426   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:39.518466   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:38.729606   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:41.228528   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:40.528236   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:43.028285   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:40.407021   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:42.906884   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:44.907822   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:42.059895   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:42.092776   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:42.092837   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:42.128508   61989 cri.go:89] found id: ""
	I0924 01:06:42.128534   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.128555   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:42.128565   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:42.128623   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:42.160961   61989 cri.go:89] found id: ""
	I0924 01:06:42.160989   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.161000   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:42.161008   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:42.161072   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:42.194212   61989 cri.go:89] found id: ""
	I0924 01:06:42.194260   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.194272   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:42.194280   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:42.194342   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:42.229284   61989 cri.go:89] found id: ""
	I0924 01:06:42.229312   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.229323   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:42.229331   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:42.229378   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:42.261952   61989 cri.go:89] found id: ""
	I0924 01:06:42.261986   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.261997   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:42.262010   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:42.262059   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:42.297096   61989 cri.go:89] found id: ""
	I0924 01:06:42.297125   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.297133   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:42.297139   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:42.297185   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:42.333066   61989 cri.go:89] found id: ""
	I0924 01:06:42.333095   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.333106   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:42.333114   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:42.333176   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:42.366798   61989 cri.go:89] found id: ""
	I0924 01:06:42.366829   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.366840   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:42.366852   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:42.366865   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:42.419424   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:42.419466   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:42.433814   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:42.433846   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:42.503817   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:42.503845   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:42.503860   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:42.583249   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:42.583289   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:45.123746   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:45.136292   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:45.136377   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:45.174390   61989 cri.go:89] found id: ""
	I0924 01:06:45.174420   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.174441   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:45.174449   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:45.174539   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:45.212394   61989 cri.go:89] found id: ""
	I0924 01:06:45.212422   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.212433   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:45.212441   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:45.212503   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:45.245831   61989 cri.go:89] found id: ""
	I0924 01:06:45.245853   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.245861   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:45.245867   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:45.245922   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:45.277587   61989 cri.go:89] found id: ""
	I0924 01:06:45.277615   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.277626   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:45.277634   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:45.277692   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:45.309715   61989 cri.go:89] found id: ""
	I0924 01:06:45.309749   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.309760   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:45.309768   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:45.309827   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:45.342799   61989 cri.go:89] found id: ""
	I0924 01:06:45.342831   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.342844   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:45.342853   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:45.342921   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:45.375377   61989 cri.go:89] found id: ""
	I0924 01:06:45.375404   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.375415   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:45.375423   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:45.375484   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:45.415395   61989 cri.go:89] found id: ""
	I0924 01:06:45.415422   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.415432   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:45.415444   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:45.415459   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:45.464381   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:45.464416   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:45.478142   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:45.478168   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:45.551211   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:45.551234   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:45.551244   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:45.635255   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:45.635297   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:43.728645   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:46.227611   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:48.228320   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:45.028650   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:47.528968   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:47.406822   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:49.407790   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:48.173687   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:48.186635   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:48.186710   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:48.219544   61989 cri.go:89] found id: ""
	I0924 01:06:48.219566   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.219574   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:48.219583   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:48.219654   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:48.253594   61989 cri.go:89] found id: ""
	I0924 01:06:48.253618   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.253627   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:48.253634   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:48.253693   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:48.287991   61989 cri.go:89] found id: ""
	I0924 01:06:48.288019   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.288031   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:48.288041   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:48.288100   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:48.320738   61989 cri.go:89] found id: ""
	I0924 01:06:48.320767   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.320779   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:48.320787   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:48.320847   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:48.352197   61989 cri.go:89] found id: ""
	I0924 01:06:48.352225   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.352233   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:48.352243   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:48.352317   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:48.386157   61989 cri.go:89] found id: ""
	I0924 01:06:48.386187   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.386195   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:48.386202   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:48.386250   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:48.422372   61989 cri.go:89] found id: ""
	I0924 01:06:48.422398   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.422407   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:48.422413   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:48.422463   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:48.464007   61989 cri.go:89] found id: ""
	I0924 01:06:48.464032   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.464043   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:48.464054   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:48.464072   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:48.520533   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:48.520570   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:48.594453   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:48.594489   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:48.607309   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:48.607336   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:48.674078   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:48.674102   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:48.674117   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:50.740093   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:53.228567   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:50.028640   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:52.527656   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:51.906378   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:53.906887   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:51.256855   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:51.270305   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:51.270378   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:51.303450   61989 cri.go:89] found id: ""
	I0924 01:06:51.303487   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.303499   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:51.303508   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:51.303564   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:51.336959   61989 cri.go:89] found id: ""
	I0924 01:06:51.336987   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.337003   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:51.337010   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:51.337072   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:51.369210   61989 cri.go:89] found id: ""
	I0924 01:06:51.369239   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.369249   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:51.369260   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:51.369339   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:51.403595   61989 cri.go:89] found id: ""
	I0924 01:06:51.403645   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.403658   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:51.403666   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:51.403723   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:51.445459   61989 cri.go:89] found id: ""
	I0924 01:06:51.445493   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.445503   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:51.445510   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:51.445574   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:51.477615   61989 cri.go:89] found id: ""
	I0924 01:06:51.477642   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.477653   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:51.477660   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:51.477722   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:51.509737   61989 cri.go:89] found id: ""
	I0924 01:06:51.509766   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.509784   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:51.509792   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:51.509856   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:51.546451   61989 cri.go:89] found id: ""
	I0924 01:06:51.546479   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.546489   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:51.546501   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:51.546515   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:51.600277   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:51.600315   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:51.613403   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:51.613434   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:51.691645   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:51.691669   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:51.691688   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:51.772276   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:51.772312   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:54.313491   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:54.328265   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:54.328374   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:54.368091   61989 cri.go:89] found id: ""
	I0924 01:06:54.368117   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.368126   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:54.368131   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:54.368183   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:54.408272   61989 cri.go:89] found id: ""
	I0924 01:06:54.408300   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.408310   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:54.408318   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:54.408409   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:54.460467   61989 cri.go:89] found id: ""
	I0924 01:06:54.460489   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.460499   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:54.460506   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:54.460564   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:54.493310   61989 cri.go:89] found id: ""
	I0924 01:06:54.493334   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.493343   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:54.493349   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:54.493401   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:54.526772   61989 cri.go:89] found id: ""
	I0924 01:06:54.526799   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.526809   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:54.526817   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:54.526880   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:54.562235   61989 cri.go:89] found id: ""
	I0924 01:06:54.562264   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.562274   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:54.562283   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:54.562345   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:54.597755   61989 cri.go:89] found id: ""
	I0924 01:06:54.597784   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.597794   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:54.597803   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:54.597851   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:54.632225   61989 cri.go:89] found id: ""
	I0924 01:06:54.632282   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.632295   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:54.632305   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:54.632321   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:54.683849   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:54.683887   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:54.697395   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:54.697425   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:54.767577   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:54.767598   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:54.767609   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:54.842619   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:54.842655   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:55.728756   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:58.228520   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:54.528783   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:57.028039   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:59.028234   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:55.907673   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:57.907858   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:57.381394   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:57.394078   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:57.394147   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:57.431241   61989 cri.go:89] found id: ""
	I0924 01:06:57.431266   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.431278   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:57.431284   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:57.431352   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:57.468954   61989 cri.go:89] found id: ""
	I0924 01:06:57.468983   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.468994   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:57.469001   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:57.469060   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:57.503518   61989 cri.go:89] found id: ""
	I0924 01:06:57.503550   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.503562   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:57.503570   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:57.503618   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:57.540432   61989 cri.go:89] found id: ""
	I0924 01:06:57.540464   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.540475   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:57.540483   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:57.540548   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:57.574142   61989 cri.go:89] found id: ""
	I0924 01:06:57.574175   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.574187   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:57.574195   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:57.574264   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:57.608505   61989 cri.go:89] found id: ""
	I0924 01:06:57.608528   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.608537   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:57.608543   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:57.608589   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:57.644273   61989 cri.go:89] found id: ""
	I0924 01:06:57.644305   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.644317   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:57.644344   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:57.644409   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:57.682023   61989 cri.go:89] found id: ""
	I0924 01:06:57.682050   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.682060   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:57.682072   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:57.682086   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:57.732537   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:57.732570   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:57.746632   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:57.746663   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:57.813904   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:57.813927   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:57.813947   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:57.891947   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:57.891992   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:00.432035   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:00.444886   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:00.444966   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:00.482653   61989 cri.go:89] found id: ""
	I0924 01:07:00.482683   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.482694   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:00.482702   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:00.482754   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:00.516404   61989 cri.go:89] found id: ""
	I0924 01:07:00.516441   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.516452   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:00.516463   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:00.516527   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:00.552938   61989 cri.go:89] found id: ""
	I0924 01:07:00.552963   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.552971   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:00.552977   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:00.553043   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:00.589143   61989 cri.go:89] found id: ""
	I0924 01:07:00.589170   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.589178   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:00.589184   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:00.589235   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:00.625023   61989 cri.go:89] found id: ""
	I0924 01:07:00.625047   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.625059   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:00.625066   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:00.625127   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:00.662904   61989 cri.go:89] found id: ""
	I0924 01:07:00.662936   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.662948   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:00.662959   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:00.663022   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:00.702892   61989 cri.go:89] found id: ""
	I0924 01:07:00.702921   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.702932   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:00.702938   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:00.702988   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:00.737010   61989 cri.go:89] found id: ""
	I0924 01:07:00.737039   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.737050   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:00.737061   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:00.737075   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:00.788093   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:00.788132   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:00.801354   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:00.801382   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:00.866830   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:00.866862   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:00.866878   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:00.950034   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:00.950076   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:00.728279   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:03.227980   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:01.527849   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:04.027729   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:00.406445   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:02.407048   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:04.907569   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:03.492773   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:03.506158   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:03.506224   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:03.542369   61989 cri.go:89] found id: ""
	I0924 01:07:03.542397   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.542408   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:03.542416   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:03.542473   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:03.575019   61989 cri.go:89] found id: ""
	I0924 01:07:03.575046   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.575055   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:03.575060   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:03.575103   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:03.608576   61989 cri.go:89] found id: ""
	I0924 01:07:03.608603   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.608612   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:03.608619   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:03.608684   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:03.642359   61989 cri.go:89] found id: ""
	I0924 01:07:03.642389   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.642400   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:03.642407   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:03.642463   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:03.678192   61989 cri.go:89] found id: ""
	I0924 01:07:03.678216   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.678223   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:03.678229   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:03.678285   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:03.711773   61989 cri.go:89] found id: ""
	I0924 01:07:03.711795   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.711803   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:03.711809   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:03.711856   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:03.747792   61989 cri.go:89] found id: ""
	I0924 01:07:03.747819   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.747830   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:03.747838   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:03.747901   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:03.783284   61989 cri.go:89] found id: ""
	I0924 01:07:03.783312   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.783320   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:03.783331   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:03.783349   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:03.838704   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:03.838745   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:03.852650   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:03.852675   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:03.922474   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:03.922499   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:03.922511   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:03.997349   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:03.997388   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:05.228357   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:07.228789   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:06.028604   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:08.527156   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:06.908041   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:09.406803   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:06.537182   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:06.549745   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:06.549833   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:06.587879   61989 cri.go:89] found id: ""
	I0924 01:07:06.587910   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.587922   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:06.587930   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:06.587984   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:06.623419   61989 cri.go:89] found id: ""
	I0924 01:07:06.623447   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.623456   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:06.623462   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:06.623542   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:06.659228   61989 cri.go:89] found id: ""
	I0924 01:07:06.659260   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.659272   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:06.659280   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:06.659341   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:06.693300   61989 cri.go:89] found id: ""
	I0924 01:07:06.693330   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.693341   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:06.693349   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:06.693399   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:06.726237   61989 cri.go:89] found id: ""
	I0924 01:07:06.726267   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.726278   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:06.726286   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:06.726342   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:06.760627   61989 cri.go:89] found id: ""
	I0924 01:07:06.760659   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.760670   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:06.760677   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:06.760745   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:06.796029   61989 cri.go:89] found id: ""
	I0924 01:07:06.796062   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.796073   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:06.796081   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:06.796136   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:06.830197   61989 cri.go:89] found id: ""
	I0924 01:07:06.830230   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.830241   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:06.830251   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:06.830265   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:06.869055   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:06.869087   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:06.923840   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:06.923888   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:06.937510   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:06.937549   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:07.011461   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:07.011482   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:07.011496   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:09.591186   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:09.603900   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:09.603970   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:09.639003   61989 cri.go:89] found id: ""
	I0924 01:07:09.639035   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.639046   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:09.639055   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:09.639111   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:09.676494   61989 cri.go:89] found id: ""
	I0924 01:07:09.676528   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.676539   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:09.676547   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:09.676616   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:09.713080   61989 cri.go:89] found id: ""
	I0924 01:07:09.713103   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.713111   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:09.713117   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:09.713174   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:09.748425   61989 cri.go:89] found id: ""
	I0924 01:07:09.748449   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.748458   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:09.748465   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:09.748521   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:09.782526   61989 cri.go:89] found id: ""
	I0924 01:07:09.782559   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.782576   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:09.782584   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:09.782647   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:09.819137   61989 cri.go:89] found id: ""
	I0924 01:07:09.819159   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.819167   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:09.819173   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:09.819256   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:09.852953   61989 cri.go:89] found id: ""
	I0924 01:07:09.852976   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.852984   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:09.852989   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:09.853083   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:09.887254   61989 cri.go:89] found id: ""
	I0924 01:07:09.887282   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.887293   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:09.887304   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:09.887318   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:09.940029   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:09.940069   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:09.954298   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:09.954331   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:10.028926   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:10.028947   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:10.028957   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:10.116722   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:10.116761   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:09.728996   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:12.228342   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:10.527637   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:12.528324   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:11.410452   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:13.906451   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:12.654245   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:12.668635   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:12.668695   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:12.711575   61989 cri.go:89] found id: ""
	I0924 01:07:12.711601   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.711626   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:12.711632   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:12.711682   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:12.746104   61989 cri.go:89] found id: ""
	I0924 01:07:12.746131   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.746141   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:12.746149   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:12.746210   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:12.780229   61989 cri.go:89] found id: ""
	I0924 01:07:12.780260   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.780295   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:12.780303   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:12.780384   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:12.812968   61989 cri.go:89] found id: ""
	I0924 01:07:12.812998   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.813010   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:12.813024   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:12.813090   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:12.844212   61989 cri.go:89] found id: ""
	I0924 01:07:12.844241   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.844253   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:12.844260   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:12.844343   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:12.878662   61989 cri.go:89] found id: ""
	I0924 01:07:12.878690   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.878700   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:12.878707   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:12.878765   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:12.912782   61989 cri.go:89] found id: ""
	I0924 01:07:12.912805   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.912815   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:12.912822   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:12.912883   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:12.945694   61989 cri.go:89] found id: ""
	I0924 01:07:12.945726   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.945736   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:12.945747   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:12.945761   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:12.994841   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:12.994877   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:13.009582   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:13.009624   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:13.081972   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:13.081999   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:13.082017   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:13.162383   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:13.162420   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:15.704586   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:15.717608   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:15.717677   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:15.751794   61989 cri.go:89] found id: ""
	I0924 01:07:15.751829   61989 logs.go:276] 0 containers: []
	W0924 01:07:15.751840   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:15.751848   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:15.751916   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:15.791691   61989 cri.go:89] found id: ""
	I0924 01:07:15.791723   61989 logs.go:276] 0 containers: []
	W0924 01:07:15.791734   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:15.791742   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:15.791805   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:15.827934   61989 cri.go:89] found id: ""
	I0924 01:07:15.827957   61989 logs.go:276] 0 containers: []
	W0924 01:07:15.827965   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:15.827971   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:15.828017   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:15.862489   61989 cri.go:89] found id: ""
	I0924 01:07:15.862518   61989 logs.go:276] 0 containers: []
	W0924 01:07:15.862527   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:15.862532   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:15.862577   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:15.896754   61989 cri.go:89] found id: ""
	I0924 01:07:15.896786   61989 logs.go:276] 0 containers: []
	W0924 01:07:15.896798   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:15.896804   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:15.896857   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:15.934353   61989 cri.go:89] found id: ""
	I0924 01:07:15.934378   61989 logs.go:276] 0 containers: []
	W0924 01:07:15.934386   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:15.934392   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:15.934436   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:15.969204   61989 cri.go:89] found id: ""
	I0924 01:07:15.969237   61989 logs.go:276] 0 containers: []
	W0924 01:07:15.969246   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:15.969251   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:15.969309   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:14.228949   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:16.728382   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:15.027681   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:17.027847   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:15.907872   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:18.407563   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:16.008733   61989 cri.go:89] found id: ""
	I0924 01:07:16.008767   61989 logs.go:276] 0 containers: []
	W0924 01:07:16.008780   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:16.008792   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:16.008807   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:16.046993   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:16.047024   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:16.098768   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:16.098801   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:16.114429   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:16.114472   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:16.187450   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:16.187469   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:16.187489   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:18.767042   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:18.779825   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:18.779899   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:18.815410   61989 cri.go:89] found id: ""
	I0924 01:07:18.815436   61989 logs.go:276] 0 containers: []
	W0924 01:07:18.815447   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:18.815454   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:18.815523   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:18.849837   61989 cri.go:89] found id: ""
	I0924 01:07:18.849862   61989 logs.go:276] 0 containers: []
	W0924 01:07:18.849872   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:18.849880   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:18.849952   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:18.885183   61989 cri.go:89] found id: ""
	I0924 01:07:18.885215   61989 logs.go:276] 0 containers: []
	W0924 01:07:18.885227   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:18.885235   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:18.885314   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:18.922263   61989 cri.go:89] found id: ""
	I0924 01:07:18.922293   61989 logs.go:276] 0 containers: []
	W0924 01:07:18.922305   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:18.922312   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:18.922378   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:18.957235   61989 cri.go:89] found id: ""
	I0924 01:07:18.957263   61989 logs.go:276] 0 containers: []
	W0924 01:07:18.957272   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:18.957278   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:18.957331   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:18.989846   61989 cri.go:89] found id: ""
	I0924 01:07:18.989870   61989 logs.go:276] 0 containers: []
	W0924 01:07:18.989878   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:18.989884   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:18.989931   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:19.027264   61989 cri.go:89] found id: ""
	I0924 01:07:19.027298   61989 logs.go:276] 0 containers: []
	W0924 01:07:19.027308   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:19.027315   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:19.027373   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:19.065902   61989 cri.go:89] found id: ""
	I0924 01:07:19.065925   61989 logs.go:276] 0 containers: []
	W0924 01:07:19.065934   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:19.065944   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:19.065959   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:19.115515   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:19.115550   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:19.129761   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:19.129787   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:19.200299   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:19.200319   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:19.200351   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:19.282308   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:19.282360   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:18.732314   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:21.227773   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:23.228957   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:19.528117   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:22.028965   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:20.906860   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:23.407404   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:21.819442   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:21.834106   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:21.834165   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:21.866953   61989 cri.go:89] found id: ""
	I0924 01:07:21.866988   61989 logs.go:276] 0 containers: []
	W0924 01:07:21.866999   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:21.867008   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:21.867085   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:21.902561   61989 cri.go:89] found id: ""
	I0924 01:07:21.902637   61989 logs.go:276] 0 containers: []
	W0924 01:07:21.902654   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:21.902663   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:21.902729   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:21.936883   61989 cri.go:89] found id: ""
	I0924 01:07:21.936926   61989 logs.go:276] 0 containers: []
	W0924 01:07:21.936937   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:21.936943   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:21.936995   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:21.975375   61989 cri.go:89] found id: ""
	I0924 01:07:21.975402   61989 logs.go:276] 0 containers: []
	W0924 01:07:21.975411   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:21.975417   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:21.975465   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:22.012782   61989 cri.go:89] found id: ""
	I0924 01:07:22.012811   61989 logs.go:276] 0 containers: []
	W0924 01:07:22.012822   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:22.012830   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:22.012890   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:22.049344   61989 cri.go:89] found id: ""
	I0924 01:07:22.049370   61989 logs.go:276] 0 containers: []
	W0924 01:07:22.049379   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:22.049385   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:22.049442   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:22.088187   61989 cri.go:89] found id: ""
	I0924 01:07:22.088219   61989 logs.go:276] 0 containers: []
	W0924 01:07:22.088230   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:22.088239   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:22.088324   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:22.123357   61989 cri.go:89] found id: ""
	I0924 01:07:22.123386   61989 logs.go:276] 0 containers: []
	W0924 01:07:22.123397   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:22.123408   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:22.123423   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:22.176794   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:22.176828   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:22.192550   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:22.192591   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:22.263854   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:22.263881   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:22.263898   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:22.341735   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:22.341778   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:24.879834   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:24.892429   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:24.892504   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:24.926600   61989 cri.go:89] found id: ""
	I0924 01:07:24.926629   61989 logs.go:276] 0 containers: []
	W0924 01:07:24.926636   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:24.926642   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:24.926689   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:24.960370   61989 cri.go:89] found id: ""
	I0924 01:07:24.960399   61989 logs.go:276] 0 containers: []
	W0924 01:07:24.960408   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:24.960415   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:24.960471   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:24.993503   61989 cri.go:89] found id: ""
	I0924 01:07:24.993532   61989 logs.go:276] 0 containers: []
	W0924 01:07:24.993542   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:24.993549   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:24.993611   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:25.028027   61989 cri.go:89] found id: ""
	I0924 01:07:25.028055   61989 logs.go:276] 0 containers: []
	W0924 01:07:25.028065   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:25.028073   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:25.028129   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:25.062947   61989 cri.go:89] found id: ""
	I0924 01:07:25.062981   61989 logs.go:276] 0 containers: []
	W0924 01:07:25.062999   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:25.063009   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:25.063077   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:25.098895   61989 cri.go:89] found id: ""
	I0924 01:07:25.098927   61989 logs.go:276] 0 containers: []
	W0924 01:07:25.098939   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:25.098946   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:25.098996   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:25.132786   61989 cri.go:89] found id: ""
	I0924 01:07:25.132814   61989 logs.go:276] 0 containers: []
	W0924 01:07:25.132824   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:25.132830   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:25.132882   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:25.167603   61989 cri.go:89] found id: ""
	I0924 01:07:25.167634   61989 logs.go:276] 0 containers: []
	W0924 01:07:25.167644   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:25.167656   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:25.167671   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:25.220265   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:25.220303   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:25.234840   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:25.234884   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:25.307459   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:25.307485   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:25.307499   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:25.386496   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:25.386537   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:25.229188   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:27.728978   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:24.531829   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:27.027182   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:29.029000   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:25.907018   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:28.406555   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:27.926064   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:27.939398   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:27.939480   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:27.976184   61989 cri.go:89] found id: ""
	I0924 01:07:27.976215   61989 logs.go:276] 0 containers: []
	W0924 01:07:27.976256   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:27.976265   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:27.976348   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:28.009389   61989 cri.go:89] found id: ""
	I0924 01:07:28.009419   61989 logs.go:276] 0 containers: []
	W0924 01:07:28.009431   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:28.009438   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:28.009501   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:28.045562   61989 cri.go:89] found id: ""
	I0924 01:07:28.045594   61989 logs.go:276] 0 containers: []
	W0924 01:07:28.045605   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:28.045613   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:28.045677   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:28.085318   61989 cri.go:89] found id: ""
	I0924 01:07:28.085345   61989 logs.go:276] 0 containers: []
	W0924 01:07:28.085357   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:28.085364   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:28.085419   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:28.119582   61989 cri.go:89] found id: ""
	I0924 01:07:28.119607   61989 logs.go:276] 0 containers: []
	W0924 01:07:28.119617   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:28.119626   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:28.119690   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:28.151445   61989 cri.go:89] found id: ""
	I0924 01:07:28.151493   61989 logs.go:276] 0 containers: []
	W0924 01:07:28.151505   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:28.151513   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:28.151578   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:28.185966   61989 cri.go:89] found id: ""
	I0924 01:07:28.185997   61989 logs.go:276] 0 containers: []
	W0924 01:07:28.186009   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:28.186016   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:28.186078   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:28.219012   61989 cri.go:89] found id: ""
	I0924 01:07:28.219037   61989 logs.go:276] 0 containers: []
	W0924 01:07:28.219044   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:28.219052   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:28.219089   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:28.272186   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:28.272222   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:28.286346   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:28.286383   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:28.370949   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:28.370975   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:28.370985   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:28.453740   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:28.453775   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:30.229141   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:32.728919   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:31.527080   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:34.028315   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:30.407040   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:32.407075   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:34.407711   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:30.993536   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:31.006297   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:31.006369   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:31.042081   61989 cri.go:89] found id: ""
	I0924 01:07:31.042114   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.042123   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:31.042129   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:31.042185   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:31.077119   61989 cri.go:89] found id: ""
	I0924 01:07:31.077144   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.077153   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:31.077159   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:31.077208   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:31.110148   61989 cri.go:89] found id: ""
	I0924 01:07:31.110179   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.110187   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:31.110193   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:31.110246   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:31.143551   61989 cri.go:89] found id: ""
	I0924 01:07:31.143578   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.143585   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:31.143591   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:31.143638   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:31.177212   61989 cri.go:89] found id: ""
	I0924 01:07:31.177262   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.177272   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:31.177279   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:31.177329   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:31.209290   61989 cri.go:89] found id: ""
	I0924 01:07:31.209321   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.209332   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:31.209340   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:31.209398   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:31.247299   61989 cri.go:89] found id: ""
	I0924 01:07:31.247334   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.247346   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:31.247355   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:31.247419   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:31.285010   61989 cri.go:89] found id: ""
	I0924 01:07:31.285047   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.285060   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:31.285072   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:31.285087   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:31.323819   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:31.323855   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:31.378348   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:31.378388   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:31.393944   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:31.393983   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:31.464940   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:31.464966   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:31.464978   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:34.042144   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:34.055183   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:34.055268   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:34.103044   61989 cri.go:89] found id: ""
	I0924 01:07:34.103075   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.103086   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:34.103094   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:34.103162   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:34.141379   61989 cri.go:89] found id: ""
	I0924 01:07:34.141412   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.141424   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:34.141432   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:34.141493   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:34.179545   61989 cri.go:89] found id: ""
	I0924 01:07:34.179574   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.179582   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:34.179588   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:34.179655   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:34.217683   61989 cri.go:89] found id: ""
	I0924 01:07:34.217719   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.217739   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:34.217748   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:34.217806   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:34.257597   61989 cri.go:89] found id: ""
	I0924 01:07:34.257630   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.257642   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:34.257651   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:34.257723   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:34.295410   61989 cri.go:89] found id: ""
	I0924 01:07:34.295440   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.295452   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:34.295460   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:34.295523   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:34.331309   61989 cri.go:89] found id: ""
	I0924 01:07:34.331340   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.331350   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:34.331358   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:34.331460   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:34.367549   61989 cri.go:89] found id: ""
	I0924 01:07:34.367580   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.367590   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:34.367601   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:34.367615   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:34.421785   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:34.421823   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:34.435162   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:34.435198   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:34.504051   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:34.504073   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:34.504090   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:34.582343   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:34.582384   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
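	The block above is one complete probe cycle: for each control-plane component, minikube runs "sudo crictl ps -a --quiet --name=<component>" over ssh_runner and records whether any container IDs come back. A minimal local sketch of that loop in Go, with the component list taken from the log itself; the helper name and structure are illustrative, not minikube's own code:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// components mirrors the names probed in the log above.
	var components = []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}

	// listContainerIDs runs `crictl ps -a --quiet --name=<name>` and returns the
	// non-empty container IDs it prints, one per line. Hypothetical helper that
	// approximates what cri.go does over ssh_runner.
	func listContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		for _, c := range components {
			ids, err := listContainerIDs(c)
			if err != nil {
				fmt.Printf("probe %q failed: %v\n", c, err)
				continue
			}
			if len(ids) == 0 {
				fmt.Printf("no container found matching %q\n", c)
				continue
			}
			fmt.Printf("%s: %d container(s): %v\n", c, len(ids), ids)
		}
	}

	In the failing run above every probe returns an empty ID list, which is why each component line is followed by the "No container was found matching" warning.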
	I0924 01:07:35.229391   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:37.229522   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:36.527047   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:38.527472   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:36.906974   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:38.907529   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:37.124727   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:37.139374   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:37.139431   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:37.176474   61989 cri.go:89] found id: ""
	I0924 01:07:37.176500   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.176510   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:37.176515   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:37.176560   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:37.209944   61989 cri.go:89] found id: ""
	I0924 01:07:37.209971   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.209983   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:37.209990   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:37.210055   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:37.242894   61989 cri.go:89] found id: ""
	I0924 01:07:37.242923   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.242933   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:37.242941   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:37.242996   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:37.276517   61989 cri.go:89] found id: ""
	I0924 01:07:37.276547   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.276558   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:37.276566   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:37.276626   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:37.310169   61989 cri.go:89] found id: ""
	I0924 01:07:37.310196   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.310207   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:37.310214   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:37.310282   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:37.342992   61989 cri.go:89] found id: ""
	I0924 01:07:37.343019   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.343027   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:37.343035   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:37.343088   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:37.375024   61989 cri.go:89] found id: ""
	I0924 01:07:37.375051   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.375062   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:37.375069   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:37.375137   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:37.409736   61989 cri.go:89] found id: ""
	I0924 01:07:37.409761   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.409768   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:37.409776   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:37.409787   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:37.474744   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:37.474767   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:37.474783   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:37.551479   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:37.551515   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:37.590597   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:37.590632   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:37.642781   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:37.642820   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
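	When every probe comes back empty, the log shows minikube falling back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status output; the describe-nodes step fails with "connection to the server localhost:8443 was refused" because no kube-apiserver container is running yet. A hedged sketch of that gathering step, reusing the exact commands and paths recorded in the log (the wrapper itself is an illustration, not minikube's implementation):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// step pairs a label with the exact shell command seen in the log above.
	type step struct {
		name, cmd string
	}

	// The fallback diagnostics gathered when no containers are found.
	var steps = []step{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"describe nodes", "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
		{"CRI-O", "sudo journalctl -u crio -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}

	func main() {
		for _, s := range steps {
			// Run through bash -c so the pipes and backticks behave as in the log.
			out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
			if err != nil {
				// With nothing listening on localhost:8443, the describe-nodes
				// step exits non-zero ("connection ... refused"), as in the log.
				fmt.Printf("gathering %s failed: %v\n%s\n", s.name, err, out)
				continue
			}
			fmt.Printf("=== %s ===\n%s\n", s.name, out)
		}
	}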
	I0924 01:07:40.156480   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:40.171002   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:40.171079   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:40.207383   61989 cri.go:89] found id: ""
	I0924 01:07:40.207410   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.207418   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:40.207424   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:40.207474   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:40.245535   61989 cri.go:89] found id: ""
	I0924 01:07:40.245560   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.245568   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:40.245574   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:40.245620   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:40.283858   61989 cri.go:89] found id: ""
	I0924 01:07:40.283888   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.283900   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:40.283909   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:40.283982   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:40.320527   61989 cri.go:89] found id: ""
	I0924 01:07:40.320555   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.320566   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:40.320575   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:40.320633   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:40.354364   61989 cri.go:89] found id: ""
	I0924 01:07:40.354390   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.354397   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:40.354403   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:40.354473   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:40.388407   61989 cri.go:89] found id: ""
	I0924 01:07:40.388431   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.388439   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:40.388444   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:40.388512   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:40.423809   61989 cri.go:89] found id: ""
	I0924 01:07:40.423838   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.423847   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:40.423853   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:40.423908   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:40.459160   61989 cri.go:89] found id: ""
	I0924 01:07:40.459188   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.459199   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:40.459210   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:40.459223   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:40.530418   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:40.530456   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:40.551644   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:40.551683   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:40.634564   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:40.634587   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:40.634599   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:40.717897   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:40.717934   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:39.728642   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:41.728725   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:40.528294   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:43.028364   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:41.406835   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:43.907015   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
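	Interleaved with the apiserver wait, three other test runs (PIDs 61070, 61699, 61323) are polling metrics-server pods whose Ready condition stays False. pod_ready.go does this through the Kubernetes client API; a standalone approximation using kubectl's JSONPath output is sketched below (the pod name is copied from the log, the polling count and interval are assumptions):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// podReady reports whether the named pod's Ready condition is "True",
	// using kubectl's JSONPath output instead of client-go (illustrative only).
	func podReady(namespace, pod string) (bool, error) {
		out, err := exec.Command("kubectl", "get", "pod", pod, "-n", namespace,
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		if err != nil {
			return false, err
		}
		return strings.TrimSpace(string(out)) == "True", nil
	}

	func main() {
		const ns, pod = "kube-system", "metrics-server-6867b74b74-7gbnr" // name taken from the log
		for i := 0; i < 10; i++ {
			ready, err := podReady(ns, pod)
			if err != nil {
				fmt.Println("poll failed:", err)
			} else {
				fmt.Printf("pod %q in %q namespace Ready=%v\n", pod, ns, ready)
			}
			time.Sleep(2 * time.Second)
		}
	}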
	I0924 01:07:43.257992   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:43.272134   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:43.272204   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:43.306747   61989 cri.go:89] found id: ""
	I0924 01:07:43.306775   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.306797   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:43.306806   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:43.306923   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:43.342922   61989 cri.go:89] found id: ""
	I0924 01:07:43.342954   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.342963   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:43.342974   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:43.343028   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:43.378666   61989 cri.go:89] found id: ""
	I0924 01:07:43.378694   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.378703   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:43.378709   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:43.378760   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:43.414348   61989 cri.go:89] found id: ""
	I0924 01:07:43.414376   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.414387   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:43.414395   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:43.414457   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:43.447687   61989 cri.go:89] found id: ""
	I0924 01:07:43.447718   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.447728   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:43.447735   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:43.447804   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:43.482166   61989 cri.go:89] found id: ""
	I0924 01:07:43.482195   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.482205   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:43.482211   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:43.482275   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:43.518112   61989 cri.go:89] found id: ""
	I0924 01:07:43.518146   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.518159   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:43.518167   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:43.518231   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:43.553853   61989 cri.go:89] found id: ""
	I0924 01:07:43.553875   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.553883   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:43.553891   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:43.553902   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:43.603410   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:43.603445   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:43.616413   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:43.616438   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:43.685077   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:43.685101   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:43.685113   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:43.760758   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:43.760803   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:43.729237   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:46.228084   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:48.228503   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:45.527095   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:47.529540   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:46.407150   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:48.407253   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:46.300532   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:46.315982   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:46.316050   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:46.356523   61989 cri.go:89] found id: ""
	I0924 01:07:46.356554   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.356565   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:46.356573   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:46.356633   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:46.405399   61989 cri.go:89] found id: ""
	I0924 01:07:46.405429   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.405439   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:46.405447   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:46.405512   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:46.454819   61989 cri.go:89] found id: ""
	I0924 01:07:46.454844   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.454853   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:46.454858   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:46.454918   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:46.499094   61989 cri.go:89] found id: ""
	I0924 01:07:46.499123   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.499134   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:46.499142   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:46.499196   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:46.532976   61989 cri.go:89] found id: ""
	I0924 01:07:46.533006   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.533017   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:46.533025   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:46.533083   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:46.565488   61989 cri.go:89] found id: ""
	I0924 01:07:46.565523   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.565534   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:46.565546   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:46.565610   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:46.598457   61989 cri.go:89] found id: ""
	I0924 01:07:46.598486   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.598496   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:46.598503   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:46.598551   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:46.631892   61989 cri.go:89] found id: ""
	I0924 01:07:46.631920   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.631931   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:46.631941   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:46.631956   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:46.709966   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:46.710013   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:46.749154   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:46.749184   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:46.798192   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:46.798228   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:46.811902   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:46.811951   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:46.885878   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
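	Each cycle in this section opens with "sudo pgrep -xnf kube-apiserver.*minikube.*" and, roughly every three seconds, repeats the container probes and log gathering while the apiserver process is absent. A minimal sketch of that outer wait loop; the timeout value and helper name are assumptions for illustration, not minikube's actual constants:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// apiserverRunning mirrors the pgrep check at the top of each cycle in the log.
	func apiserverRunning() bool {
		// pgrep exits 0 only if a matching process exists.
		err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
		return err == nil
	}

	func main() {
		deadline := time.Now().Add(8 * time.Minute) // illustrative timeout
		for time.Now().Before(deadline) {
			if apiserverRunning() {
				fmt.Println("kube-apiserver process found")
				return
			}
			fmt.Println("kube-apiserver not running yet; gathering diagnostics and retrying")
			time.Sleep(3 * time.Second) // roughly the cadence seen in the log timestamps
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}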
	I0924 01:07:49.386775   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:49.399324   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:49.399383   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:49.437061   61989 cri.go:89] found id: ""
	I0924 01:07:49.437092   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.437104   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:49.437111   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:49.437160   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:49.470882   61989 cri.go:89] found id: ""
	I0924 01:07:49.470908   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.470919   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:49.470927   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:49.470989   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:49.506894   61989 cri.go:89] found id: ""
	I0924 01:07:49.506926   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.506938   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:49.506947   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:49.507018   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:49.540768   61989 cri.go:89] found id: ""
	I0924 01:07:49.540800   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.540813   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:49.540822   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:49.540888   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:49.576486   61989 cri.go:89] found id: ""
	I0924 01:07:49.576515   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.576523   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:49.576530   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:49.576579   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:49.612456   61989 cri.go:89] found id: ""
	I0924 01:07:49.612479   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.612487   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:49.612495   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:49.612542   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:49.646085   61989 cri.go:89] found id: ""
	I0924 01:07:49.646118   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.646127   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:49.646132   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:49.646178   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:49.682538   61989 cri.go:89] found id: ""
	I0924 01:07:49.682565   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.682574   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:49.682583   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:49.682594   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:49.721791   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:49.721817   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:49.774842   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:49.774889   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:49.789082   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:49.789129   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:49.866437   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:49.866464   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:49.866478   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:50.727581   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:52.729391   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:50.027396   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:52.028176   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:50.407654   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:52.908118   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:52.445166   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:52.459060   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:52.459126   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:52.496521   61989 cri.go:89] found id: ""
	I0924 01:07:52.496550   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.496562   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:52.496571   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:52.496652   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:52.533575   61989 cri.go:89] found id: ""
	I0924 01:07:52.533600   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.533608   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:52.533615   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:52.533693   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:52.571666   61989 cri.go:89] found id: ""
	I0924 01:07:52.571693   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.571703   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:52.571710   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:52.571758   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:52.603929   61989 cri.go:89] found id: ""
	I0924 01:07:52.603957   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.603968   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:52.603976   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:52.604034   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:52.635581   61989 cri.go:89] found id: ""
	I0924 01:07:52.635607   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.635614   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:52.635620   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:52.635669   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:52.673865   61989 cri.go:89] found id: ""
	I0924 01:07:52.673889   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.673897   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:52.673903   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:52.673953   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:52.709885   61989 cri.go:89] found id: ""
	I0924 01:07:52.709910   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.709918   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:52.709925   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:52.709986   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:52.746409   61989 cri.go:89] found id: ""
	I0924 01:07:52.746439   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.746450   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:52.746461   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:52.746475   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:52.798020   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:52.798054   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:52.811940   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:52.811967   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:52.888091   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:52.888114   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:52.888129   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:52.968955   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:52.969000   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:55.507204   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:55.520581   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:55.520657   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:55.555772   61989 cri.go:89] found id: ""
	I0924 01:07:55.555809   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.555821   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:55.555828   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:55.555880   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:55.593765   61989 cri.go:89] found id: ""
	I0924 01:07:55.593791   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.593802   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:55.593808   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:55.593866   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:55.630292   61989 cri.go:89] found id: ""
	I0924 01:07:55.630325   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.630337   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:55.630344   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:55.630408   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:55.665703   61989 cri.go:89] found id: ""
	I0924 01:07:55.665730   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.665741   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:55.665748   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:55.665807   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:55.701911   61989 cri.go:89] found id: ""
	I0924 01:07:55.701938   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.701949   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:55.701957   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:55.702020   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:55.734343   61989 cri.go:89] found id: ""
	I0924 01:07:55.734373   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.734385   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:55.734394   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:55.734460   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:55.768606   61989 cri.go:89] found id: ""
	I0924 01:07:55.768633   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.768645   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:55.768653   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:55.768716   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:55.800720   61989 cri.go:89] found id: ""
	I0924 01:07:55.800747   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.800757   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:55.800768   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:55.800782   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:55.851702   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:55.851737   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:55.865657   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:55.865687   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:55.940175   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:55.940197   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:55.940207   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:55.227954   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:57.228969   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:54.528417   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:56.529326   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:59.027653   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:55.407038   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:57.906886   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:56.015832   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:56.015870   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:58.557571   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:58.572208   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:58.572274   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:58.605081   61989 cri.go:89] found id: ""
	I0924 01:07:58.605109   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.605121   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:58.605128   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:58.605185   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:58.641518   61989 cri.go:89] found id: ""
	I0924 01:07:58.641548   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.641559   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:58.641566   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:58.641617   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:58.680623   61989 cri.go:89] found id: ""
	I0924 01:07:58.680653   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.680664   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:58.680675   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:58.680735   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:58.713658   61989 cri.go:89] found id: ""
	I0924 01:07:58.713684   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.713693   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:58.713700   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:58.713754   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:58.746264   61989 cri.go:89] found id: ""
	I0924 01:07:58.746298   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.746307   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:58.746313   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:58.746358   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:58.779812   61989 cri.go:89] found id: ""
	I0924 01:07:58.779846   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.779912   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:58.779924   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:58.779984   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:58.813203   61989 cri.go:89] found id: ""
	I0924 01:07:58.813236   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.813245   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:58.813252   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:58.813303   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:58.845872   61989 cri.go:89] found id: ""
	I0924 01:07:58.845898   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.845906   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:58.845915   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:58.845925   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:58.897480   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:58.897515   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:58.912904   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:58.912936   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:58.982882   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:58.982908   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:58.982921   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:59.058495   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:59.058535   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:59.729215   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:02.228358   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:01.028678   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:03.527682   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:00.407897   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:02.907608   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:04.907717   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:01.596672   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:01.609550   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:01.609625   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:01.648819   61989 cri.go:89] found id: ""
	I0924 01:08:01.648847   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.648857   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:01.648864   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:01.649000   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:01.685419   61989 cri.go:89] found id: ""
	I0924 01:08:01.685450   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.685458   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:01.685464   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:01.685533   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:01.720426   61989 cri.go:89] found id: ""
	I0924 01:08:01.720455   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.720464   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:01.720473   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:01.720537   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:01.755292   61989 cri.go:89] found id: ""
	I0924 01:08:01.755316   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.755324   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:01.755331   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:01.755398   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:01.788673   61989 cri.go:89] found id: ""
	I0924 01:08:01.788703   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.788713   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:01.788721   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:01.788789   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:01.824724   61989 cri.go:89] found id: ""
	I0924 01:08:01.824761   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.824773   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:01.824781   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:01.824838   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:01.858492   61989 cri.go:89] found id: ""
	I0924 01:08:01.858531   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.858542   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:01.858556   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:01.858623   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:01.892135   61989 cri.go:89] found id: ""
	I0924 01:08:01.892167   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.892177   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:01.892192   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:01.892205   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:01.905820   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:01.905849   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:01.977998   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:01.978026   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:01.978039   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:02.060441   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:02.060480   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:02.100029   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:02.100057   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:04.653124   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:04.665726   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:04.665784   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:04.700755   61989 cri.go:89] found id: ""
	I0924 01:08:04.700785   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.700796   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:04.700804   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:04.700858   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:04.736955   61989 cri.go:89] found id: ""
	I0924 01:08:04.736983   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.736992   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:04.736998   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:04.737051   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:04.770940   61989 cri.go:89] found id: ""
	I0924 01:08:04.770969   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.770977   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:04.770983   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:04.771051   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:04.805376   61989 cri.go:89] found id: ""
	I0924 01:08:04.805403   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.805411   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:04.805417   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:04.805471   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:04.840995   61989 cri.go:89] found id: ""
	I0924 01:08:04.841016   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.841024   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:04.841030   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:04.841077   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:04.875418   61989 cri.go:89] found id: ""
	I0924 01:08:04.875449   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.875460   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:04.875468   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:04.875546   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:04.910675   61989 cri.go:89] found id: ""
	I0924 01:08:04.910696   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.910704   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:04.910710   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:04.910764   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:04.945531   61989 cri.go:89] found id: ""
	I0924 01:08:04.945562   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.945570   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:04.945578   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:04.945589   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:04.997696   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:04.997734   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:05.011296   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:05.011329   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:05.087878   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:05.087905   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:05.087919   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:05.164073   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:05.164111   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:04.228985   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:06.734525   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:06.031377   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:08.528160   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:06.908017   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:09.407255   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:07.713496   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:07.726590   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:07.726649   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:07.760050   61989 cri.go:89] found id: ""
	I0924 01:08:07.760081   61989 logs.go:276] 0 containers: []
	W0924 01:08:07.760092   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:07.760100   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:07.760152   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:07.797709   61989 cri.go:89] found id: ""
	I0924 01:08:07.797736   61989 logs.go:276] 0 containers: []
	W0924 01:08:07.797744   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:07.797749   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:07.797803   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:07.836351   61989 cri.go:89] found id: ""
	I0924 01:08:07.836380   61989 logs.go:276] 0 containers: []
	W0924 01:08:07.836391   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:07.836399   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:07.836471   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:07.871133   61989 cri.go:89] found id: ""
	I0924 01:08:07.871159   61989 logs.go:276] 0 containers: []
	W0924 01:08:07.871170   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:07.871178   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:07.871229   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:07.906640   61989 cri.go:89] found id: ""
	I0924 01:08:07.906663   61989 logs.go:276] 0 containers: []
	W0924 01:08:07.906673   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:07.906682   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:07.906741   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:07.940919   61989 cri.go:89] found id: ""
	I0924 01:08:07.940945   61989 logs.go:276] 0 containers: []
	W0924 01:08:07.940953   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:07.940959   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:07.941018   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:07.975533   61989 cri.go:89] found id: ""
	I0924 01:08:07.975562   61989 logs.go:276] 0 containers: []
	W0924 01:08:07.975570   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:07.975576   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:07.975627   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:08.009137   61989 cri.go:89] found id: ""
	I0924 01:08:08.009163   61989 logs.go:276] 0 containers: []
	W0924 01:08:08.009173   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:08.009183   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:08.009196   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:08.065199   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:08.065252   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:08.080159   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:08.080188   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:08.154003   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:08.154025   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:08.154039   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:08.235522   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:08.235561   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:10.774666   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:10.787704   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:10.787775   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:10.822721   61989 cri.go:89] found id: ""
	I0924 01:08:10.822759   61989 logs.go:276] 0 containers: []
	W0924 01:08:10.822770   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:10.822781   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:10.822852   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:10.857113   61989 cri.go:89] found id: ""
	I0924 01:08:10.857138   61989 logs.go:276] 0 containers: []
	W0924 01:08:10.857146   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:10.857152   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:10.857201   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:10.890974   61989 cri.go:89] found id: ""
	I0924 01:08:10.891001   61989 logs.go:276] 0 containers: []
	W0924 01:08:10.891012   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:10.891020   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:10.891086   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:10.929771   61989 cri.go:89] found id: ""
	I0924 01:08:10.929793   61989 logs.go:276] 0 containers: []
	W0924 01:08:10.929800   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:10.929806   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:10.929851   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:10.961988   61989 cri.go:89] found id: ""
	I0924 01:08:10.962015   61989 logs.go:276] 0 containers: []
	W0924 01:08:10.962027   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:10.962035   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:10.962100   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:09.228600   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:11.729142   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:10.528626   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:13.027656   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:11.906981   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:13.907232   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:10.993591   61989 cri.go:89] found id: ""
	I0924 01:08:10.993622   61989 logs.go:276] 0 containers: []
	W0924 01:08:10.993633   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:10.993639   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:10.993691   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:11.032468   61989 cri.go:89] found id: ""
	I0924 01:08:11.032496   61989 logs.go:276] 0 containers: []
	W0924 01:08:11.032506   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:11.032514   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:11.032576   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:11.066900   61989 cri.go:89] found id: ""
	I0924 01:08:11.066924   61989 logs.go:276] 0 containers: []
	W0924 01:08:11.066931   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:11.066939   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:11.066950   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:11.136412   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:11.136443   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:11.136459   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:11.218326   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:11.218361   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:11.260695   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:11.260728   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:11.310102   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:11.310133   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:13.825540   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:13.838208   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:13.838283   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:13.874539   61989 cri.go:89] found id: ""
	I0924 01:08:13.874567   61989 logs.go:276] 0 containers: []
	W0924 01:08:13.874576   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:13.874581   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:13.874628   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:13.911818   61989 cri.go:89] found id: ""
	I0924 01:08:13.911839   61989 logs.go:276] 0 containers: []
	W0924 01:08:13.911846   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:13.911852   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:13.911897   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:13.944766   61989 cri.go:89] found id: ""
	I0924 01:08:13.944789   61989 logs.go:276] 0 containers: []
	W0924 01:08:13.944797   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:13.944802   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:13.944847   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:13.980712   61989 cri.go:89] found id: ""
	I0924 01:08:13.980742   61989 logs.go:276] 0 containers: []
	W0924 01:08:13.980752   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:13.980758   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:13.980817   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:14.016103   61989 cri.go:89] found id: ""
	I0924 01:08:14.016130   61989 logs.go:276] 0 containers: []
	W0924 01:08:14.016138   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:14.016143   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:14.016192   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:14.051884   61989 cri.go:89] found id: ""
	I0924 01:08:14.051929   61989 logs.go:276] 0 containers: []
	W0924 01:08:14.051943   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:14.051954   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:14.052046   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:14.088928   61989 cri.go:89] found id: ""
	I0924 01:08:14.088954   61989 logs.go:276] 0 containers: []
	W0924 01:08:14.088964   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:14.088970   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:14.089020   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:14.123057   61989 cri.go:89] found id: ""
	I0924 01:08:14.123083   61989 logs.go:276] 0 containers: []
	W0924 01:08:14.123091   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:14.123099   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:14.123112   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:14.174249   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:14.174287   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:14.188409   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:14.188442   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:14.258906   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:14.258932   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:14.258942   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:14.340891   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:14.340928   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:14.229459   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:16.728316   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:15.028158   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:17.527615   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:15.907490   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:17.907845   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:19.901512   61323 pod_ready.go:82] duration metric: took 4m0.001092501s for pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace to be "Ready" ...
	E0924 01:08:19.901552   61323 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace to be "Ready" (will not retry!)
	I0924 01:08:19.901576   61323 pod_ready.go:39] duration metric: took 4m10.04955154s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:08:19.901606   61323 kubeadm.go:597] duration metric: took 4m18.184472182s to restartPrimaryControlPlane
	W0924 01:08:19.901701   61323 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0924 01:08:19.901736   61323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0924 01:08:16.877728   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:16.890548   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:16.890617   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:16.924414   61989 cri.go:89] found id: ""
	I0924 01:08:16.924439   61989 logs.go:276] 0 containers: []
	W0924 01:08:16.924451   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:16.924458   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:16.924510   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:16.960295   61989 cri.go:89] found id: ""
	I0924 01:08:16.960323   61989 logs.go:276] 0 containers: []
	W0924 01:08:16.960344   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:16.960352   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:16.960405   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:16.993171   61989 cri.go:89] found id: ""
	I0924 01:08:16.993204   61989 logs.go:276] 0 containers: []
	W0924 01:08:16.993216   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:16.993224   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:16.993287   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:17.028122   61989 cri.go:89] found id: ""
	I0924 01:08:17.028150   61989 logs.go:276] 0 containers: []
	W0924 01:08:17.028160   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:17.028169   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:17.028261   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:17.068401   61989 cri.go:89] found id: ""
	I0924 01:08:17.068440   61989 logs.go:276] 0 containers: []
	W0924 01:08:17.068451   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:17.068458   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:17.068530   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:17.104250   61989 cri.go:89] found id: ""
	I0924 01:08:17.104275   61989 logs.go:276] 0 containers: []
	W0924 01:08:17.104283   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:17.104299   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:17.104370   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:17.139178   61989 cri.go:89] found id: ""
	I0924 01:08:17.139201   61989 logs.go:276] 0 containers: []
	W0924 01:08:17.139209   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:17.139215   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:17.139288   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:17.172677   61989 cri.go:89] found id: ""
	I0924 01:08:17.172703   61989 logs.go:276] 0 containers: []
	W0924 01:08:17.172712   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:17.172727   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:17.172742   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:17.222039   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:17.222082   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:17.235342   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:17.235371   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:17.300313   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:17.300350   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:17.300366   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:17.382465   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:17.382517   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:19.924928   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:19.941406   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:19.941496   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:19.976196   61989 cri.go:89] found id: ""
	I0924 01:08:19.976224   61989 logs.go:276] 0 containers: []
	W0924 01:08:19.976238   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:19.976247   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:19.976314   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:20.019652   61989 cri.go:89] found id: ""
	I0924 01:08:20.019680   61989 logs.go:276] 0 containers: []
	W0924 01:08:20.019692   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:20.019699   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:20.019757   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:20.055098   61989 cri.go:89] found id: ""
	I0924 01:08:20.055123   61989 logs.go:276] 0 containers: []
	W0924 01:08:20.055130   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:20.055135   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:20.055183   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:20.091428   61989 cri.go:89] found id: ""
	I0924 01:08:20.091458   61989 logs.go:276] 0 containers: []
	W0924 01:08:20.091469   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:20.091476   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:20.091532   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:20.123608   61989 cri.go:89] found id: ""
	I0924 01:08:20.123642   61989 logs.go:276] 0 containers: []
	W0924 01:08:20.123653   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:20.123678   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:20.123745   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:20.165885   61989 cri.go:89] found id: ""
	I0924 01:08:20.165913   61989 logs.go:276] 0 containers: []
	W0924 01:08:20.165926   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:20.165934   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:20.165985   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:20.199300   61989 cri.go:89] found id: ""
	I0924 01:08:20.199329   61989 logs.go:276] 0 containers: []
	W0924 01:08:20.199341   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:20.199348   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:20.199415   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:20.237201   61989 cri.go:89] found id: ""
	I0924 01:08:20.237253   61989 logs.go:276] 0 containers: []
	W0924 01:08:20.237262   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:20.237271   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:20.237284   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:20.285008   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:20.285049   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:20.298974   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:20.299014   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:20.385765   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:20.385793   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:20.385807   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:20.460715   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:20.460752   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:19.227947   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:21.228448   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:23.229022   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:19.527785   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:21.528095   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:23.528420   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:23.000163   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:23.014755   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:23.014828   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:23.048877   61989 cri.go:89] found id: ""
	I0924 01:08:23.048909   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.048921   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:23.048979   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:23.049049   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:23.085614   61989 cri.go:89] found id: ""
	I0924 01:08:23.085643   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.085650   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:23.085658   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:23.085718   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:23.122027   61989 cri.go:89] found id: ""
	I0924 01:08:23.122060   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.122071   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:23.122078   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:23.122136   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:23.156838   61989 cri.go:89] found id: ""
	I0924 01:08:23.156868   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.156879   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:23.156887   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:23.156947   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:23.191528   61989 cri.go:89] found id: ""
	I0924 01:08:23.191569   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.191579   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:23.191586   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:23.191651   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:23.227627   61989 cri.go:89] found id: ""
	I0924 01:08:23.227651   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.227659   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:23.227665   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:23.227709   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:23.261937   61989 cri.go:89] found id: ""
	I0924 01:08:23.261968   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.261980   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:23.261988   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:23.262039   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:23.297947   61989 cri.go:89] found id: ""
	I0924 01:08:23.297973   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.297986   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:23.297997   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:23.298009   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:23.337783   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:23.337811   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:23.390767   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:23.390808   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:23.404787   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:23.404814   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:23.478768   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:23.478788   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:23.478801   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:25.728154   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:28.227795   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:25.529710   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:28.028153   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:26.060593   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:26.085071   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:26.085137   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:26.121785   61989 cri.go:89] found id: ""
	I0924 01:08:26.121814   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.121826   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:26.121834   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:26.121900   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:26.167942   61989 cri.go:89] found id: ""
	I0924 01:08:26.167971   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.167980   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:26.167991   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:26.168054   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:26.206461   61989 cri.go:89] found id: ""
	I0924 01:08:26.206496   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.206506   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:26.206513   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:26.206582   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:26.243094   61989 cri.go:89] found id: ""
	I0924 01:08:26.243125   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.243136   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:26.243144   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:26.243206   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:26.279303   61989 cri.go:89] found id: ""
	I0924 01:08:26.279331   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.279341   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:26.279348   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:26.279407   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:26.311840   61989 cri.go:89] found id: ""
	I0924 01:08:26.311869   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.311880   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:26.311888   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:26.311954   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:26.345994   61989 cri.go:89] found id: ""
	I0924 01:08:26.346019   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.346027   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:26.346033   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:26.346082   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:26.380570   61989 cri.go:89] found id: ""
	I0924 01:08:26.380601   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.380610   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:26.380619   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:26.380630   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:26.429958   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:26.429993   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:26.443278   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:26.443312   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:26.516353   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:26.516375   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:26.516390   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:26.603310   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:26.603345   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:29.142531   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:29.156548   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:29.156634   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:29.191351   61989 cri.go:89] found id: ""
	I0924 01:08:29.191378   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.191389   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:29.191396   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:29.191451   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:29.232112   61989 cri.go:89] found id: ""
	I0924 01:08:29.232141   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.232152   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:29.232159   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:29.232214   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:29.266082   61989 cri.go:89] found id: ""
	I0924 01:08:29.266104   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.266112   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:29.266118   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:29.266178   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:29.299777   61989 cri.go:89] found id: ""
	I0924 01:08:29.299802   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.299812   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:29.299817   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:29.299883   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:29.342709   61989 cri.go:89] found id: ""
	I0924 01:08:29.342740   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.342749   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:29.342756   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:29.342816   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:29.381255   61989 cri.go:89] found id: ""
	I0924 01:08:29.381303   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.381312   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:29.381318   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:29.381375   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:29.414998   61989 cri.go:89] found id: ""
	I0924 01:08:29.415028   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.415036   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:29.415043   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:29.415101   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:29.448553   61989 cri.go:89] found id: ""
	I0924 01:08:29.448580   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.448589   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:29.448598   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:29.448608   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:29.534936   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:29.535001   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:29.573554   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:29.573584   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:29.623590   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:29.623626   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:29.636141   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:29.636167   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:29.700591   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:30.228993   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:32.229458   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:30.528150   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:33.029011   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:32.201184   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:32.215034   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:32.215102   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:32.250990   61989 cri.go:89] found id: ""
	I0924 01:08:32.251016   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.251026   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:32.251033   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:32.251104   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:32.284448   61989 cri.go:89] found id: ""
	I0924 01:08:32.284483   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.284494   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:32.284504   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:32.284570   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:32.317979   61989 cri.go:89] found id: ""
	I0924 01:08:32.318004   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.318015   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:32.318022   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:32.318078   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:32.352057   61989 cri.go:89] found id: ""
	I0924 01:08:32.352082   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.352093   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:32.352101   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:32.352163   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:32.385459   61989 cri.go:89] found id: ""
	I0924 01:08:32.385482   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.385490   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:32.385496   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:32.385544   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:32.421189   61989 cri.go:89] found id: ""
	I0924 01:08:32.421217   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.421227   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:32.421235   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:32.421307   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:32.464375   61989 cri.go:89] found id: ""
	I0924 01:08:32.464399   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.464406   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:32.464412   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:32.464457   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:32.512716   61989 cri.go:89] found id: ""
	I0924 01:08:32.512742   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.512753   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:32.512763   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:32.512788   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:32.598271   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:32.598293   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:32.598305   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:32.674197   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:32.674233   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:32.715065   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:32.715092   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:32.767522   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:32.767565   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:35.281678   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:35.296302   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:35.296390   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:35.336341   61989 cri.go:89] found id: ""
	I0924 01:08:35.336370   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.336381   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:35.336397   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:35.336454   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:35.373090   61989 cri.go:89] found id: ""
	I0924 01:08:35.373118   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.373127   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:35.373135   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:35.373201   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:35.413628   61989 cri.go:89] found id: ""
	I0924 01:08:35.413660   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.413668   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:35.413674   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:35.413720   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:35.446564   61989 cri.go:89] found id: ""
	I0924 01:08:35.446592   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.446603   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:35.446610   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:35.446669   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:35.478389   61989 cri.go:89] found id: ""
	I0924 01:08:35.478424   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.478435   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:35.478444   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:35.478515   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:35.513992   61989 cri.go:89] found id: ""
	I0924 01:08:35.514015   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.514023   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:35.514029   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:35.514085   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:35.556442   61989 cri.go:89] found id: ""
	I0924 01:08:35.556471   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.556481   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:35.556489   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:35.556571   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:35.594205   61989 cri.go:89] found id: ""
	I0924 01:08:35.594228   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.594236   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:35.594244   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:35.594254   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:35.637601   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:35.637640   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:35.691674   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:35.691711   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:35.705223   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:35.705261   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:35.784000   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:35.784021   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:35.784036   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:34.729064   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:37.227314   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:35.528382   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:38.028508   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:38.370232   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:38.383287   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:38.383358   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:38.417528   61989 cri.go:89] found id: ""
	I0924 01:08:38.417556   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.417564   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:38.417571   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:38.417619   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:38.459788   61989 cri.go:89] found id: ""
	I0924 01:08:38.459814   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.459821   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:38.459828   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:38.459883   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:38.494017   61989 cri.go:89] found id: ""
	I0924 01:08:38.494050   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.494059   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:38.494065   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:38.494135   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:38.526894   61989 cri.go:89] found id: ""
	I0924 01:08:38.526924   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.526935   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:38.526942   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:38.527000   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:38.563831   61989 cri.go:89] found id: ""
	I0924 01:08:38.563859   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.563876   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:38.563884   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:38.563950   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:38.596066   61989 cri.go:89] found id: ""
	I0924 01:08:38.596095   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.596106   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:38.596114   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:38.596172   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:38.630123   61989 cri.go:89] found id: ""
	I0924 01:08:38.630147   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.630157   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:38.630165   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:38.630223   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:38.664714   61989 cri.go:89] found id: ""
	I0924 01:08:38.664743   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.664754   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:38.664765   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:38.664782   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:38.718770   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:38.718802   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:38.732878   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:38.732906   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:38.806441   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:38.806469   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:38.806485   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:38.884416   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:38.884456   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:39.228048   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:41.228574   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:40.527354   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:42.528592   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:41.423782   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:41.436827   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:41.436899   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:41.468283   61989 cri.go:89] found id: ""
	I0924 01:08:41.468316   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.468342   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:41.468353   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:41.468412   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:41.504348   61989 cri.go:89] found id: ""
	I0924 01:08:41.504380   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.504402   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:41.504410   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:41.504470   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:41.544785   61989 cri.go:89] found id: ""
	I0924 01:08:41.544809   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.544818   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:41.544825   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:41.544883   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:41.582924   61989 cri.go:89] found id: ""
	I0924 01:08:41.582954   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.582965   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:41.582973   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:41.583037   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:41.618220   61989 cri.go:89] found id: ""
	I0924 01:08:41.618243   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.618253   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:41.618260   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:41.618329   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:41.653369   61989 cri.go:89] found id: ""
	I0924 01:08:41.653392   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.653400   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:41.653416   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:41.653477   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:41.687036   61989 cri.go:89] found id: ""
	I0924 01:08:41.687058   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.687069   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:41.687077   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:41.687133   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:41.720701   61989 cri.go:89] found id: ""
	I0924 01:08:41.720732   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.720744   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:41.720756   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:41.720776   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:41.798436   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:41.798486   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:41.842639   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:41.842674   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:41.893053   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:41.893086   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:41.907757   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:41.907784   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:41.973466   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:44.474071   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:44.487057   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:44.487119   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:44.521772   61989 cri.go:89] found id: ""
	I0924 01:08:44.521813   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.521835   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:44.521843   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:44.521905   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:44.554928   61989 cri.go:89] found id: ""
	I0924 01:08:44.554956   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.554966   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:44.554977   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:44.555042   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:44.594246   61989 cri.go:89] found id: ""
	I0924 01:08:44.594279   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.594292   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:44.594298   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:44.594344   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:44.629779   61989 cri.go:89] found id: ""
	I0924 01:08:44.629807   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.629819   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:44.629827   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:44.629884   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:44.671671   61989 cri.go:89] found id: ""
	I0924 01:08:44.671694   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.671701   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:44.671707   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:44.671772   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:44.710875   61989 cri.go:89] found id: ""
	I0924 01:08:44.710910   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.710922   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:44.710931   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:44.711000   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:44.744345   61989 cri.go:89] found id: ""
	I0924 01:08:44.744381   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.744389   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:44.744395   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:44.744442   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:44.780771   61989 cri.go:89] found id: ""
	I0924 01:08:44.780797   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.780804   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:44.780812   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:44.780824   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:44.834902   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:44.834958   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:44.848503   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:44.848540   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:44.923117   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:44.923138   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:44.923150   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:45.003806   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:45.003840   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:46.184585   61323 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.282824063s)
	I0924 01:08:46.184659   61323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 01:08:46.201715   61323 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 01:08:46.215637   61323 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 01:08:46.228701   61323 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 01:08:46.228726   61323 kubeadm.go:157] found existing configuration files:
	
	I0924 01:08:46.228769   61323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 01:08:46.239005   61323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 01:08:46.239065   61323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 01:08:46.250336   61323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 01:08:46.259889   61323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 01:08:46.259961   61323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 01:08:46.271773   61323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 01:08:46.283106   61323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 01:08:46.283175   61323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 01:08:46.293325   61323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 01:08:46.306026   61323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 01:08:46.306111   61323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 01:08:46.318859   61323 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 01:08:46.373819   61323 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0924 01:08:46.373973   61323 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 01:08:46.487006   61323 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 01:08:46.487146   61323 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 01:08:46.487299   61323 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0924 01:08:46.495557   61323 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 01:08:46.497537   61323 out.go:235]   - Generating certificates and keys ...
	I0924 01:08:46.497645   61323 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 01:08:46.497732   61323 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 01:08:46.497853   61323 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 01:08:46.497946   61323 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 01:08:46.498041   61323 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 01:08:46.498116   61323 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 01:08:46.498197   61323 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 01:08:46.498280   61323 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 01:08:46.498389   61323 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 01:08:46.498490   61323 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 01:08:46.498547   61323 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 01:08:46.498623   61323 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 01:08:46.714556   61323 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 01:08:46.815030   61323 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0924 01:08:47.011082   61323 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 01:08:47.227052   61323 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 01:08:47.488776   61323 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 01:08:47.489403   61323 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 01:08:47.491864   61323 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 01:08:43.728646   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:46.234412   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:45.029064   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:45.029109   61699 pod_ready.go:82] duration metric: took 4m0.007887151s for pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace to be "Ready" ...
	E0924 01:08:45.029124   61699 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0924 01:08:45.029133   61699 pod_ready.go:39] duration metric: took 4m5.860472412s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:08:45.029153   61699 api_server.go:52] waiting for apiserver process to appear ...
	I0924 01:08:45.029189   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:45.029267   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:45.084875   61699 cri.go:89] found id: "306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7"
	I0924 01:08:45.084899   61699 cri.go:89] found id: ""
	I0924 01:08:45.084907   61699 logs.go:276] 1 containers: [306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7]
	I0924 01:08:45.084955   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:45.089534   61699 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:45.089601   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:45.133457   61699 cri.go:89] found id: "2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2"
	I0924 01:08:45.133479   61699 cri.go:89] found id: ""
	I0924 01:08:45.133486   61699 logs.go:276] 1 containers: [2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2]
	I0924 01:08:45.133544   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:45.137523   61699 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:45.137586   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:45.173989   61699 cri.go:89] found id: "ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f"
	I0924 01:08:45.174014   61699 cri.go:89] found id: ""
	I0924 01:08:45.174023   61699 logs.go:276] 1 containers: [ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f]
	I0924 01:08:45.174083   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:45.178084   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:45.178168   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:45.215763   61699 cri.go:89] found id: "58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f"
	I0924 01:08:45.215790   61699 cri.go:89] found id: ""
	I0924 01:08:45.215799   61699 logs.go:276] 1 containers: [58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f]
	I0924 01:08:45.215851   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:45.220052   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:45.220137   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:45.258186   61699 cri.go:89] found id: "f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc"
	I0924 01:08:45.258206   61699 cri.go:89] found id: ""
	I0924 01:08:45.258213   61699 logs.go:276] 1 containers: [f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc]
	I0924 01:08:45.258272   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:45.262402   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:45.262481   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:45.299355   61699 cri.go:89] found id: "55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba"
	I0924 01:08:45.299385   61699 cri.go:89] found id: ""
	I0924 01:08:45.299397   61699 logs.go:276] 1 containers: [55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba]
	I0924 01:08:45.299452   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:45.303404   61699 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:45.303505   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:45.341412   61699 cri.go:89] found id: ""
	I0924 01:08:45.341438   61699 logs.go:276] 0 containers: []
	W0924 01:08:45.341446   61699 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:45.341452   61699 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0924 01:08:45.341508   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0924 01:08:45.377419   61699 cri.go:89] found id: "7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47"
	I0924 01:08:45.377450   61699 cri.go:89] found id: "e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559"
	I0924 01:08:45.377457   61699 cri.go:89] found id: ""
	I0924 01:08:45.377471   61699 logs.go:276] 2 containers: [7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47 e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559]
	I0924 01:08:45.377539   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:45.381497   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:45.385182   61699 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:45.385201   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:45.455618   61699 logs.go:123] Gathering logs for coredns [ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f] ...
	I0924 01:08:45.455661   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f"
	I0924 01:08:45.495007   61699 logs.go:123] Gathering logs for kube-proxy [f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc] ...
	I0924 01:08:45.495037   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc"
	I0924 01:08:45.530196   61699 logs.go:123] Gathering logs for kube-controller-manager [55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba] ...
	I0924 01:08:45.530230   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba"
	I0924 01:08:45.581671   61699 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:45.581709   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:46.122674   61699 logs.go:123] Gathering logs for container status ...
	I0924 01:08:46.122717   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:46.169928   61699 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:46.169965   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:46.184617   61699 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:46.184645   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 01:08:46.330482   61699 logs.go:123] Gathering logs for kube-apiserver [306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7] ...
	I0924 01:08:46.330512   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7"
	I0924 01:08:46.382927   61699 logs.go:123] Gathering logs for etcd [2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2] ...
	I0924 01:08:46.382965   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2"
	I0924 01:08:46.441408   61699 logs.go:123] Gathering logs for kube-scheduler [58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f] ...
	I0924 01:08:46.441442   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f"
	I0924 01:08:46.484956   61699 logs.go:123] Gathering logs for storage-provisioner [7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47] ...
	I0924 01:08:46.484985   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47"
	I0924 01:08:46.522559   61699 logs.go:123] Gathering logs for storage-provisioner [e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559] ...
	I0924 01:08:46.522595   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559"
	I0924 01:08:49.064954   61699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:49.086621   61699 api_server.go:72] duration metric: took 4m15.650065328s to wait for apiserver process to appear ...
	I0924 01:08:49.086648   61699 api_server.go:88] waiting for apiserver healthz status ...
	I0924 01:08:49.086695   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:49.086760   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:47.541843   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:47.555428   61989 kubeadm.go:597] duration metric: took 4m2.297219084s to restartPrimaryControlPlane
	W0924 01:08:47.555528   61989 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0924 01:08:47.555560   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0924 01:08:49.123410   61989 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.567825503s)
	I0924 01:08:49.123501   61989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 01:08:49.142686   61989 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 01:08:49.154484   61989 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 01:08:49.166734   61989 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 01:08:49.166759   61989 kubeadm.go:157] found existing configuration files:
	
	I0924 01:08:49.166813   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 01:08:49.178374   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 01:08:49.178517   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 01:08:49.188871   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 01:08:49.200190   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 01:08:49.200258   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 01:08:49.212895   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 01:08:49.225205   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 01:08:49.225276   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 01:08:49.237828   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 01:08:49.249686   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 01:08:49.249751   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 01:08:49.262505   61989 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 01:08:49.338624   61989 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0924 01:08:49.338712   61989 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 01:08:49.509271   61989 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 01:08:49.509489   61989 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 01:08:49.509636   61989 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0924 01:08:49.724434   61989 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 01:08:47.494323   61323 out.go:235]   - Booting up control plane ...
	I0924 01:08:47.494449   61323 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 01:08:47.494527   61323 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 01:08:47.494904   61323 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 01:08:47.511889   61323 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 01:08:47.518272   61323 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 01:08:47.518343   61323 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 01:08:47.654121   61323 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0924 01:08:47.654273   61323 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0924 01:08:48.156008   61323 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.075879ms
	I0924 01:08:48.156089   61323 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0924 01:08:49.726458   61989 out.go:235]   - Generating certificates and keys ...
	I0924 01:08:49.726563   61989 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 01:08:49.726639   61989 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 01:08:49.726737   61989 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 01:08:49.726812   61989 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 01:08:49.727078   61989 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 01:08:49.727375   61989 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 01:08:49.728123   61989 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 01:08:49.729254   61989 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 01:08:49.730178   61989 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 01:08:49.732548   61989 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 01:08:49.732604   61989 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 01:08:49.732676   61989 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 01:08:49.938623   61989 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 01:08:50.774207   61989 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 01:08:51.022535   61989 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 01:08:51.148690   61989 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 01:08:51.168786   61989 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 01:08:51.170070   61989 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 01:08:51.170150   61989 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 01:08:51.342671   61989 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 01:08:48.729168   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:50.729197   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:52.729615   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:53.660805   61323 kubeadm.go:310] [api-check] The API server is healthy after 5.502700892s
	I0924 01:08:53.678006   61323 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0924 01:08:53.693676   61323 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0924 01:08:53.736910   61323 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0924 01:08:53.737186   61323 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-650507 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0924 01:08:53.750738   61323 kubeadm.go:310] [bootstrap-token] Using token: 62empn.zvptxpa69xtioeo1
	I0924 01:08:49.139835   61699 cri.go:89] found id: "306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7"
	I0924 01:08:49.139859   61699 cri.go:89] found id: ""
	I0924 01:08:49.139869   61699 logs.go:276] 1 containers: [306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7]
	I0924 01:08:49.139920   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:49.144770   61699 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:49.144896   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:49.193710   61699 cri.go:89] found id: "2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2"
	I0924 01:08:49.193733   61699 cri.go:89] found id: ""
	I0924 01:08:49.193743   61699 logs.go:276] 1 containers: [2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2]
	I0924 01:08:49.193798   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:49.198090   61699 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:49.198178   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:49.240236   61699 cri.go:89] found id: "ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f"
	I0924 01:08:49.240309   61699 cri.go:89] found id: ""
	I0924 01:08:49.240344   61699 logs.go:276] 1 containers: [ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f]
	I0924 01:08:49.240401   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:49.244573   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:49.244646   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:49.290954   61699 cri.go:89] found id: "58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f"
	I0924 01:08:49.290998   61699 cri.go:89] found id: ""
	I0924 01:08:49.291008   61699 logs.go:276] 1 containers: [58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f]
	I0924 01:08:49.291083   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:49.295602   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:49.295664   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:49.340871   61699 cri.go:89] found id: "f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc"
	I0924 01:08:49.340894   61699 cri.go:89] found id: ""
	I0924 01:08:49.340904   61699 logs.go:276] 1 containers: [f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc]
	I0924 01:08:49.340964   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:49.345362   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:49.345433   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:49.387382   61699 cri.go:89] found id: "55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba"
	I0924 01:08:49.387408   61699 cri.go:89] found id: ""
	I0924 01:08:49.387418   61699 logs.go:276] 1 containers: [55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba]
	I0924 01:08:49.387472   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:49.393388   61699 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:49.393468   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:49.436082   61699 cri.go:89] found id: ""
	I0924 01:08:49.436107   61699 logs.go:276] 0 containers: []
	W0924 01:08:49.436119   61699 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:49.436126   61699 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0924 01:08:49.436187   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0924 01:08:49.490172   61699 cri.go:89] found id: "7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47"
	I0924 01:08:49.490197   61699 cri.go:89] found id: "e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559"
	I0924 01:08:49.490203   61699 cri.go:89] found id: ""
	I0924 01:08:49.490213   61699 logs.go:276] 2 containers: [7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47 e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559]
	I0924 01:08:49.490273   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:49.495438   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:49.500506   61699 logs.go:123] Gathering logs for kube-apiserver [306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7] ...
	I0924 01:08:49.500537   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7"
	I0924 01:08:49.561240   61699 logs.go:123] Gathering logs for kube-proxy [f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc] ...
	I0924 01:08:49.561277   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc"
	I0924 01:08:49.611765   61699 logs.go:123] Gathering logs for kube-controller-manager [55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba] ...
	I0924 01:08:49.611807   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba"
	I0924 01:08:49.689366   61699 logs.go:123] Gathering logs for container status ...
	I0924 01:08:49.689413   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:49.747233   61699 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:49.747271   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:49.852723   61699 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:49.852771   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 01:08:50.006274   61699 logs.go:123] Gathering logs for etcd [2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2] ...
	I0924 01:08:50.006322   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2"
	I0924 01:08:50.064786   61699 logs.go:123] Gathering logs for coredns [ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f] ...
	I0924 01:08:50.064828   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f"
	I0924 01:08:50.104831   61699 logs.go:123] Gathering logs for kube-scheduler [58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f] ...
	I0924 01:08:50.104865   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f"
	I0924 01:08:50.144962   61699 logs.go:123] Gathering logs for storage-provisioner [7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47] ...
	I0924 01:08:50.144990   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47"
	I0924 01:08:50.183923   61699 logs.go:123] Gathering logs for storage-provisioner [e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559] ...
	I0924 01:08:50.183956   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559"
	I0924 01:08:50.222382   61699 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:50.222414   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:50.671849   61699 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:50.671890   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:53.187450   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:08:53.193094   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 200:
	ok
	I0924 01:08:53.194414   61699 api_server.go:141] control plane version: v1.31.1
	I0924 01:08:53.194439   61699 api_server.go:131] duration metric: took 4.107783011s to wait for apiserver health ...
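The healthz probe logged above can be reproduced by hand; a minimal sketch, assuming the same apiserver address and port reported in the log (TLS verification is skipped with -k, so this is only a liveness check, not an authenticated request):

    # probe the kube-apiserver health endpoint that minikube polls
    curl -k https://192.168.61.186:8444/healthz
    # a healthy control plane answers with the body: ok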
	I0924 01:08:53.194449   61699 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 01:08:53.194479   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:53.194546   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:53.232560   61699 cri.go:89] found id: "306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7"
	I0924 01:08:53.232584   61699 cri.go:89] found id: ""
	I0924 01:08:53.232594   61699 logs.go:276] 1 containers: [306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7]
	I0924 01:08:53.232649   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:53.236611   61699 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:53.236671   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:53.278194   61699 cri.go:89] found id: "2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2"
	I0924 01:08:53.278223   61699 cri.go:89] found id: ""
	I0924 01:08:53.278233   61699 logs.go:276] 1 containers: [2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2]
	I0924 01:08:53.278291   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:53.283330   61699 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:53.283391   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:53.322371   61699 cri.go:89] found id: "ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f"
	I0924 01:08:53.322399   61699 cri.go:89] found id: ""
	I0924 01:08:53.322408   61699 logs.go:276] 1 containers: [ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f]
	I0924 01:08:53.322459   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:53.326794   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:53.326869   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:53.364035   61699 cri.go:89] found id: "58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f"
	I0924 01:08:53.364064   61699 cri.go:89] found id: ""
	I0924 01:08:53.364075   61699 logs.go:276] 1 containers: [58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f]
	I0924 01:08:53.364140   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:53.368065   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:53.368127   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:53.405651   61699 cri.go:89] found id: "f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc"
	I0924 01:08:53.405679   61699 cri.go:89] found id: ""
	I0924 01:08:53.405687   61699 logs.go:276] 1 containers: [f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc]
	I0924 01:08:53.405754   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:53.410451   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:53.410537   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:53.451079   61699 cri.go:89] found id: "55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba"
	I0924 01:08:53.451111   61699 cri.go:89] found id: ""
	I0924 01:08:53.451121   61699 logs.go:276] 1 containers: [55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba]
	I0924 01:08:53.451183   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:53.456272   61699 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:53.456367   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:53.497323   61699 cri.go:89] found id: ""
	I0924 01:08:53.497360   61699 logs.go:276] 0 containers: []
	W0924 01:08:53.497373   61699 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:53.497387   61699 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0924 01:08:53.497461   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0924 01:08:53.536017   61699 cri.go:89] found id: "7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47"
	I0924 01:08:53.536040   61699 cri.go:89] found id: "e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559"
	I0924 01:08:53.536046   61699 cri.go:89] found id: ""
	I0924 01:08:53.536055   61699 logs.go:276] 2 containers: [7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47 e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559]
	I0924 01:08:53.536114   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:53.542413   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:53.546559   61699 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:53.546592   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:53.560292   61699 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:53.560323   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 01:08:53.684820   61699 logs.go:123] Gathering logs for etcd [2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2] ...
	I0924 01:08:53.684849   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2"
	I0924 01:08:53.734483   61699 logs.go:123] Gathering logs for coredns [ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f] ...
	I0924 01:08:53.734519   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f"
	I0924 01:08:53.780676   61699 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:53.780705   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:53.855917   61699 logs.go:123] Gathering logs for kube-scheduler [58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f] ...
	I0924 01:08:53.855960   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f"
	I0924 01:08:53.906926   61699 logs.go:123] Gathering logs for kube-proxy [f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc] ...
	I0924 01:08:53.906962   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc"
	I0924 01:08:53.953992   61699 logs.go:123] Gathering logs for kube-controller-manager [55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba] ...
	I0924 01:08:53.954019   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba"
	I0924 01:08:54.031302   61699 logs.go:123] Gathering logs for storage-provisioner [7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47] ...
	I0924 01:08:54.031350   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47"
	I0924 01:08:54.073918   61699 logs.go:123] Gathering logs for storage-provisioner [e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559] ...
	I0924 01:08:54.073958   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559"
	I0924 01:08:54.108724   61699 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:54.108765   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:53.752460   61323 out.go:235]   - Configuring RBAC rules ...
	I0924 01:08:53.752626   61323 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0924 01:08:53.758889   61323 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0924 01:08:53.767101   61323 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0924 01:08:53.770943   61323 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0924 01:08:53.775335   61323 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0924 01:08:53.792963   61323 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0924 01:08:54.070193   61323 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0924 01:08:54.526226   61323 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0924 01:08:55.069668   61323 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0924 01:08:55.070678   61323 kubeadm.go:310] 
	I0924 01:08:55.070751   61323 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0924 01:08:55.070761   61323 kubeadm.go:310] 
	I0924 01:08:55.070844   61323 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0924 01:08:55.070860   61323 kubeadm.go:310] 
	I0924 01:08:55.070910   61323 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0924 01:08:55.070998   61323 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0924 01:08:55.071064   61323 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0924 01:08:55.071074   61323 kubeadm.go:310] 
	I0924 01:08:55.071138   61323 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0924 01:08:55.071159   61323 kubeadm.go:310] 
	I0924 01:08:55.071210   61323 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0924 01:08:55.071217   61323 kubeadm.go:310] 
	I0924 01:08:55.071298   61323 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0924 01:08:55.071428   61323 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0924 01:08:55.071525   61323 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0924 01:08:55.071535   61323 kubeadm.go:310] 
	I0924 01:08:55.071640   61323 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0924 01:08:55.071721   61323 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0924 01:08:55.071738   61323 kubeadm.go:310] 
	I0924 01:08:55.071815   61323 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 62empn.zvptxpa69xtioeo1 \
	I0924 01:08:55.071941   61323 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2d4dd9f37d73db6513005d0e16c5a35989d3b102eed8cac0bf67b360d3396aa8 \
	I0924 01:08:55.071971   61323 kubeadm.go:310] 	--control-plane 
	I0924 01:08:55.071984   61323 kubeadm.go:310] 
	I0924 01:08:55.072089   61323 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0924 01:08:55.072098   61323 kubeadm.go:310] 
	I0924 01:08:55.072198   61323 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 62empn.zvptxpa69xtioeo1 \
	I0924 01:08:55.072324   61323 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2d4dd9f37d73db6513005d0e16c5a35989d3b102eed8cac0bf67b360d3396aa8 
	I0924 01:08:55.073807   61323 kubeadm.go:310] W0924 01:08:46.350959    2551 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 01:08:55.074118   61323 kubeadm.go:310] W0924 01:08:46.352529    2551 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 01:08:55.074256   61323 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
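The two deprecation warnings and the kubelet-service warning above already name their own remediation; collected here as a sketch (old.yaml and new.yaml are the placeholders printed by kubeadm, not real files from this run):

    # rewrite the deprecated v1beta3 kubeadm config using a newer API version
    kubeadm config migrate --old-config old.yaml --new-config new.yaml
    # make the kubelet unit start on boot, as the warning suggests
    sudo systemctl enable kubelet.service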
	I0924 01:08:55.074295   61323 cni.go:84] Creating CNI manager for ""
	I0924 01:08:55.074312   61323 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:08:55.076241   61323 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 01:08:55.077630   61323 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 01:08:55.088658   61323 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0924 01:08:55.106396   61323 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 01:08:55.106491   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:55.106579   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-650507 minikube.k8s.io/updated_at=2024_09_24T01_08_55_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c minikube.k8s.io/name=embed-certs-650507 minikube.k8s.io/primary=true
	I0924 01:08:55.138376   61323 ops.go:34] apiserver oom_adj: -16
	I0924 01:08:51.344458   61989 out.go:235]   - Booting up control plane ...
	I0924 01:08:51.344607   61989 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 01:08:51.353468   61989 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 01:08:51.356949   61989 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 01:08:51.358082   61989 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 01:08:51.364468   61989 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0924 01:08:54.501805   61699 logs.go:123] Gathering logs for container status ...
	I0924 01:08:54.501847   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:54.548768   61699 logs.go:123] Gathering logs for kube-apiserver [306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7] ...
	I0924 01:08:54.548800   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7"
	I0924 01:08:57.105661   61699 system_pods.go:59] 8 kube-system pods found
	I0924 01:08:57.105688   61699 system_pods.go:61] "coredns-7c65d6cfc9-xxdh2" [297fe292-94bf-468d-9e34-089c4a87429b] Running
	I0924 01:08:57.105693   61699 system_pods.go:61] "etcd-default-k8s-diff-port-465341" [3bd68a1c-e928-40f0-927f-3cde2198cace] Running
	I0924 01:08:57.105697   61699 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-465341" [0a195b76-82ba-4d99-b5a3-ba918ab0b83d] Running
	I0924 01:08:57.105703   61699 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-465341" [9d445611-60f3-4113-bc92-ea8df37ca2f5] Running
	I0924 01:08:57.105706   61699 system_pods.go:61] "kube-proxy-nf8mp" [cdef3aea-b1a8-438b-994f-c3212def9aea] Running
	I0924 01:08:57.105709   61699 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-465341" [4ff703b1-44cd-421a-891c-9f1e5d799026] Running
	I0924 01:08:57.105715   61699 system_pods.go:61] "metrics-server-6867b74b74-jtx6r" [d83599a7-f77d-4fbb-b76f-67d33c60b4a6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:08:57.105722   61699 system_pods.go:61] "storage-provisioner" [b09ad6ef-7517-4de2-a70c-83876efd804e] Running
	I0924 01:08:57.105729   61699 system_pods.go:74] duration metric: took 3.911274774s to wait for pod list to return data ...
	I0924 01:08:57.105736   61699 default_sa.go:34] waiting for default service account to be created ...
	I0924 01:08:57.108031   61699 default_sa.go:45] found service account: "default"
	I0924 01:08:57.108051   61699 default_sa.go:55] duration metric: took 2.307712ms for default service account to be created ...
	I0924 01:08:57.108059   61699 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 01:08:57.112551   61699 system_pods.go:86] 8 kube-system pods found
	I0924 01:08:57.112578   61699 system_pods.go:89] "coredns-7c65d6cfc9-xxdh2" [297fe292-94bf-468d-9e34-089c4a87429b] Running
	I0924 01:08:57.112584   61699 system_pods.go:89] "etcd-default-k8s-diff-port-465341" [3bd68a1c-e928-40f0-927f-3cde2198cace] Running
	I0924 01:08:57.112589   61699 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-465341" [0a195b76-82ba-4d99-b5a3-ba918ab0b83d] Running
	I0924 01:08:57.112593   61699 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-465341" [9d445611-60f3-4113-bc92-ea8df37ca2f5] Running
	I0924 01:08:57.112597   61699 system_pods.go:89] "kube-proxy-nf8mp" [cdef3aea-b1a8-438b-994f-c3212def9aea] Running
	I0924 01:08:57.112600   61699 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-465341" [4ff703b1-44cd-421a-891c-9f1e5d799026] Running
	I0924 01:08:57.112608   61699 system_pods.go:89] "metrics-server-6867b74b74-jtx6r" [d83599a7-f77d-4fbb-b76f-67d33c60b4a6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:08:57.112613   61699 system_pods.go:89] "storage-provisioner" [b09ad6ef-7517-4de2-a70c-83876efd804e] Running
	I0924 01:08:57.112619   61699 system_pods.go:126] duration metric: took 4.555185ms to wait for k8s-apps to be running ...
	I0924 01:08:57.112625   61699 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 01:08:57.112665   61699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 01:08:57.127805   61699 system_svc.go:56] duration metric: took 15.170368ms WaitForService to wait for kubelet
	I0924 01:08:57.127839   61699 kubeadm.go:582] duration metric: took 4m23.691287248s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 01:08:57.127865   61699 node_conditions.go:102] verifying NodePressure condition ...
	I0924 01:08:57.130964   61699 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 01:08:57.130994   61699 node_conditions.go:123] node cpu capacity is 2
	I0924 01:08:57.131008   61699 node_conditions.go:105] duration metric: took 3.13793ms to run NodePressure ...
	I0924 01:08:57.131021   61699 start.go:241] waiting for startup goroutines ...
	I0924 01:08:57.131029   61699 start.go:246] waiting for cluster config update ...
	I0924 01:08:57.131043   61699 start.go:255] writing updated cluster config ...
	I0924 01:08:57.131388   61699 ssh_runner.go:195] Run: rm -f paused
	I0924 01:08:57.182238   61699 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0924 01:08:57.185023   61699 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-465341" cluster and "default" namespace by default
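A quick manual check that the kubeconfig really points at the new cluster; a sketch, relying only on the context name stated in the "Done!" line above:

    kubectl config current-context   # expected: default-k8s-diff-port-465341
    kubectl get pods -A              # should list the kube-system pods shown earlier in this log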
	I0924 01:08:55.229370   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:57.729448   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:55.285390   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:55.785813   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:56.285570   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:56.785779   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:57.285599   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:57.786401   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:58.285583   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:58.786037   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:59.286404   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:59.447075   61323 kubeadm.go:1113] duration metric: took 4.340646509s to wait for elevateKubeSystemPrivileges
	I0924 01:08:59.447119   61323 kubeadm.go:394] duration metric: took 4m57.777127509s to StartCluster
	I0924 01:08:59.447141   61323 settings.go:142] acquiring lock: {Name:mk2498060bfcc1414033a486e76a223de88ed325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:08:59.447229   61323 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 01:08:59.449766   61323 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/kubeconfig: {Name:mk3b29105ccc2e600682c419fcfd45ce5d22b5fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:08:59.450091   61323 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 01:08:59.450191   61323 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0924 01:08:59.450308   61323 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-650507"
	I0924 01:08:59.450330   61323 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-650507"
	W0924 01:08:59.450343   61323 addons.go:243] addon storage-provisioner should already be in state true
	I0924 01:08:59.450346   61323 addons.go:69] Setting metrics-server=true in profile "embed-certs-650507"
	I0924 01:08:59.450349   61323 addons.go:69] Setting default-storageclass=true in profile "embed-certs-650507"
	I0924 01:08:59.450366   61323 addons.go:234] Setting addon metrics-server=true in "embed-certs-650507"
	W0924 01:08:59.450374   61323 addons.go:243] addon metrics-server should already be in state true
	I0924 01:08:59.450328   61323 config.go:182] Loaded profile config "embed-certs-650507": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 01:08:59.450381   61323 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-650507"
	I0924 01:08:59.450404   61323 host.go:66] Checking if "embed-certs-650507" exists ...
	I0924 01:08:59.450375   61323 host.go:66] Checking if "embed-certs-650507" exists ...
	I0924 01:08:59.450718   61323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:08:59.450770   61323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:08:59.450805   61323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:08:59.450808   61323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:08:59.450895   61323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:08:59.450842   61323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:08:59.451862   61323 out.go:177] * Verifying Kubernetes components...
	I0924 01:08:59.453214   61323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:08:59.471878   61323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34433
	I0924 01:08:59.472083   61323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46551
	I0924 01:08:59.472239   61323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38089
	I0924 01:08:59.472586   61323 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:08:59.472704   61323 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:08:59.472988   61323 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:08:59.473187   61323 main.go:141] libmachine: Using API Version  1
	I0924 01:08:59.473205   61323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:08:59.473226   61323 main.go:141] libmachine: Using API Version  1
	I0924 01:08:59.473242   61323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:08:59.473418   61323 main.go:141] libmachine: Using API Version  1
	I0924 01:08:59.473433   61323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:08:59.473784   61323 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:08:59.473784   61323 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:08:59.474003   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetState
	I0924 01:08:59.474116   61323 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:08:59.474383   61323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:08:59.474422   61323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:08:59.474591   61323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:08:59.474628   61323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:08:59.478726   61323 addons.go:234] Setting addon default-storageclass=true in "embed-certs-650507"
	W0924 01:08:59.478753   61323 addons.go:243] addon default-storageclass should already be in state true
	I0924 01:08:59.478784   61323 host.go:66] Checking if "embed-certs-650507" exists ...
	I0924 01:08:59.479137   61323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:08:59.479186   61323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:08:59.495021   61323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43089
	I0924 01:08:59.495527   61323 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:08:59.496068   61323 main.go:141] libmachine: Using API Version  1
	I0924 01:08:59.496090   61323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:08:59.496519   61323 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:08:59.496734   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetState
	I0924 01:08:59.498472   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:08:59.498528   61323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39135
	I0924 01:08:59.498971   61323 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:08:59.499485   61323 main.go:141] libmachine: Using API Version  1
	I0924 01:08:59.499498   61323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:08:59.499794   61323 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:08:59.499918   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetState
	I0924 01:08:59.500899   61323 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0924 01:08:59.501731   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:08:59.502154   61323 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0924 01:08:59.502172   61323 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0924 01:08:59.502186   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:08:59.503238   61323 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:08:59.504765   61323 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 01:08:59.504783   61323 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 01:08:59.504801   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:08:59.505483   61323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34577
	I0924 01:08:59.505882   61323 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:08:59.506386   61323 main.go:141] libmachine: Using API Version  1
	I0924 01:08:59.506408   61323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:08:59.506841   61323 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:08:59.507463   61323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:08:59.507505   61323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:08:59.511098   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:08:59.511611   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:08:59.511645   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:08:59.511944   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:08:59.512127   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:08:59.512296   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:08:59.512493   61323 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa Username:docker}
	I0924 01:08:59.514595   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:08:59.515144   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:08:59.515173   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:08:59.515481   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:08:59.515749   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:08:59.515963   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:08:59.516100   61323 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa Username:docker}
	I0924 01:08:59.529920   61323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36769
	I0924 01:08:59.530565   61323 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:08:59.531102   61323 main.go:141] libmachine: Using API Version  1
	I0924 01:08:59.531125   61323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:08:59.531629   61323 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:08:59.531918   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetState
	I0924 01:08:59.533741   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:08:59.533992   61323 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 01:08:59.534007   61323 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 01:08:59.534026   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:08:59.537032   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:08:59.537488   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:08:59.537515   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:08:59.537713   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:08:59.537919   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:08:59.538074   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:08:59.538198   61323 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa Username:docker}
	I0924 01:08:59.680683   61323 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 01:08:59.711414   61323 node_ready.go:35] waiting up to 6m0s for node "embed-certs-650507" to be "Ready" ...
	I0924 01:08:59.721234   61323 node_ready.go:49] node "embed-certs-650507" has status "Ready":"True"
	I0924 01:08:59.721264   61323 node_ready.go:38] duration metric: took 9.820004ms for node "embed-certs-650507" to be "Ready" ...
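The node readiness wait above has a direct kubectl equivalent; a minimal sketch using the node name and 6m0s budget from the log:

    # block until the node reports the Ready condition
    kubectl wait --for=condition=Ready node/embed-certs-650507 --timeout=6m0s
    kubectl get node embed-certs-650507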
	I0924 01:08:59.721275   61323 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:08:59.736353   61323 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7295k" in "kube-system" namespace to be "Ready" ...
	I0924 01:08:59.831004   61323 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0924 01:08:59.831041   61323 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0924 01:08:59.871681   61323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 01:08:59.873844   61323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 01:08:59.902662   61323 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0924 01:08:59.902691   61323 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0924 01:08:59.956425   61323 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 01:08:59.956454   61323 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0924 01:08:59.997902   61323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 01:09:01.146340   61323 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.27245536s)
	I0924 01:09:01.146470   61323 main.go:141] libmachine: Making call to close driver server
	I0924 01:09:01.146505   61323 main.go:141] libmachine: (embed-certs-650507) Calling .Close
	I0924 01:09:01.146403   61323 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.274685832s)
	I0924 01:09:01.146582   61323 main.go:141] libmachine: Making call to close driver server
	I0924 01:09:01.146602   61323 main.go:141] libmachine: (embed-certs-650507) Calling .Close
	I0924 01:09:01.146819   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Closing plugin on server side
	I0924 01:09:01.146848   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Closing plugin on server side
	I0924 01:09:01.146868   61323 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:09:01.146875   61323 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:09:01.146882   61323 main.go:141] libmachine: Making call to close driver server
	I0924 01:09:01.146892   61323 main.go:141] libmachine: (embed-certs-650507) Calling .Close
	I0924 01:09:01.146967   61323 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:09:01.146990   61323 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:09:01.147007   61323 main.go:141] libmachine: Making call to close driver server
	I0924 01:09:01.147023   61323 main.go:141] libmachine: (embed-certs-650507) Calling .Close
	I0924 01:09:01.147084   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Closing plugin on server side
	I0924 01:09:01.147117   61323 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:09:01.147133   61323 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:09:01.147370   61323 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:09:01.147392   61323 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:09:01.147378   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Closing plugin on server side
	I0924 01:09:01.180574   61323 main.go:141] libmachine: Making call to close driver server
	I0924 01:09:01.180604   61323 main.go:141] libmachine: (embed-certs-650507) Calling .Close
	I0924 01:09:01.180929   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Closing plugin on server side
	I0924 01:09:01.180977   61323 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:09:01.180986   61323 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:09:01.207538   61323 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.209569759s)
	I0924 01:09:01.207600   61323 main.go:141] libmachine: Making call to close driver server
	I0924 01:09:01.207616   61323 main.go:141] libmachine: (embed-certs-650507) Calling .Close
	I0924 01:09:01.207959   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Closing plugin on server side
	I0924 01:09:01.208002   61323 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:09:01.208011   61323 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:09:01.208019   61323 main.go:141] libmachine: Making call to close driver server
	I0924 01:09:01.208028   61323 main.go:141] libmachine: (embed-certs-650507) Calling .Close
	I0924 01:09:01.208377   61323 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:09:01.208392   61323 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:09:01.208402   61323 addons.go:475] Verifying addon metrics-server=true in "embed-certs-650507"
	I0924 01:09:01.208411   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Closing plugin on server side
	I0924 01:09:01.210500   61323 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0924 01:08:59.731184   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:02.229737   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:01.211900   61323 addons.go:510] duration metric: took 1.761718139s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
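The addon enablement above can be double-checked from outside the test; a sketch, assuming the standard -p profile flag and the conventional metrics-server Deployment name used by the addon:

    # list addon states for this profile
    minikube -p embed-certs-650507 addons list
    # the Deployment should exist in kube-system, though readiness may still lag as later log lines show
    kubectl -n kube-system get deployment metrics-server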
	I0924 01:09:01.751463   61323 pod_ready.go:103] pod "coredns-7c65d6cfc9-7295k" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:04.242260   61323 pod_ready.go:103] pod "coredns-7c65d6cfc9-7295k" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:04.728708   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:06.728878   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:06.243002   61323 pod_ready.go:93] pod "coredns-7c65d6cfc9-7295k" in "kube-system" namespace has status "Ready":"True"
	I0924 01:09:06.243030   61323 pod_ready.go:82] duration metric: took 6.506649267s for pod "coredns-7c65d6cfc9-7295k" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:06.243039   61323 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-r6tcj" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:08.249949   61323 pod_ready.go:103] pod "coredns-7c65d6cfc9-r6tcj" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:09.750009   61323 pod_ready.go:93] pod "coredns-7c65d6cfc9-r6tcj" in "kube-system" namespace has status "Ready":"True"
	I0924 01:09:09.750037   61323 pod_ready.go:82] duration metric: took 3.506990291s for pod "coredns-7c65d6cfc9-r6tcj" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:09.750049   61323 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:09.756600   61323 pod_ready.go:93] pod "etcd-embed-certs-650507" in "kube-system" namespace has status "Ready":"True"
	I0924 01:09:09.756626   61323 pod_ready.go:82] duration metric: took 6.570047ms for pod "etcd-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:09.756635   61323 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:09.762209   61323 pod_ready.go:93] pod "kube-apiserver-embed-certs-650507" in "kube-system" namespace has status "Ready":"True"
	I0924 01:09:09.762235   61323 pod_ready.go:82] duration metric: took 5.593257ms for pod "kube-apiserver-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:09.762248   61323 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:09.772052   61323 pod_ready.go:93] pod "kube-controller-manager-embed-certs-650507" in "kube-system" namespace has status "Ready":"True"
	I0924 01:09:09.772075   61323 pod_ready.go:82] duration metric: took 9.818627ms for pod "kube-controller-manager-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:09.772088   61323 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mwtkg" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:09.777733   61323 pod_ready.go:93] pod "kube-proxy-mwtkg" in "kube-system" namespace has status "Ready":"True"
	I0924 01:09:09.777765   61323 pod_ready.go:82] duration metric: took 5.669609ms for pod "kube-proxy-mwtkg" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:09.777778   61323 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:10.146804   61323 pod_ready.go:93] pod "kube-scheduler-embed-certs-650507" in "kube-system" namespace has status "Ready":"True"
	I0924 01:09:10.146833   61323 pod_ready.go:82] duration metric: took 369.046066ms for pod "kube-scheduler-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:10.146844   61323 pod_ready.go:39] duration metric: took 10.425557831s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:09:10.146861   61323 api_server.go:52] waiting for apiserver process to appear ...
	I0924 01:09:10.146918   61323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:09:10.162335   61323 api_server.go:72] duration metric: took 10.712204486s to wait for apiserver process to appear ...
	I0924 01:09:10.162360   61323 api_server.go:88] waiting for apiserver healthz status ...
	I0924 01:09:10.162381   61323 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0924 01:09:10.166693   61323 api_server.go:279] https://192.168.39.104:8443/healthz returned 200:
	ok
	I0924 01:09:10.167700   61323 api_server.go:141] control plane version: v1.31.1
	I0924 01:09:10.167723   61323 api_server.go:131] duration metric: took 5.355716ms to wait for apiserver health ...
	I0924 01:09:10.167734   61323 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 01:09:10.351584   61323 system_pods.go:59] 9 kube-system pods found
	I0924 01:09:10.351621   61323 system_pods.go:61] "coredns-7c65d6cfc9-7295k" [3261d435-8cb5-4712-8459-26ba766e88e0] Running
	I0924 01:09:10.351629   61323 system_pods.go:61] "coredns-7c65d6cfc9-r6tcj" [df80e9b5-4b43-4b8f-992e-8813ceca39fe] Running
	I0924 01:09:10.351634   61323 system_pods.go:61] "etcd-embed-certs-650507" [1d21c395-ebec-4895-a1b6-11e35c799698] Running
	I0924 01:09:10.351640   61323 system_pods.go:61] "kube-apiserver-embed-certs-650507" [f7f13b75-3ed1-4e04-857f-27e71258ffd7] Running
	I0924 01:09:10.351645   61323 system_pods.go:61] "kube-controller-manager-embed-certs-650507" [4e68c892-06b6-49f1-adab-25c569f95a9a] Running
	I0924 01:09:10.351650   61323 system_pods.go:61] "kube-proxy-mwtkg" [6a893121-8161-4fbc-bb59-1e08483e82b8] Running
	I0924 01:09:10.351655   61323 system_pods.go:61] "kube-scheduler-embed-certs-650507" [bacd126d-7f4f-460b-85c5-17721247d5a5] Running
	I0924 01:09:10.351669   61323 system_pods.go:61] "metrics-server-6867b74b74-lbm9h" [fa504c09-2e16-4a5f-b4b3-a47f0733333d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:09:10.351678   61323 system_pods.go:61] "storage-provisioner" [364a4d4a-7316-48d0-a3e1-1dedff564dfb] Running
	I0924 01:09:10.351692   61323 system_pods.go:74] duration metric: took 183.950994ms to wait for pod list to return data ...
	I0924 01:09:10.351704   61323 default_sa.go:34] waiting for default service account to be created ...
	I0924 01:09:10.547564   61323 default_sa.go:45] found service account: "default"
	I0924 01:09:10.547595   61323 default_sa.go:55] duration metric: took 195.882659ms for default service account to be created ...
	I0924 01:09:10.547605   61323 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 01:09:10.750290   61323 system_pods.go:86] 9 kube-system pods found
	I0924 01:09:10.750327   61323 system_pods.go:89] "coredns-7c65d6cfc9-7295k" [3261d435-8cb5-4712-8459-26ba766e88e0] Running
	I0924 01:09:10.750336   61323 system_pods.go:89] "coredns-7c65d6cfc9-r6tcj" [df80e9b5-4b43-4b8f-992e-8813ceca39fe] Running
	I0924 01:09:10.750344   61323 system_pods.go:89] "etcd-embed-certs-650507" [1d21c395-ebec-4895-a1b6-11e35c799698] Running
	I0924 01:09:10.750352   61323 system_pods.go:89] "kube-apiserver-embed-certs-650507" [f7f13b75-3ed1-4e04-857f-27e71258ffd7] Running
	I0924 01:09:10.750359   61323 system_pods.go:89] "kube-controller-manager-embed-certs-650507" [4e68c892-06b6-49f1-adab-25c569f95a9a] Running
	I0924 01:09:10.750366   61323 system_pods.go:89] "kube-proxy-mwtkg" [6a893121-8161-4fbc-bb59-1e08483e82b8] Running
	I0924 01:09:10.750372   61323 system_pods.go:89] "kube-scheduler-embed-certs-650507" [bacd126d-7f4f-460b-85c5-17721247d5a5] Running
	I0924 01:09:10.750382   61323 system_pods.go:89] "metrics-server-6867b74b74-lbm9h" [fa504c09-2e16-4a5f-b4b3-a47f0733333d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:09:10.750391   61323 system_pods.go:89] "storage-provisioner" [364a4d4a-7316-48d0-a3e1-1dedff564dfb] Running
	I0924 01:09:10.750407   61323 system_pods.go:126] duration metric: took 202.795975ms to wait for k8s-apps to be running ...
	I0924 01:09:10.750416   61323 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 01:09:10.750476   61323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 01:09:10.765539   61323 system_svc.go:56] duration metric: took 15.112281ms WaitForService to wait for kubelet
	I0924 01:09:10.765569   61323 kubeadm.go:582] duration metric: took 11.31544538s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 01:09:10.765588   61323 node_conditions.go:102] verifying NodePressure condition ...
	I0924 01:09:10.947628   61323 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 01:09:10.947654   61323 node_conditions.go:123] node cpu capacity is 2
	I0924 01:09:10.947664   61323 node_conditions.go:105] duration metric: took 182.072269ms to run NodePressure ...
	I0924 01:09:10.947674   61323 start.go:241] waiting for startup goroutines ...
	I0924 01:09:10.947681   61323 start.go:246] waiting for cluster config update ...
	I0924 01:09:10.947691   61323 start.go:255] writing updated cluster config ...
	I0924 01:09:10.947955   61323 ssh_runner.go:195] Run: rm -f paused
	I0924 01:09:10.999208   61323 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0924 01:09:11.001392   61323 out.go:177] * Done! kubectl is now configured to use "embed-certs-650507" cluster and "default" namespace by default
	I0924 01:09:08.729391   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:11.229036   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:13.727852   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:16.229362   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:18.727643   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:20.729183   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:22.731323   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:25.228514   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:27.728747   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:29.729150   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:32.228197   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:31.365725   61989 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0924 01:09:31.366444   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:09:31.366704   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:09:34.729441   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:37.228766   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:36.367209   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:09:36.367654   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:09:39.728035   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:41.729148   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:43.729240   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:46.228006   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:48.228134   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:46.367945   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:09:46.368128   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:09:50.228455   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:52.228646   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:54.229158   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:56.727712   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:58.728522   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:10:00.728964   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:10:02.729909   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:10:05.227781   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:10:07.228729   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:10:06.368912   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:10:06.369182   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:10:09.728977   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:10:10.222284   61070 pod_ready.go:82] duration metric: took 4m0.000274516s for pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace to be "Ready" ...
	E0924 01:10:10.222354   61070 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace to be "Ready" (will not retry!)
	I0924 01:10:10.222381   61070 pod_ready.go:39] duration metric: took 4m12.043944079s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:10:10.222412   61070 kubeadm.go:597] duration metric: took 4m56.454037737s to restartPrimaryControlPlane
	W0924 01:10:10.222488   61070 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0924 01:10:10.222536   61070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0924 01:10:36.533302   61070 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.310734731s)
	I0924 01:10:36.533377   61070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 01:10:36.556961   61070 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 01:10:36.568298   61070 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 01:10:36.584098   61070 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 01:10:36.584121   61070 kubeadm.go:157] found existing configuration files:
	
	I0924 01:10:36.584178   61070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 01:10:36.594153   61070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 01:10:36.594218   61070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 01:10:36.612646   61070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 01:10:36.626433   61070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 01:10:36.626506   61070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 01:10:36.636161   61070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 01:10:36.654017   61070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 01:10:36.654075   61070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 01:10:36.663760   61070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 01:10:36.673737   61070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 01:10:36.673799   61070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 01:10:36.684005   61070 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 01:10:36.731568   61070 kubeadm.go:310] W0924 01:10:36.713557    3094 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 01:10:36.733592   61070 kubeadm.go:310] W0924 01:10:36.715675    3094 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 01:10:36.850767   61070 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 01:10:45.349295   61070 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0924 01:10:45.349386   61070 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 01:10:45.349486   61070 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 01:10:45.349600   61070 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 01:10:45.349733   61070 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0924 01:10:45.349836   61070 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 01:10:45.351746   61070 out.go:235]   - Generating certificates and keys ...
	I0924 01:10:45.351843   61070 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 01:10:45.351939   61070 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 01:10:45.352055   61070 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 01:10:45.352160   61070 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 01:10:45.352228   61070 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 01:10:45.352297   61070 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 01:10:45.352392   61070 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 01:10:45.352477   61070 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 01:10:45.352551   61070 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 01:10:45.352664   61070 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 01:10:45.352734   61070 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 01:10:45.352904   61070 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 01:10:45.352956   61070 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 01:10:45.353038   61070 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0924 01:10:45.353127   61070 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 01:10:45.353209   61070 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 01:10:45.353300   61070 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 01:10:45.353372   61070 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 01:10:45.353446   61070 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 01:10:45.354948   61070 out.go:235]   - Booting up control plane ...
	I0924 01:10:45.355023   61070 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 01:10:45.355090   61070 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 01:10:45.355144   61070 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 01:10:45.355226   61070 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 01:10:45.355310   61070 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 01:10:45.355356   61070 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 01:10:45.355476   61070 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0924 01:10:45.355585   61070 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0924 01:10:45.355658   61070 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001537437s
	I0924 01:10:45.355728   61070 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0924 01:10:45.355807   61070 kubeadm.go:310] [api-check] The API server is healthy after 5.002387582s
	I0924 01:10:45.355955   61070 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0924 01:10:45.356129   61070 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0924 01:10:45.356230   61070 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0924 01:10:45.356516   61070 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-674057 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0924 01:10:45.356571   61070 kubeadm.go:310] [bootstrap-token] Using token: g2v97n.iz49hjb4wh5cfkiq
	I0924 01:10:45.358203   61070 out.go:235]   - Configuring RBAC rules ...
	I0924 01:10:45.358333   61070 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0924 01:10:45.358439   61070 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0924 01:10:45.358562   61070 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0924 01:10:45.358667   61070 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0924 01:10:45.358773   61070 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0924 01:10:45.358851   61070 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0924 01:10:45.358997   61070 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0924 01:10:45.359059   61070 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0924 01:10:45.359101   61070 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0924 01:10:45.359111   61070 kubeadm.go:310] 
	I0924 01:10:45.359164   61070 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0924 01:10:45.359171   61070 kubeadm.go:310] 
	I0924 01:10:45.359263   61070 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0924 01:10:45.359280   61070 kubeadm.go:310] 
	I0924 01:10:45.359309   61070 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0924 01:10:45.359387   61070 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0924 01:10:45.359458   61070 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0924 01:10:45.359471   61070 kubeadm.go:310] 
	I0924 01:10:45.359559   61070 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0924 01:10:45.359568   61070 kubeadm.go:310] 
	I0924 01:10:45.359613   61070 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0924 01:10:45.359619   61070 kubeadm.go:310] 
	I0924 01:10:45.359665   61070 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0924 01:10:45.359728   61070 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0924 01:10:45.359800   61070 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0924 01:10:45.359813   61070 kubeadm.go:310] 
	I0924 01:10:45.359879   61070 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0924 01:10:45.359978   61070 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0924 01:10:45.359996   61070 kubeadm.go:310] 
	I0924 01:10:45.360101   61070 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token g2v97n.iz49hjb4wh5cfkiq \
	I0924 01:10:45.360218   61070 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2d4dd9f37d73db6513005d0e16c5a35989d3b102eed8cac0bf67b360d3396aa8 \
	I0924 01:10:45.360251   61070 kubeadm.go:310] 	--control-plane 
	I0924 01:10:45.360258   61070 kubeadm.go:310] 
	I0924 01:10:45.360453   61070 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0924 01:10:45.360481   61070 kubeadm.go:310] 
	I0924 01:10:45.360595   61070 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token g2v97n.iz49hjb4wh5cfkiq \
	I0924 01:10:45.360693   61070 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2d4dd9f37d73db6513005d0e16c5a35989d3b102eed8cac0bf67b360d3396aa8 
	I0924 01:10:45.360706   61070 cni.go:84] Creating CNI manager for ""
	I0924 01:10:45.360713   61070 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:10:45.362153   61070 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 01:10:46.371109   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:10:46.371309   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:10:46.371318   61989 kubeadm.go:310] 
	I0924 01:10:46.371352   61989 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0924 01:10:46.371455   61989 kubeadm.go:310] 		timed out waiting for the condition
	I0924 01:10:46.371490   61989 kubeadm.go:310] 
	I0924 01:10:46.371546   61989 kubeadm.go:310] 	This error is likely caused by:
	I0924 01:10:46.371592   61989 kubeadm.go:310] 		- The kubelet is not running
	I0924 01:10:46.371734   61989 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0924 01:10:46.371751   61989 kubeadm.go:310] 
	I0924 01:10:46.371888   61989 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0924 01:10:46.371936   61989 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0924 01:10:46.371978   61989 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0924 01:10:46.371988   61989 kubeadm.go:310] 
	I0924 01:10:46.372124   61989 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0924 01:10:46.372253   61989 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0924 01:10:46.372262   61989 kubeadm.go:310] 
	I0924 01:10:46.372442   61989 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0924 01:10:46.372578   61989 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0924 01:10:46.372680   61989 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0924 01:10:46.372756   61989 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0924 01:10:46.372765   61989 kubeadm.go:310] 
	I0924 01:10:46.373578   61989 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 01:10:46.373675   61989 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0924 01:10:46.373790   61989 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0924 01:10:46.373938   61989 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
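The repeated [kubelet-check] messages in this failure are kubeadm probing the kubelet's local health endpoint at http://localhost:10248/healthz. A minimal Go sketch of that probe, useful for reproducing the check by hand (an illustration, not kubeadm's implementation):

    package main

    import (
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{Timeout: 2 * time.Second}
    	resp, err := client.Get("http://localhost:10248/healthz")
    	if err != nil {
    		// A dead kubelet typically yields "connection refused", as seen in the log above.
    		fmt.Println("kubelet healthz probe failed:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("kubelet healthz: %s %s\n", resp.Status, string(body))
    }
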
	
	I0924 01:10:46.373987   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0924 01:10:46.834432   61989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 01:10:46.851214   61989 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 01:10:46.862648   61989 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 01:10:46.862675   61989 kubeadm.go:157] found existing configuration files:
	
	I0924 01:10:46.862733   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 01:10:46.873005   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 01:10:46.873073   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 01:10:46.884007   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 01:10:46.893944   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 01:10:46.894016   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 01:10:46.905036   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 01:10:46.914953   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 01:10:46.915024   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 01:10:46.924881   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 01:10:46.935132   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 01:10:46.935192   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 01:10:46.945837   61989 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 01:10:47.018713   61989 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0924 01:10:47.018861   61989 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 01:10:47.159920   61989 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 01:10:47.160042   61989 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 01:10:47.160168   61989 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0924 01:10:47.349360   61989 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 01:10:47.351645   61989 out.go:235]   - Generating certificates and keys ...
	I0924 01:10:47.351763   61989 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 01:10:47.351918   61989 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 01:10:47.352040   61989 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 01:10:47.352118   61989 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 01:10:47.352205   61989 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 01:10:47.352298   61989 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 01:10:47.352401   61989 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 01:10:47.352481   61989 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 01:10:47.352574   61989 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 01:10:47.352662   61989 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 01:10:47.352705   61989 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 01:10:47.352767   61989 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 01:10:47.467301   61989 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 01:10:47.622085   61989 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 01:10:47.726807   61989 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 01:10:47.951249   61989 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 01:10:47.973392   61989 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 01:10:47.974396   61989 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 01:10:47.974440   61989 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 01:10:48.127629   61989 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 01:10:45.363348   61070 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 01:10:45.374505   61070 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
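The scp above installs a 496-byte bridge CNI config at /etc/cni/net.d/1-k8s.conflist; the log does not show its contents. A hedged Go sketch of what a minimal bridge conflist of that kind can look like (the JSON is illustrative only, not the exact file minikube generates):

    package main

    import (
    	"fmt"
    	"os"
    )

    // Illustrative bridge + portmap conflist; the real 1-k8s.conflist may differ.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
    	// Written to the current directory here; minikube places it at /etc/cni/net.d/1-k8s.conflist.
    	if err := os.WriteFile("1-k8s.conflist", []byte(conflist), 0o644); err != nil {
    		fmt.Println("write failed:", err)
    	}
    }
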
	I0924 01:10:45.391838   61070 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 01:10:45.391947   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:45.391999   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-674057 minikube.k8s.io/updated_at=2024_09_24T01_10_45_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c minikube.k8s.io/name=no-preload-674057 minikube.k8s.io/primary=true
	I0924 01:10:45.583482   61070 ops.go:34] apiserver oom_adj: -16
	I0924 01:10:45.583498   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:46.083831   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:46.583990   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:47.084184   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:47.583925   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:48.083775   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:48.583654   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:49.084305   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:49.584636   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:50.084620   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:50.226320   61070 kubeadm.go:1113] duration metric: took 4.834429832s to wait for elevateKubeSystemPrivileges
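The repeated "kubectl get sa default" runs above are a poll loop: bootstrap is only treated as complete once the "default" ServiceAccount exists. A rough Go sketch of that polling pattern, reusing the binary path and kubeconfig shown in the log (illustrative, not minikube's elevateKubeSystemPrivileges code):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	// Poll until "kubectl get sa default" succeeds, i.e. the default
    	// ServiceAccount has been created in the new cluster.
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.31.1/kubectl",
    			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig").Run()
    		if err == nil {
    			fmt.Println("default ServiceAccount is present")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for default ServiceAccount")
    }
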
	I0924 01:10:50.226363   61070 kubeadm.go:394] duration metric: took 5m36.514145334s to StartCluster
	I0924 01:10:50.226386   61070 settings.go:142] acquiring lock: {Name:mk2498060bfcc1414033a486e76a223de88ed325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:10:50.226480   61070 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 01:10:50.229196   61070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/kubeconfig: {Name:mk3b29105ccc2e600682c419fcfd45ce5d22b5fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:10:50.229530   61070 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.161 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 01:10:50.229600   61070 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0924 01:10:50.229703   61070 addons.go:69] Setting storage-provisioner=true in profile "no-preload-674057"
	I0924 01:10:50.229725   61070 addons.go:234] Setting addon storage-provisioner=true in "no-preload-674057"
	W0924 01:10:50.229733   61070 addons.go:243] addon storage-provisioner should already be in state true
	I0924 01:10:50.229735   61070 addons.go:69] Setting default-storageclass=true in profile "no-preload-674057"
	I0924 01:10:50.229756   61070 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-674057"
	I0924 01:10:50.229764   61070 host.go:66] Checking if "no-preload-674057" exists ...
	I0924 01:10:50.229789   61070 config.go:182] Loaded profile config "no-preload-674057": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 01:10:50.229781   61070 addons.go:69] Setting metrics-server=true in profile "no-preload-674057"
	I0924 01:10:50.229847   61070 addons.go:234] Setting addon metrics-server=true in "no-preload-674057"
	W0924 01:10:50.229855   61070 addons.go:243] addon metrics-server should already be in state true
	I0924 01:10:50.229871   61070 host.go:66] Checking if "no-preload-674057" exists ...
	I0924 01:10:50.230228   61070 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:10:50.230268   61070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:10:50.230320   61070 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:10:50.230351   61070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:10:50.230355   61070 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:10:50.230389   61070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:10:50.231531   61070 out.go:177] * Verifying Kubernetes components...
	I0924 01:10:50.233506   61070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:10:50.252485   61070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36253
	I0924 01:10:50.252844   61070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34399
	I0924 01:10:50.253068   61070 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:10:50.253217   61070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45253
	I0924 01:10:50.253653   61070 main.go:141] libmachine: Using API Version  1
	I0924 01:10:50.253676   61070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:10:50.253705   61070 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:10:50.254050   61070 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:10:50.254203   61070 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:10:50.254236   61070 main.go:141] libmachine: Using API Version  1
	I0924 01:10:50.254250   61070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:10:50.254591   61070 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:10:50.254814   61070 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:10:50.254829   61070 main.go:141] libmachine: Using API Version  1
	I0924 01:10:50.254851   61070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:10:50.254864   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetState
	I0924 01:10:50.254984   61070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:10:50.255389   61070 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:10:50.255983   61070 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:10:50.256028   61070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:10:50.258757   61070 addons.go:234] Setting addon default-storageclass=true in "no-preload-674057"
	W0924 01:10:50.258781   61070 addons.go:243] addon default-storageclass should already be in state true
	I0924 01:10:50.258861   61070 host.go:66] Checking if "no-preload-674057" exists ...
	I0924 01:10:50.259186   61070 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:10:50.259237   61070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:10:50.276636   61070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44681
	I0924 01:10:50.276806   61070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45851
	I0924 01:10:50.277196   61070 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:10:50.277312   61070 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:10:50.277771   61070 main.go:141] libmachine: Using API Version  1
	I0924 01:10:50.277795   61070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:10:50.278022   61070 main.go:141] libmachine: Using API Version  1
	I0924 01:10:50.278044   61070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:10:50.278213   61070 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:10:50.278380   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetState
	I0924 01:10:50.278485   61070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39655
	I0924 01:10:50.278806   61070 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:10:50.278877   61070 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:10:50.279106   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetState
	I0924 01:10:50.279244   61070 main.go:141] libmachine: Using API Version  1
	I0924 01:10:50.279260   61070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:10:50.279668   61070 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:10:50.280215   61070 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:10:50.280263   61070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:10:50.280315   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:10:50.281796   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:10:50.282123   61070 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:10:50.283561   61070 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0924 01:10:48.129312   61989 out.go:235]   - Booting up control plane ...
	I0924 01:10:48.129446   61989 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 01:10:48.139821   61989 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 01:10:48.143120   61989 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 01:10:48.144038   61989 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 01:10:48.146275   61989 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0924 01:10:50.283656   61070 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 01:10:50.283674   61070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 01:10:50.283688   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:10:50.284778   61070 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0924 01:10:50.284793   61070 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0924 01:10:50.284811   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:10:50.288106   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:10:50.288477   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:10:50.288498   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:10:50.288524   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:10:50.288679   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:10:50.288867   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:10:50.289019   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:10:50.289185   61070 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/no-preload-674057/id_rsa Username:docker}
	I0924 01:10:50.289309   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:10:50.289338   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:10:50.289613   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:10:50.289773   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:10:50.289938   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:10:50.290073   61070 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/no-preload-674057/id_rsa Username:docker}
	I0924 01:10:50.323722   61070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38397
	I0924 01:10:50.324221   61070 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:10:50.324873   61070 main.go:141] libmachine: Using API Version  1
	I0924 01:10:50.324901   61070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:10:50.325334   61070 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:10:50.325572   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetState
	I0924 01:10:50.327779   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:10:50.328071   61070 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 01:10:50.328092   61070 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 01:10:50.328119   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:10:50.331721   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:10:50.331988   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:10:50.332022   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:10:50.332209   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:10:50.332455   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:10:50.332658   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:10:50.332837   61070 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/no-preload-674057/id_rsa Username:docker}
	I0924 01:10:50.471507   61070 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 01:10:50.502289   61070 node_ready.go:35] waiting up to 6m0s for node "no-preload-674057" to be "Ready" ...
	I0924 01:10:50.522752   61070 node_ready.go:49] node "no-preload-674057" has status "Ready":"True"
	I0924 01:10:50.522784   61070 node_ready.go:38] duration metric: took 20.46398ms for node "no-preload-674057" to be "Ready" ...
	I0924 01:10:50.522797   61070 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:10:50.537297   61070 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nqwzr" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:50.576703   61070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 01:10:50.638655   61070 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0924 01:10:50.638679   61070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0924 01:10:50.673535   61070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 01:10:50.691443   61070 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0924 01:10:50.691475   61070 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0924 01:10:50.791572   61070 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 01:10:50.791596   61070 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0924 01:10:50.887143   61070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 01:10:51.535179   61070 main.go:141] libmachine: Making call to close driver server
	I0924 01:10:51.535211   61070 main.go:141] libmachine: (no-preload-674057) Calling .Close
	I0924 01:10:51.535247   61070 main.go:141] libmachine: Making call to close driver server
	I0924 01:10:51.535269   61070 main.go:141] libmachine: (no-preload-674057) Calling .Close
	I0924 01:10:51.535531   61070 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:10:51.535553   61070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:10:51.535563   61070 main.go:141] libmachine: Making call to close driver server
	I0924 01:10:51.535571   61070 main.go:141] libmachine: (no-preload-674057) Calling .Close
	I0924 01:10:51.535572   61070 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:10:51.535584   61070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:10:51.535591   61070 main.go:141] libmachine: Making call to close driver server
	I0924 01:10:51.535598   61070 main.go:141] libmachine: (no-preload-674057) Calling .Close
	I0924 01:10:51.535809   61070 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:10:51.535830   61070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:10:51.536069   61070 main.go:141] libmachine: (no-preload-674057) DBG | Closing plugin on server side
	I0924 01:10:51.536078   61070 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:10:51.536088   61070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:10:51.563511   61070 main.go:141] libmachine: Making call to close driver server
	I0924 01:10:51.563537   61070 main.go:141] libmachine: (no-preload-674057) Calling .Close
	I0924 01:10:51.563856   61070 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:10:51.563880   61070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:10:51.800860   61070 main.go:141] libmachine: Making call to close driver server
	I0924 01:10:51.800889   61070 main.go:141] libmachine: (no-preload-674057) Calling .Close
	I0924 01:10:51.801192   61070 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:10:51.801211   61070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:10:51.801224   61070 main.go:141] libmachine: Making call to close driver server
	I0924 01:10:51.801233   61070 main.go:141] libmachine: (no-preload-674057) Calling .Close
	I0924 01:10:51.801527   61070 main.go:141] libmachine: (no-preload-674057) DBG | Closing plugin on server side
	I0924 01:10:51.801559   61070 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:10:51.801567   61070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:10:51.801577   61070 addons.go:475] Verifying addon metrics-server=true in "no-preload-674057"
	I0924 01:10:51.803735   61070 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0924 01:10:51.805581   61070 addons.go:510] duration metric: took 1.575985263s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0924 01:10:52.544028   61070 pod_ready.go:103] pod "coredns-7c65d6cfc9-nqwzr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:10:53.564056   61070 pod_ready.go:93] pod "coredns-7c65d6cfc9-nqwzr" in "kube-system" namespace has status "Ready":"True"
	I0924 01:10:53.564089   61070 pod_ready.go:82] duration metric: took 3.026767371s for pod "coredns-7c65d6cfc9-nqwzr" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:53.564102   61070 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-x7cv6" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:53.573039   61070 pod_ready.go:93] pod "coredns-7c65d6cfc9-x7cv6" in "kube-system" namespace has status "Ready":"True"
	I0924 01:10:53.573076   61070 pod_ready.go:82] duration metric: took 8.965144ms for pod "coredns-7c65d6cfc9-x7cv6" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:53.573090   61070 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.081080   61070 pod_ready.go:93] pod "etcd-no-preload-674057" in "kube-system" namespace has status "Ready":"True"
	I0924 01:10:54.081105   61070 pod_ready.go:82] duration metric: took 508.007072ms for pod "etcd-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.081115   61070 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.087054   61070 pod_ready.go:93] pod "kube-apiserver-no-preload-674057" in "kube-system" namespace has status "Ready":"True"
	I0924 01:10:54.087079   61070 pod_ready.go:82] duration metric: took 5.957569ms for pod "kube-apiserver-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.087091   61070 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.094018   61070 pod_ready.go:93] pod "kube-controller-manager-no-preload-674057" in "kube-system" namespace has status "Ready":"True"
	I0924 01:10:54.094043   61070 pod_ready.go:82] duration metric: took 6.944048ms for pod "kube-controller-manager-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.094053   61070 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-k54d7" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.341307   61070 pod_ready.go:93] pod "kube-proxy-k54d7" in "kube-system" namespace has status "Ready":"True"
	I0924 01:10:54.341326   61070 pod_ready.go:82] duration metric: took 247.267987ms for pod "kube-proxy-k54d7" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.341335   61070 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.741702   61070 pod_ready.go:93] pod "kube-scheduler-no-preload-674057" in "kube-system" namespace has status "Ready":"True"
	I0924 01:10:54.741732   61070 pod_ready.go:82] duration metric: took 400.389532ms for pod "kube-scheduler-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.741742   61070 pod_ready.go:39] duration metric: took 4.218931841s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:10:54.741759   61070 api_server.go:52] waiting for apiserver process to appear ...
	I0924 01:10:54.741827   61070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:10:54.758692   61070 api_server.go:72] duration metric: took 4.529120201s to wait for apiserver process to appear ...
	I0924 01:10:54.758723   61070 api_server.go:88] waiting for apiserver healthz status ...
	I0924 01:10:54.758744   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:10:54.764587   61070 api_server.go:279] https://192.168.50.161:8443/healthz returned 200:
	ok
	I0924 01:10:54.765620   61070 api_server.go:141] control plane version: v1.31.1
	I0924 01:10:54.765639   61070 api_server.go:131] duration metric: took 6.909845ms to wait for apiserver health ...
	I0924 01:10:54.765646   61070 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 01:10:54.945080   61070 system_pods.go:59] 9 kube-system pods found
	I0924 01:10:54.945121   61070 system_pods.go:61] "coredns-7c65d6cfc9-nqwzr" [9773e4bf-9848-47d8-b87b-897fbdd22d42] Running
	I0924 01:10:54.945128   61070 system_pods.go:61] "coredns-7c65d6cfc9-x7cv6" [9e96941a-b045-48e2-be06-50cc29f8ec25] Running
	I0924 01:10:54.945134   61070 system_pods.go:61] "etcd-no-preload-674057" [3ed2a57d-06a2-4ee2-9bc0-9042c1a88d09] Running
	I0924 01:10:54.945140   61070 system_pods.go:61] "kube-apiserver-no-preload-674057" [e915c4f9-a44e-4d36-9bf4-033de2a512f2] Running
	I0924 01:10:54.945145   61070 system_pods.go:61] "kube-controller-manager-no-preload-674057" [71492ec7-1fd8-49a3-973d-b62141c7b768] Running
	I0924 01:10:54.945150   61070 system_pods.go:61] "kube-proxy-k54d7" [b67ac411-52b5-4d58-9db3-d2d92b63a21f] Running
	I0924 01:10:54.945161   61070 system_pods.go:61] "kube-scheduler-no-preload-674057" [927b2a09-4fb1-499c-a2e6-6185a88facdd] Running
	I0924 01:10:54.945172   61070 system_pods.go:61] "metrics-server-6867b74b74-w5j2x" [57fd868f-ab5c-495a-869a-45e8f81f4014] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:10:54.945180   61070 system_pods.go:61] "storage-provisioner" [341fd764-a3bd-4d28-bc6a-6ec9fa8a5347] Running
	I0924 01:10:54.945191   61070 system_pods.go:74] duration metric: took 179.539019ms to wait for pod list to return data ...
	I0924 01:10:54.945205   61070 default_sa.go:34] waiting for default service account to be created ...
	I0924 01:10:55.141944   61070 default_sa.go:45] found service account: "default"
	I0924 01:10:55.141973   61070 default_sa.go:55] duration metric: took 196.760922ms for default service account to be created ...
	I0924 01:10:55.141984   61070 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 01:10:55.344235   61070 system_pods.go:86] 9 kube-system pods found
	I0924 01:10:55.344273   61070 system_pods.go:89] "coredns-7c65d6cfc9-nqwzr" [9773e4bf-9848-47d8-b87b-897fbdd22d42] Running
	I0924 01:10:55.344282   61070 system_pods.go:89] "coredns-7c65d6cfc9-x7cv6" [9e96941a-b045-48e2-be06-50cc29f8ec25] Running
	I0924 01:10:55.344288   61070 system_pods.go:89] "etcd-no-preload-674057" [3ed2a57d-06a2-4ee2-9bc0-9042c1a88d09] Running
	I0924 01:10:55.344293   61070 system_pods.go:89] "kube-apiserver-no-preload-674057" [e915c4f9-a44e-4d36-9bf4-033de2a512f2] Running
	I0924 01:10:55.344297   61070 system_pods.go:89] "kube-controller-manager-no-preload-674057" [71492ec7-1fd8-49a3-973d-b62141c7b768] Running
	I0924 01:10:55.344301   61070 system_pods.go:89] "kube-proxy-k54d7" [b67ac411-52b5-4d58-9db3-d2d92b63a21f] Running
	I0924 01:10:55.344304   61070 system_pods.go:89] "kube-scheduler-no-preload-674057" [927b2a09-4fb1-499c-a2e6-6185a88facdd] Running
	I0924 01:10:55.344310   61070 system_pods.go:89] "metrics-server-6867b74b74-w5j2x" [57fd868f-ab5c-495a-869a-45e8f81f4014] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:10:55.344315   61070 system_pods.go:89] "storage-provisioner" [341fd764-a3bd-4d28-bc6a-6ec9fa8a5347] Running
	I0924 01:10:55.344324   61070 system_pods.go:126] duration metric: took 202.334823ms to wait for k8s-apps to be running ...
	I0924 01:10:55.344361   61070 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 01:10:55.344406   61070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 01:10:55.361050   61070 system_svc.go:56] duration metric: took 16.6812ms WaitForService to wait for kubelet
	I0924 01:10:55.361082   61070 kubeadm.go:582] duration metric: took 5.13151522s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 01:10:55.361104   61070 node_conditions.go:102] verifying NodePressure condition ...
	I0924 01:10:55.541766   61070 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 01:10:55.541799   61070 node_conditions.go:123] node cpu capacity is 2
	I0924 01:10:55.541812   61070 node_conditions.go:105] duration metric: took 180.702708ms to run NodePressure ...
	I0924 01:10:55.541826   61070 start.go:241] waiting for startup goroutines ...
	I0924 01:10:55.541837   61070 start.go:246] waiting for cluster config update ...
	I0924 01:10:55.541850   61070 start.go:255] writing updated cluster config ...
	I0924 01:10:55.542100   61070 ssh_runner.go:195] Run: rm -f paused
	I0924 01:10:55.590629   61070 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0924 01:10:55.592850   61070 out.go:177] * Done! kubectl is now configured to use "no-preload-674057" cluster and "default" namespace by default
	I0924 01:11:28.148929   61989 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0924 01:11:28.149086   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:11:28.149360   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:11:33.150102   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:11:33.150283   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:11:43.151281   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:11:43.151540   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:12:03.152338   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:12:03.152562   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:12:43.151221   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:12:43.151503   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:12:43.151532   61989 kubeadm.go:310] 
	I0924 01:12:43.151585   61989 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0924 01:12:43.151645   61989 kubeadm.go:310] 		timed out waiting for the condition
	I0924 01:12:43.151655   61989 kubeadm.go:310] 
	I0924 01:12:43.151729   61989 kubeadm.go:310] 	This error is likely caused by:
	I0924 01:12:43.151779   61989 kubeadm.go:310] 		- The kubelet is not running
	I0924 01:12:43.151940   61989 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0924 01:12:43.151954   61989 kubeadm.go:310] 
	I0924 01:12:43.152095   61989 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0924 01:12:43.152154   61989 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0924 01:12:43.152201   61989 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0924 01:12:43.152207   61989 kubeadm.go:310] 
	I0924 01:12:43.152294   61989 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0924 01:12:43.152411   61989 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0924 01:12:43.152424   61989 kubeadm.go:310] 
	I0924 01:12:43.152565   61989 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0924 01:12:43.152653   61989 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0924 01:12:43.152718   61989 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0924 01:12:43.152794   61989 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0924 01:12:43.152802   61989 kubeadm.go:310] 
	I0924 01:12:43.153600   61989 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 01:12:43.153699   61989 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0924 01:12:43.153757   61989 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0924 01:12:43.153808   61989 kubeadm.go:394] duration metric: took 7m57.944266289s to StartCluster
	I0924 01:12:43.153845   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:12:43.153894   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:12:43.199866   61989 cri.go:89] found id: ""
	I0924 01:12:43.199896   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.199908   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:12:43.199916   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:12:43.199975   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:12:43.235387   61989 cri.go:89] found id: ""
	I0924 01:12:43.235420   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.235432   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:12:43.235441   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:12:43.235513   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:12:43.271255   61989 cri.go:89] found id: ""
	I0924 01:12:43.271290   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.271303   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:12:43.271312   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:12:43.271380   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:12:43.305842   61989 cri.go:89] found id: ""
	I0924 01:12:43.305870   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.305882   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:12:43.305891   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:12:43.305952   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:12:43.341956   61989 cri.go:89] found id: ""
	I0924 01:12:43.341983   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.342005   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:12:43.342013   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:12:43.342093   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:12:43.376362   61989 cri.go:89] found id: ""
	I0924 01:12:43.376399   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.376421   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:12:43.376431   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:12:43.376487   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:12:43.409351   61989 cri.go:89] found id: ""
	I0924 01:12:43.409378   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.409387   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:12:43.409392   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:12:43.409459   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:12:43.442446   61989 cri.go:89] found id: ""
	I0924 01:12:43.442479   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.442487   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:12:43.442497   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:12:43.442507   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:12:43.498980   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:12:43.499020   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:12:43.520090   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:12:43.520120   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:12:43.612212   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:12:43.612242   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:12:43.612255   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:12:43.727355   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:12:43.727395   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0924 01:12:43.770163   61989 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0924 01:12:43.770217   61989 out.go:270] * 
	W0924 01:12:43.770282   61989 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0924 01:12:43.770297   61989 out.go:270] * 
	W0924 01:12:43.771298   61989 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 01:12:43.775708   61989 out.go:201] 
	W0924 01:12:43.777139   61989 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0924 01:12:43.777186   61989 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0924 01:12:43.777214   61989 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0924 01:12:43.779580   61989 out.go:201] 
	
	
	==> CRI-O <==
	Sep 24 01:12:45 old-k8s-version-171598 crio[631]: time="2024-09-24 01:12:45.753828793Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140365753808666,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=46cc148e-f294-45ef-830c-17372f90b9a7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:12:45 old-k8s-version-171598 crio[631]: time="2024-09-24 01:12:45.754346377Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ee342f88-1563-489c-9ffa-a5c946a15948 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:12:45 old-k8s-version-171598 crio[631]: time="2024-09-24 01:12:45.754404350Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ee342f88-1563-489c-9ffa-a5c946a15948 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:12:45 old-k8s-version-171598 crio[631]: time="2024-09-24 01:12:45.754455347Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ee342f88-1563-489c-9ffa-a5c946a15948 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:12:45 old-k8s-version-171598 crio[631]: time="2024-09-24 01:12:45.790136145Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9b6e8f89-5a64-4efb-99b0-978011132da2 name=/runtime.v1.RuntimeService/Version
	Sep 24 01:12:45 old-k8s-version-171598 crio[631]: time="2024-09-24 01:12:45.790253731Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9b6e8f89-5a64-4efb-99b0-978011132da2 name=/runtime.v1.RuntimeService/Version
	Sep 24 01:12:45 old-k8s-version-171598 crio[631]: time="2024-09-24 01:12:45.791630468Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9f992659-80e7-44f5-a32c-62f3bb5d778a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:12:45 old-k8s-version-171598 crio[631]: time="2024-09-24 01:12:45.792178103Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140365792145228,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9f992659-80e7-44f5-a32c-62f3bb5d778a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:12:45 old-k8s-version-171598 crio[631]: time="2024-09-24 01:12:45.792872194Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e3f5d719-32f1-4227-b2fb-a6b4c17aaf1d name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:12:45 old-k8s-version-171598 crio[631]: time="2024-09-24 01:12:45.792949520Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e3f5d719-32f1-4227-b2fb-a6b4c17aaf1d name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:12:45 old-k8s-version-171598 crio[631]: time="2024-09-24 01:12:45.793034560Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e3f5d719-32f1-4227-b2fb-a6b4c17aaf1d name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:12:45 old-k8s-version-171598 crio[631]: time="2024-09-24 01:12:45.825795107Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a753ab41-28c8-4537-bca9-7309abb19c1b name=/runtime.v1.RuntimeService/Version
	Sep 24 01:12:45 old-k8s-version-171598 crio[631]: time="2024-09-24 01:12:45.825882638Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a753ab41-28c8-4537-bca9-7309abb19c1b name=/runtime.v1.RuntimeService/Version
	Sep 24 01:12:45 old-k8s-version-171598 crio[631]: time="2024-09-24 01:12:45.827061518Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b02cf222-39b2-472c-80bd-7dec688ef452 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:12:45 old-k8s-version-171598 crio[631]: time="2024-09-24 01:12:45.827476020Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140365827448447,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b02cf222-39b2-472c-80bd-7dec688ef452 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:12:45 old-k8s-version-171598 crio[631]: time="2024-09-24 01:12:45.828044902Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d46fce95-a941-4e69-8870-30dc92636377 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:12:45 old-k8s-version-171598 crio[631]: time="2024-09-24 01:12:45.828112457Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d46fce95-a941-4e69-8870-30dc92636377 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:12:45 old-k8s-version-171598 crio[631]: time="2024-09-24 01:12:45.828145540Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d46fce95-a941-4e69-8870-30dc92636377 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:12:45 old-k8s-version-171598 crio[631]: time="2024-09-24 01:12:45.860953202Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4f64c871-05a3-452c-86bb-0c7bcf2cbd2a name=/runtime.v1.RuntimeService/Version
	Sep 24 01:12:45 old-k8s-version-171598 crio[631]: time="2024-09-24 01:12:45.861136773Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4f64c871-05a3-452c-86bb-0c7bcf2cbd2a name=/runtime.v1.RuntimeService/Version
	Sep 24 01:12:45 old-k8s-version-171598 crio[631]: time="2024-09-24 01:12:45.862148445Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4aedc978-4b11-42bf-a1a8-1deea180adad name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:12:45 old-k8s-version-171598 crio[631]: time="2024-09-24 01:12:45.862502466Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140365862484344,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4aedc978-4b11-42bf-a1a8-1deea180adad name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:12:45 old-k8s-version-171598 crio[631]: time="2024-09-24 01:12:45.863115053Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=98e330f9-7898-48ad-8ede-75c59933bed6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:12:45 old-k8s-version-171598 crio[631]: time="2024-09-24 01:12:45.863162922Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=98e330f9-7898-48ad-8ede-75c59933bed6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:12:45 old-k8s-version-171598 crio[631]: time="2024-09-24 01:12:45.863193072Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=98e330f9-7898-48ad-8ede-75c59933bed6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Sep24 01:04] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051965] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.048547] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.882363] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.935977] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.544938] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.695614] systemd-fstab-generator[557]: Ignoring "noauto" option for root device
	[  +0.066394] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068035] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.210501] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.125361] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.257875] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +6.688915] systemd-fstab-generator[877]: Ignoring "noauto" option for root device
	[  +0.058357] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.792508] systemd-fstab-generator[998]: Ignoring "noauto" option for root device
	[ +11.354084] kauditd_printk_skb: 46 callbacks suppressed
	[Sep24 01:08] systemd-fstab-generator[5046]: Ignoring "noauto" option for root device
	[Sep24 01:10] systemd-fstab-generator[5322]: Ignoring "noauto" option for root device
	[  +0.074932] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 01:12:46 up 8 min,  0 users,  load average: 0.08, 0.16, 0.09
	Linux old-k8s-version-171598 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 24 01:12:42 old-k8s-version-171598 kubelet[5500]: net.cgoLookupIPCNAME(0x48ab5d6, 0x3, 0xc00090d110, 0x1f, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	Sep 24 01:12:42 old-k8s-version-171598 kubelet[5500]:         /usr/local/go/src/net/cgo_unix.go:161 +0x16b
	Sep 24 01:12:42 old-k8s-version-171598 kubelet[5500]: net.cgoIPLookup(0xc0009da1e0, 0x48ab5d6, 0x3, 0xc00090d110, 0x1f)
	Sep 24 01:12:42 old-k8s-version-171598 kubelet[5500]:         /usr/local/go/src/net/cgo_unix.go:218 +0x67
	Sep 24 01:12:42 old-k8s-version-171598 kubelet[5500]: created by net.cgoLookupIP
	Sep 24 01:12:42 old-k8s-version-171598 kubelet[5500]:         /usr/local/go/src/net/cgo_unix.go:228 +0xc7
	Sep 24 01:12:42 old-k8s-version-171598 kubelet[5500]: goroutine 146 [runnable]:
	Sep 24 01:12:42 old-k8s-version-171598 kubelet[5500]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc0000f8f50, 0x1, 0x0, 0x0, 0x0, 0x0)
	Sep 24 01:12:42 old-k8s-version-171598 kubelet[5500]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Sep 24 01:12:42 old-k8s-version-171598 kubelet[5500]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000030c60, 0x0, 0x0)
	Sep 24 01:12:42 old-k8s-version-171598 kubelet[5500]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Sep 24 01:12:42 old-k8s-version-171598 kubelet[5500]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc0005e6700)
	Sep 24 01:12:42 old-k8s-version-171598 kubelet[5500]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Sep 24 01:12:42 old-k8s-version-171598 kubelet[5500]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Sep 24 01:12:42 old-k8s-version-171598 kubelet[5500]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Sep 24 01:12:42 old-k8s-version-171598 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Sep 24 01:12:42 old-k8s-version-171598 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Sep 24 01:12:43 old-k8s-version-171598 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Sep 24 01:12:43 old-k8s-version-171598 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Sep 24 01:12:43 old-k8s-version-171598 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Sep 24 01:12:43 old-k8s-version-171598 kubelet[5548]: I0924 01:12:43.556021    5548 server.go:416] Version: v1.20.0
	Sep 24 01:12:43 old-k8s-version-171598 kubelet[5548]: I0924 01:12:43.556385    5548 server.go:837] Client rotation is on, will bootstrap in background
	Sep 24 01:12:43 old-k8s-version-171598 kubelet[5548]: I0924 01:12:43.558422    5548 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Sep 24 01:12:43 old-k8s-version-171598 kubelet[5548]: W0924 01:12:43.559537    5548 manager.go:159] Cannot detect current cgroup on cgroup v2
	Sep 24 01:12:43 old-k8s-version-171598 kubelet[5548]: I0924 01:12:43.559580    5548 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-171598 -n old-k8s-version-171598
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-171598 -n old-k8s-version-171598: exit status 2 (224.993735ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-171598" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (726.53s)
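Note: the failure above is the kubelet never becoming healthy on the old-k8s-version (v1.20.0) node, and the kubeadm and minikube output already names the commands to investigate it. A minimal sketch, assuming shell access to the node (for example via minikube ssh -p old-k8s-version-171598) and using only the commands suggested in the log above:

	# on the node: check the kubelet service and its recent logs
	systemctl status kubelet
	journalctl -xeu kubelet
	# on the node: list any Kubernetes containers CRI-O managed to start
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# from the host: retry the start with the cgroup-driver hint from the suggestion in the log
	minikube start -p old-k8s-version-171598 --extra-config=kubelet.cgroup-driver=systemd

The kubelet excerpt above (main process exiting with status 255, restart counter at 20) suggests the journalctl output on the node is the first place to look.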

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-465341 -n default-k8s-diff-port-465341
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-09-24 01:17:57.738304893 +0000 UTC m=+6019.169387345
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-465341 -n default-k8s-diff-port-465341
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-465341 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-465341 logs -n 25: (2.178171402s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p old-k8s-version-171598                              | old-k8s-version-171598       | jenkins | v1.34.0 | 24 Sep 24 00:54 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-075175                              | stopped-upgrade-075175       | jenkins | v1.34.0 | 24 Sep 24 00:54 UTC | 24 Sep 24 00:55 UTC |
	| start   | -p no-preload-674057                                   | no-preload-674057            | jenkins | v1.34.0 | 24 Sep 24 00:55 UTC | 24 Sep 24 00:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-619300                           | kubernetes-upgrade-619300    | jenkins | v1.34.0 | 24 Sep 24 00:55 UTC | 24 Sep 24 00:55 UTC |
	| start   | -p embed-certs-650507                                  | embed-certs-650507           | jenkins | v1.34.0 | 24 Sep 24 00:55 UTC | 24 Sep 24 00:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-811247                              | cert-expiration-811247       | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC | 24 Sep 24 00:56 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-674057             | no-preload-674057            | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC | 24 Sep 24 00:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-674057                                   | no-preload-674057            | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-811247                              | cert-expiration-811247       | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC | 24 Sep 24 00:56 UTC |
	| delete  | -p                                                     | disable-driver-mounts-319683 | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC | 24 Sep 24 00:56 UTC |
	|         | disable-driver-mounts-319683                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-465341 | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC | 24 Sep 24 00:57 UTC |
	|         | default-k8s-diff-port-465341                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-650507            | embed-certs-650507           | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC | 24 Sep 24 00:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-650507                                  | embed-certs-650507           | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-465341  | default-k8s-diff-port-465341 | jenkins | v1.34.0 | 24 Sep 24 00:57 UTC | 24 Sep 24 00:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-465341 | jenkins | v1.34.0 | 24 Sep 24 00:57 UTC |                     |
	|         | default-k8s-diff-port-465341                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-674057                  | no-preload-674057            | jenkins | v1.34.0 | 24 Sep 24 00:58 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-674057                                   | no-preload-674057            | jenkins | v1.34.0 | 24 Sep 24 00:58 UTC | 24 Sep 24 01:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-650507                 | embed-certs-650507           | jenkins | v1.34.0 | 24 Sep 24 00:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-171598        | old-k8s-version-171598       | jenkins | v1.34.0 | 24 Sep 24 00:59 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-650507                                  | embed-certs-650507           | jenkins | v1.34.0 | 24 Sep 24 00:59 UTC | 24 Sep 24 01:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-465341       | default-k8s-diff-port-465341 | jenkins | v1.34.0 | 24 Sep 24 00:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-465341 | jenkins | v1.34.0 | 24 Sep 24 01:00 UTC | 24 Sep 24 01:08 UTC |
	|         | default-k8s-diff-port-465341                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-171598                              | old-k8s-version-171598       | jenkins | v1.34.0 | 24 Sep 24 01:00 UTC | 24 Sep 24 01:00 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-171598             | old-k8s-version-171598       | jenkins | v1.34.0 | 24 Sep 24 01:00 UTC | 24 Sep 24 01:00 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-171598                              | old-k8s-version-171598       | jenkins | v1.34.0 | 24 Sep 24 01:00 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 01:00:40
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 01:00:40.983605   61989 out.go:345] Setting OutFile to fd 1 ...
	I0924 01:00:40.983716   61989 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 01:00:40.983722   61989 out.go:358] Setting ErrFile to fd 2...
	I0924 01:00:40.983728   61989 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 01:00:40.983918   61989 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7623/.minikube/bin
	I0924 01:00:40.984500   61989 out.go:352] Setting JSON to false
	I0924 01:00:40.985412   61989 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6185,"bootTime":1727133456,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0924 01:00:40.985513   61989 start.go:139] virtualization: kvm guest
	I0924 01:00:40.987848   61989 out.go:177] * [old-k8s-version-171598] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0924 01:00:40.989366   61989 out.go:177]   - MINIKUBE_LOCATION=19696
	I0924 01:00:40.989467   61989 notify.go:220] Checking for updates...
	I0924 01:00:40.992462   61989 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 01:00:40.994144   61989 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 01:00:40.995782   61989 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7623/.minikube
	I0924 01:00:40.997503   61989 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0924 01:00:40.999038   61989 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 01:00:41.000959   61989 config.go:182] Loaded profile config "old-k8s-version-171598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0924 01:00:41.001315   61989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:00:41.001388   61989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:00:41.017304   61989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41055
	I0924 01:00:41.017751   61989 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:00:41.018320   61989 main.go:141] libmachine: Using API Version  1
	I0924 01:00:41.018355   61989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:00:41.018708   61989 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:00:41.018964   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:00:41.021075   61989 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0924 01:00:41.022764   61989 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 01:00:41.023156   61989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:00:41.023204   61989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:00:41.038764   61989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40545
	I0924 01:00:41.039238   61989 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:00:41.039828   61989 main.go:141] libmachine: Using API Version  1
	I0924 01:00:41.039856   61989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:00:41.040272   61989 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:00:41.040569   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:00:41.078622   61989 out.go:177] * Using the kvm2 driver based on existing profile
	I0924 01:00:41.079930   61989 start.go:297] selected driver: kvm2
	I0924 01:00:41.079945   61989 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-171598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-171598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 01:00:41.080076   61989 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 01:00:41.080841   61989 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 01:00:41.080927   61989 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19696-7623/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0924 01:00:41.096851   61989 install.go:137] /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0924 01:00:41.097306   61989 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 01:00:41.097345   61989 cni.go:84] Creating CNI manager for ""
	I0924 01:00:41.097410   61989 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:00:41.097465   61989 start.go:340] cluster config:
	{Name:old-k8s-version-171598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-171598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 01:00:41.097610   61989 iso.go:125] acquiring lock: {Name:mk8a983e49e920eac32e0f79c34143c3d478115e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 01:00:41.099797   61989 out.go:177] * Starting "old-k8s-version-171598" primary control-plane node in "old-k8s-version-171598" cluster
	I0924 01:00:39.376584   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:00:41.101644   61989 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0924 01:00:41.101691   61989 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0924 01:00:41.101704   61989 cache.go:56] Caching tarball of preloaded images
	I0924 01:00:41.101801   61989 preload.go:172] Found /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0924 01:00:41.101816   61989 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0924 01:00:41.101922   61989 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/config.json ...
	I0924 01:00:41.102126   61989 start.go:360] acquireMachinesLock for old-k8s-version-171598: {Name:mkdd0eb053efc5a47fd01c8410d7f603dccb8d0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 01:00:45.456606   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:00:48.528618   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:00:54.608639   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:00:57.680645   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:03.760641   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:06.832676   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:12.912635   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:15.984629   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:22.064669   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:25.136609   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:31.216643   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:34.288667   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:40.368636   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:43.440700   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:49.520634   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:52.592658   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:58.672637   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:01.744679   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:07.824597   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:10.896693   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:16.976656   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:20.048675   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:26.128638   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:29.200595   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:35.280645   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:38.352665   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:44.432606   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:47.504721   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:53.584645   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:56.656617   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:03:02.736686   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:03:05.808671   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:03:11.888586   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:03:14.960688   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:03:21.040639   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:03:24.112705   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:03:30.192631   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:03:33.264655   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:03:36.269218   61323 start.go:364] duration metric: took 4m25.932369998s to acquireMachinesLock for "embed-certs-650507"
	I0924 01:03:36.269290   61323 start.go:96] Skipping create...Using existing machine configuration
	I0924 01:03:36.269298   61323 fix.go:54] fixHost starting: 
	I0924 01:03:36.269661   61323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:03:36.269714   61323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:03:36.285429   61323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45085
	I0924 01:03:36.285943   61323 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:03:36.286516   61323 main.go:141] libmachine: Using API Version  1
	I0924 01:03:36.286557   61323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:03:36.286885   61323 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:03:36.287078   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:03:36.287213   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetState
	I0924 01:03:36.288895   61323 fix.go:112] recreateIfNeeded on embed-certs-650507: state=Stopped err=<nil>
	I0924 01:03:36.288917   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	W0924 01:03:36.289113   61323 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 01:03:36.291435   61323 out.go:177] * Restarting existing kvm2 VM for "embed-certs-650507" ...
	I0924 01:03:36.266390   61070 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 01:03:36.266435   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetMachineName
	I0924 01:03:36.266788   61070 buildroot.go:166] provisioning hostname "no-preload-674057"
	I0924 01:03:36.266816   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetMachineName
	I0924 01:03:36.267022   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:03:36.269105   61070 machine.go:96] duration metric: took 4m37.426687547s to provisionDockerMachine
	I0924 01:03:36.269142   61070 fix.go:56] duration metric: took 4m37.448766856s for fixHost
	I0924 01:03:36.269148   61070 start.go:83] releasing machines lock for "no-preload-674057", held for 4m37.448847609s
	W0924 01:03:36.269167   61070 start.go:714] error starting host: provision: host is not running
	W0924 01:03:36.269264   61070 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0924 01:03:36.269274   61070 start.go:729] Will try again in 5 seconds ...
	I0924 01:03:36.293006   61323 main.go:141] libmachine: (embed-certs-650507) Calling .Start
	I0924 01:03:36.293199   61323 main.go:141] libmachine: (embed-certs-650507) Ensuring networks are active...
	I0924 01:03:36.294032   61323 main.go:141] libmachine: (embed-certs-650507) Ensuring network default is active
	I0924 01:03:36.294359   61323 main.go:141] libmachine: (embed-certs-650507) Ensuring network mk-embed-certs-650507 is active
	I0924 01:03:36.294718   61323 main.go:141] libmachine: (embed-certs-650507) Getting domain xml...
	I0924 01:03:36.295407   61323 main.go:141] libmachine: (embed-certs-650507) Creating domain...
	I0924 01:03:37.516049   61323 main.go:141] libmachine: (embed-certs-650507) Waiting to get IP...
	I0924 01:03:37.516959   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:37.517374   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:37.517443   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:37.517352   62594 retry.go:31] will retry after 278.072635ms: waiting for machine to come up
	I0924 01:03:37.796796   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:37.797276   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:37.797301   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:37.797242   62594 retry.go:31] will retry after 387.413297ms: waiting for machine to come up
	I0924 01:03:38.185869   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:38.186239   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:38.186258   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:38.186193   62594 retry.go:31] will retry after 363.798568ms: waiting for machine to come up
	I0924 01:03:38.551772   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:38.552181   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:38.552221   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:38.552122   62594 retry.go:31] will retry after 392.798012ms: waiting for machine to come up
	I0924 01:03:38.946523   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:38.947069   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:38.947097   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:38.947018   62594 retry.go:31] will retry after 541.413772ms: waiting for machine to come up
	I0924 01:03:39.489873   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:39.490278   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:39.490307   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:39.490226   62594 retry.go:31] will retry after 804.62107ms: waiting for machine to come up
	I0924 01:03:41.271024   61070 start.go:360] acquireMachinesLock for no-preload-674057: {Name:mkdd0eb053efc5a47fd01c8410d7f603dccb8d0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 01:03:40.296290   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:40.296775   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:40.296806   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:40.296726   62594 retry.go:31] will retry after 882.018637ms: waiting for machine to come up
	I0924 01:03:41.180799   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:41.181242   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:41.181263   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:41.181197   62594 retry.go:31] will retry after 961.194045ms: waiting for machine to come up
	I0924 01:03:42.143878   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:42.144354   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:42.144379   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:42.144270   62594 retry.go:31] will retry after 1.647837023s: waiting for machine to come up
	I0924 01:03:43.793458   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:43.793892   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:43.793933   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:43.793873   62594 retry.go:31] will retry after 1.751902059s: waiting for machine to come up
	I0924 01:03:45.547905   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:45.548356   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:45.548388   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:45.548313   62594 retry.go:31] will retry after 2.380106471s: waiting for machine to come up
	I0924 01:03:47.931021   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:47.931513   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:47.931537   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:47.931456   62594 retry.go:31] will retry after 2.395516641s: waiting for machine to come up
	I0924 01:03:50.328214   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:50.328766   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:50.328791   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:50.328729   62594 retry.go:31] will retry after 4.41219579s: waiting for machine to come up
	I0924 01:03:54.745159   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.745572   61323 main.go:141] libmachine: (embed-certs-650507) Found IP for machine: 192.168.39.104
	I0924 01:03:54.745606   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has current primary IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.745615   61323 main.go:141] libmachine: (embed-certs-650507) Reserving static IP address...
	I0924 01:03:54.746020   61323 main.go:141] libmachine: (embed-certs-650507) Reserved static IP address: 192.168.39.104
	I0924 01:03:54.746042   61323 main.go:141] libmachine: (embed-certs-650507) Waiting for SSH to be available...
	I0924 01:03:54.746067   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "embed-certs-650507", mac: "52:54:00:46:07:2d", ip: "192.168.39.104"} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:54.746134   61323 main.go:141] libmachine: (embed-certs-650507) DBG | skip adding static IP to network mk-embed-certs-650507 - found existing host DHCP lease matching {name: "embed-certs-650507", mac: "52:54:00:46:07:2d", ip: "192.168.39.104"}
	I0924 01:03:54.746159   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Getting to WaitForSSH function...
	I0924 01:03:54.748464   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.748871   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:54.748906   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.749083   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Using SSH client type: external
	I0924 01:03:54.749118   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Using SSH private key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa (-rw-------)
	I0924 01:03:54.749153   61323 main.go:141] libmachine: (embed-certs-650507) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 01:03:54.749165   61323 main.go:141] libmachine: (embed-certs-650507) DBG | About to run SSH command:
	I0924 01:03:54.749177   61323 main.go:141] libmachine: (embed-certs-650507) DBG | exit 0
	I0924 01:03:54.872532   61323 main.go:141] libmachine: (embed-certs-650507) DBG | SSH cmd err, output: <nil>: 
	I0924 01:03:54.872869   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetConfigRaw
	I0924 01:03:54.873480   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetIP
	I0924 01:03:54.876545   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.876922   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:54.876953   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.877204   61323 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/embed-certs-650507/config.json ...
	I0924 01:03:54.877443   61323 machine.go:93] provisionDockerMachine start ...
	I0924 01:03:54.877467   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:03:54.877683   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:54.879873   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.880200   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:54.880221   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.880375   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:03:54.880546   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:54.880681   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:54.880866   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:03:54.881002   61323 main.go:141] libmachine: Using SSH client type: native
	I0924 01:03:54.881194   61323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0924 01:03:54.881207   61323 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 01:03:54.984605   61323 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 01:03:54.984636   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetMachineName
	I0924 01:03:54.984922   61323 buildroot.go:166] provisioning hostname "embed-certs-650507"
	I0924 01:03:54.984948   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetMachineName
	I0924 01:03:54.985185   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:54.988284   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.988699   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:54.988725   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.988857   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:03:54.989069   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:54.989344   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:54.989529   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:03:54.989731   61323 main.go:141] libmachine: Using SSH client type: native
	I0924 01:03:54.989899   61323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0924 01:03:54.989913   61323 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-650507 && echo "embed-certs-650507" | sudo tee /etc/hostname
	I0924 01:03:55.106214   61323 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-650507
	
	I0924 01:03:55.106273   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:55.109000   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.109310   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.109334   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.109498   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:03:55.109646   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.109839   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.109989   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:03:55.110123   61323 main.go:141] libmachine: Using SSH client type: native
	I0924 01:03:55.110303   61323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0924 01:03:55.110318   61323 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-650507' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-650507/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-650507' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 01:03:55.220699   61323 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 01:03:55.220738   61323 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19696-7623/.minikube CaCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19696-7623/.minikube}
	I0924 01:03:55.220755   61323 buildroot.go:174] setting up certificates
	I0924 01:03:55.220763   61323 provision.go:84] configureAuth start
	I0924 01:03:55.220771   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetMachineName
	I0924 01:03:55.221112   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetIP
	I0924 01:03:55.224166   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.224603   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.224634   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.224839   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:55.226847   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.227167   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.227194   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.227308   61323 provision.go:143] copyHostCerts
	I0924 01:03:55.227386   61323 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem, removing ...
	I0924 01:03:55.227409   61323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0924 01:03:55.227490   61323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem (1082 bytes)
	I0924 01:03:55.227641   61323 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem, removing ...
	I0924 01:03:55.227653   61323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0924 01:03:55.227695   61323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem (1123 bytes)
	I0924 01:03:55.227781   61323 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem, removing ...
	I0924 01:03:55.227791   61323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0924 01:03:55.227826   61323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem (1679 bytes)
	I0924 01:03:55.227909   61323 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem org=jenkins.embed-certs-650507 san=[127.0.0.1 192.168.39.104 embed-certs-650507 localhost minikube]
	I0924 01:03:55.917061   61699 start.go:364] duration metric: took 3m46.693519233s to acquireMachinesLock for "default-k8s-diff-port-465341"
	I0924 01:03:55.917135   61699 start.go:96] Skipping create...Using existing machine configuration
	I0924 01:03:55.917144   61699 fix.go:54] fixHost starting: 
	I0924 01:03:55.917553   61699 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:03:55.917606   61699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:03:55.937566   61699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37613
	I0924 01:03:55.937971   61699 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:03:55.938529   61699 main.go:141] libmachine: Using API Version  1
	I0924 01:03:55.938556   61699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:03:55.938923   61699 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:03:55.939182   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:03:55.939365   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetState
	I0924 01:03:55.941155   61699 fix.go:112] recreateIfNeeded on default-k8s-diff-port-465341: state=Stopped err=<nil>
	I0924 01:03:55.941197   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	W0924 01:03:55.941417   61699 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 01:03:55.943640   61699 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-465341" ...
	I0924 01:03:55.309866   61323 provision.go:177] copyRemoteCerts
	I0924 01:03:55.309928   61323 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 01:03:55.309955   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:55.312946   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.313365   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.313388   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.313638   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:03:55.313889   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.314062   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:03:55.314206   61323 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa Username:docker}
	I0924 01:03:55.394427   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0924 01:03:55.420595   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0924 01:03:55.444377   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 01:03:55.467261   61323 provision.go:87] duration metric: took 246.485242ms to configureAuth
	I0924 01:03:55.467302   61323 buildroot.go:189] setting minikube options for container-runtime
	I0924 01:03:55.467483   61323 config.go:182] Loaded profile config "embed-certs-650507": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 01:03:55.467552   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:55.470146   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.470539   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.470572   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.470719   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:03:55.470961   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.471101   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.471299   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:03:55.471450   61323 main.go:141] libmachine: Using SSH client type: native
	I0924 01:03:55.471653   61323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0924 01:03:55.471676   61323 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 01:03:55.688189   61323 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 01:03:55.688218   61323 machine.go:96] duration metric: took 810.761675ms to provisionDockerMachine
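	The provisioning step just completed writes /etc/sysconfig/crio.minikube so that CRI-O treats the service CIDR 10.96.0.0/12 as an insecure registry, then restarts the runtime. A minimal sketch for checking the result by hand over SSH, using only paths that already appear in this log:
	  sudo cat /etc/sysconfig/crio.minikube   # expect: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	  sudo systemctl is-active crio           # expect: active, after the restart issued above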
	I0924 01:03:55.688230   61323 start.go:293] postStartSetup for "embed-certs-650507" (driver="kvm2")
	I0924 01:03:55.688244   61323 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 01:03:55.688266   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:03:55.688659   61323 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 01:03:55.688690   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:55.691375   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.691761   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.691791   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.691881   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:03:55.692105   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.692309   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:03:55.692453   61323 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa Username:docker}
	I0924 01:03:55.775412   61323 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 01:03:55.779423   61323 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 01:03:55.779448   61323 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/addons for local assets ...
	I0924 01:03:55.779536   61323 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/files for local assets ...
	I0924 01:03:55.779629   61323 filesync.go:149] local asset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> 147932.pem in /etc/ssl/certs
	I0924 01:03:55.779742   61323 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 01:03:55.788717   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /etc/ssl/certs/147932.pem (1708 bytes)
	I0924 01:03:55.811673   61323 start.go:296] duration metric: took 123.428914ms for postStartSetup
	I0924 01:03:55.811717   61323 fix.go:56] duration metric: took 19.542419045s for fixHost
	I0924 01:03:55.811743   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:55.814745   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.815034   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.815062   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.815247   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:03:55.815449   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.815634   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.815851   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:03:55.816012   61323 main.go:141] libmachine: Using SSH client type: native
	I0924 01:03:55.816168   61323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0924 01:03:55.816178   61323 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 01:03:55.916845   61323 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727139835.894204557
	
	I0924 01:03:55.916883   61323 fix.go:216] guest clock: 1727139835.894204557
	I0924 01:03:55.916896   61323 fix.go:229] Guest: 2024-09-24 01:03:55.894204557 +0000 UTC Remote: 2024-09-24 01:03:55.811721448 +0000 UTC m=+285.612741728 (delta=82.483109ms)
	I0924 01:03:55.916935   61323 fix.go:200] guest clock delta is within tolerance: 82.483109ms
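	The delta above is simply the guest timestamp minus the host-side reference captured at the end of fixHost: 1727139835.894204557 - 1727139835.811721448 = 0.082483109 s, inside the allowed drift, so no clock resync is forced. A rough sketch of the same comparison (GUEST_KEY and GUEST_IP are placeholders for the key path and VM address shown earlier in this log):
	  guest_ts=$(ssh -i "$GUEST_KEY" docker@"$GUEST_IP" 'date +%s.%N')
	  host_ts=$(date +%s.%N)
	  awk -v g="$guest_ts" -v h="$host_ts" 'BEGIN{printf "delta: %.6f s\n", g-h}'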
	I0924 01:03:55.916945   61323 start.go:83] releasing machines lock for "embed-certs-650507", held for 19.6476761s
	I0924 01:03:55.916990   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:03:55.917314   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetIP
	I0924 01:03:55.920105   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.920550   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.920583   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.920832   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:03:55.921327   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:03:55.921510   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:03:55.921578   61323 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 01:03:55.921634   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:55.921747   61323 ssh_runner.go:195] Run: cat /version.json
	I0924 01:03:55.921771   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:55.924238   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.924430   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.924717   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.924741   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.924775   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.924792   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.924953   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:03:55.925061   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:03:55.925153   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.925277   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.925360   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:03:55.925439   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:03:55.925582   61323 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa Username:docker}
	I0924 01:03:55.925626   61323 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa Username:docker}
	I0924 01:03:56.005229   61323 ssh_runner.go:195] Run: systemctl --version
	I0924 01:03:56.046189   61323 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 01:03:56.187701   61323 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 01:03:56.193313   61323 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 01:03:56.193379   61323 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 01:03:56.209278   61323 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 01:03:56.209298   61323 start.go:495] detecting cgroup driver to use...
	I0924 01:03:56.209363   61323 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 01:03:56.226995   61323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 01:03:56.241102   61323 docker.go:217] disabling cri-docker service (if available) ...
	I0924 01:03:56.241160   61323 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 01:03:56.255002   61323 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 01:03:56.269805   61323 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 01:03:56.387382   61323 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 01:03:56.545138   61323 docker.go:233] disabling docker service ...
	I0924 01:03:56.545220   61323 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 01:03:56.559017   61323 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 01:03:56.571939   61323 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 01:03:56.694139   61323 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 01:03:56.811253   61323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 01:03:56.825480   61323 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 01:03:56.842777   61323 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 01:03:56.842830   61323 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:03:56.852387   61323 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 01:03:56.852447   61323 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:03:56.862702   61323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:03:56.872790   61323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:03:56.882864   61323 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 01:03:56.893029   61323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:03:56.903314   61323 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:03:56.923491   61323 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:03:56.933424   61323 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 01:03:56.944496   61323 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 01:03:56.944561   61323 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 01:03:56.957077   61323 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 01:03:56.968602   61323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:03:57.080955   61323 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 01:03:57.179826   61323 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 01:03:57.179900   61323 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 01:03:57.184652   61323 start.go:563] Will wait 60s for crictl version
	I0924 01:03:57.184716   61323 ssh_runner.go:195] Run: which crictl
	I0924 01:03:57.190300   61323 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 01:03:57.239310   61323 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 01:03:57.239371   61323 ssh_runner.go:195] Run: crio --version
	I0924 01:03:57.266833   61323 ssh_runner.go:195] Run: crio --version
	I0924 01:03:57.301876   61323 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
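	To summarize the runtime prep above: /etc/crio/crio.conf.d/02-crio.conf is edited in place so pause_image points at registry.k8s.io/pause:3.10, cgroup_manager is "cgroupfs" with conmon_cgroup = "pod", and net.ipv4.ip_unprivileged_port_start=0 is appended to default_sysctls; br_netfilter is loaded and IPv4 forwarding enabled before crio is restarted. A quick verification sketch (not part of the logged flow):
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	  sudo sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables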
	I0924 01:03:55.945290   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .Start
	I0924 01:03:55.945498   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Ensuring networks are active...
	I0924 01:03:55.946346   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Ensuring network default is active
	I0924 01:03:55.946726   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Ensuring network mk-default-k8s-diff-port-465341 is active
	I0924 01:03:55.947152   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Getting domain xml...
	I0924 01:03:55.947872   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Creating domain...
	I0924 01:03:57.236194   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting to get IP...
	I0924 01:03:57.237037   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:03:57.237445   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:03:57.237497   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:03:57.237413   62713 retry.go:31] will retry after 286.244795ms: waiting for machine to come up
	I0924 01:03:57.525009   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:03:57.525595   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:03:57.525621   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:03:57.525548   62713 retry.go:31] will retry after 273.807213ms: waiting for machine to come up
	I0924 01:03:57.801217   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:03:57.801734   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:03:57.801756   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:03:57.801701   62713 retry.go:31] will retry after 371.291567ms: waiting for machine to come up
	I0924 01:03:58.174283   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:03:58.174746   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:03:58.174781   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:03:58.174692   62713 retry.go:31] will retry after 595.157579ms: waiting for machine to come up
	I0924 01:03:58.771428   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:03:58.771900   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:03:58.771925   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:03:58.771862   62713 retry.go:31] will retry after 734.305784ms: waiting for machine to come up
	I0924 01:03:57.303135   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetIP
	I0924 01:03:57.306110   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:57.306598   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:57.306624   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:57.306783   61323 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0924 01:03:57.310829   61323 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:03:57.322605   61323 kubeadm.go:883] updating cluster {Name:embed-certs-650507 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-650507 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 01:03:57.322715   61323 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 01:03:57.322761   61323 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 01:03:57.358040   61323 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0924 01:03:57.358104   61323 ssh_runner.go:195] Run: which lz4
	I0924 01:03:57.361948   61323 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 01:03:57.365911   61323 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 01:03:57.365950   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0924 01:03:58.651636   61323 crio.go:462] duration metric: took 1.289721413s to copy over tarball
	I0924 01:03:58.651708   61323 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0924 01:03:59.507803   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:03:59.508308   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:03:59.508356   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:03:59.508237   62713 retry.go:31] will retry after 875.394603ms: waiting for machine to come up
	I0924 01:04:00.385279   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:00.385713   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:04:00.385748   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:04:00.385655   62713 retry.go:31] will retry after 885.980109ms: waiting for machine to come up
	I0924 01:04:01.273114   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:01.273545   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:04:01.273590   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:04:01.273535   62713 retry.go:31] will retry after 935.451975ms: waiting for machine to come up
	I0924 01:04:02.210920   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:02.211399   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:04:02.211423   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:04:02.211331   62713 retry.go:31] will retry after 1.254573538s: waiting for machine to come up
	I0924 01:04:03.467027   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:03.467593   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:04:03.467626   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:04:03.467488   62713 retry.go:31] will retry after 2.044247818s: waiting for machine to come up
	I0924 01:04:00.805580   61323 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.153837858s)
	I0924 01:04:00.805608   61323 crio.go:469] duration metric: took 2.153947595s to extract the tarball
	I0924 01:04:00.805617   61323 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0924 01:04:00.846074   61323 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 01:04:00.895803   61323 crio.go:514] all images are preloaded for cri-o runtime.
	I0924 01:04:00.895833   61323 cache_images.go:84] Images are preloaded, skipping loading
	I0924 01:04:00.895842   61323 kubeadm.go:934] updating node { 192.168.39.104 8443 v1.31.1 crio true true} ...
	I0924 01:04:00.895966   61323 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-650507 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-650507 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 01:04:00.896041   61323 ssh_runner.go:195] Run: crio config
	I0924 01:04:00.941958   61323 cni.go:84] Creating CNI manager for ""
	I0924 01:04:00.941985   61323 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:04:00.941998   61323 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 01:04:00.942029   61323 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.104 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-650507 NodeName:embed-certs-650507 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 01:04:00.942202   61323 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.104
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-650507"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.104
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.104"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 01:04:00.942292   61323 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 01:04:00.952748   61323 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 01:04:00.952853   61323 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 01:04:00.962984   61323 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0924 01:04:00.980030   61323 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 01:04:01.001571   61323 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
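	The kubeadm config rendered above is what lands in /var/tmp/minikube/kubeadm.yaml.new here and is copied over the live kubeadm.yaml later in the restart path. Outside of minikube it can be sanity-checked with kubeadm itself; a sketch, assuming the kubeadm config validate subcommand shipped with this kubeadm release:
	  sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new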
	I0924 01:04:01.018760   61323 ssh_runner.go:195] Run: grep 192.168.39.104	control-plane.minikube.internal$ /etc/hosts
	I0924 01:04:01.022770   61323 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.104	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:04:01.034816   61323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:04:01.157888   61323 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 01:04:01.175883   61323 certs.go:68] Setting up /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/embed-certs-650507 for IP: 192.168.39.104
	I0924 01:04:01.175911   61323 certs.go:194] generating shared ca certs ...
	I0924 01:04:01.175937   61323 certs.go:226] acquiring lock for ca certs: {Name:mk91de42c259e96c17f08dd58c70c00821bd0735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:04:01.176134   61323 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key
	I0924 01:04:01.176198   61323 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key
	I0924 01:04:01.176211   61323 certs.go:256] generating profile certs ...
	I0924 01:04:01.176324   61323 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/embed-certs-650507/client.key
	I0924 01:04:01.176441   61323 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/embed-certs-650507/apiserver.key.86682f38
	I0924 01:04:01.176515   61323 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/embed-certs-650507/proxy-client.key
	I0924 01:04:01.176640   61323 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem (1338 bytes)
	W0924 01:04:01.176669   61323 certs.go:480] ignoring /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793_empty.pem, impossibly tiny 0 bytes
	I0924 01:04:01.176678   61323 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem (1675 bytes)
	I0924 01:04:01.176713   61323 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem (1082 bytes)
	I0924 01:04:01.176749   61323 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem (1123 bytes)
	I0924 01:04:01.176778   61323 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem (1679 bytes)
	I0924 01:04:01.176987   61323 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem (1708 bytes)
	I0924 01:04:01.177918   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 01:04:01.221682   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 01:04:01.266005   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 01:04:01.299467   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0924 01:04:01.324598   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/embed-certs-650507/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0924 01:04:01.349526   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/embed-certs-650507/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 01:04:01.385589   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/embed-certs-650507/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 01:04:01.409713   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/embed-certs-650507/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0924 01:04:01.433745   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem --> /usr/share/ca-certificates/14793.pem (1338 bytes)
	I0924 01:04:01.457493   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /usr/share/ca-certificates/147932.pem (1708 bytes)
	I0924 01:04:01.482197   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 01:04:01.505740   61323 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 01:04:01.524029   61323 ssh_runner.go:195] Run: openssl version
	I0924 01:04:01.530147   61323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 01:04:01.541117   61323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:01.545823   61323 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:01.545894   61323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:01.551638   61323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 01:04:01.562373   61323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14793.pem && ln -fs /usr/share/ca-certificates/14793.pem /etc/ssl/certs/14793.pem"
	I0924 01:04:01.573502   61323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14793.pem
	I0924 01:04:01.578561   61323 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 23:55 /usr/share/ca-certificates/14793.pem
	I0924 01:04:01.578634   61323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14793.pem
	I0924 01:04:01.584415   61323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14793.pem /etc/ssl/certs/51391683.0"
	I0924 01:04:01.595312   61323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147932.pem && ln -fs /usr/share/ca-certificates/147932.pem /etc/ssl/certs/147932.pem"
	I0924 01:04:01.606503   61323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147932.pem
	I0924 01:04:01.611530   61323 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 23:55 /usr/share/ca-certificates/147932.pem
	I0924 01:04:01.611602   61323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147932.pem
	I0924 01:04:01.618484   61323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147932.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 01:04:01.629332   61323 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 01:04:01.634238   61323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 01:04:01.640266   61323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 01:04:01.646306   61323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 01:04:01.652510   61323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 01:04:01.658237   61323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 01:04:01.663962   61323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
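	Each openssl run above uses -checkend 86400, which exits non-zero if the certificate expires within the next 86400 seconds (24 hours); since every check passes silently, the existing control-plane certificates are reused. A standalone sketch of one such check, reusing a path from this log:
	  sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	    && echo "valid for at least 24h" || echo "expiring within 24h (or unreadable)"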
	I0924 01:04:01.669998   61323 kubeadm.go:392] StartCluster: {Name:embed-certs-650507 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-650507 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 01:04:01.670105   61323 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 01:04:01.670162   61323 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 01:04:01.706478   61323 cri.go:89] found id: ""
	I0924 01:04:01.706555   61323 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 01:04:01.717106   61323 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 01:04:01.717127   61323 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 01:04:01.717188   61323 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 01:04:01.729966   61323 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 01:04:01.730947   61323 kubeconfig.go:125] found "embed-certs-650507" server: "https://192.168.39.104:8443"
	I0924 01:04:01.732933   61323 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 01:04:01.745538   61323 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.104
	I0924 01:04:01.745581   61323 kubeadm.go:1160] stopping kube-system containers ...
	I0924 01:04:01.745594   61323 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0924 01:04:01.745649   61323 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 01:04:01.783313   61323 cri.go:89] found id: ""
	I0924 01:04:01.783423   61323 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 01:04:01.801432   61323 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 01:04:01.811282   61323 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 01:04:01.811308   61323 kubeadm.go:157] found existing configuration files:
	
	I0924 01:04:01.811371   61323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 01:04:01.820717   61323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 01:04:01.820780   61323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 01:04:01.830289   61323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 01:04:01.839383   61323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 01:04:01.839449   61323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 01:04:01.848920   61323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 01:04:01.857986   61323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 01:04:01.858045   61323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 01:04:01.867465   61323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 01:04:01.876598   61323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 01:04:01.876680   61323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 01:04:01.886122   61323 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 01:04:01.896245   61323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:02.004839   61323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:03.077983   61323 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.073104284s)
	I0924 01:04:03.078020   61323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:03.295254   61323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:03.369968   61323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:03.458283   61323 api_server.go:52] waiting for apiserver process to appear ...
	I0924 01:04:03.458383   61323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:03.958648   61323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:04.459039   61323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:04.958614   61323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:04.994450   61323 api_server.go:72] duration metric: took 1.536167442s to wait for apiserver process to appear ...
	I0924 01:04:04.994485   61323 api_server.go:88] waiting for apiserver healthz status ...
	I0924 01:04:04.994530   61323 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0924 01:04:04.995139   61323 api_server.go:269] stopped: https://192.168.39.104:8443/healthz: Get "https://192.168.39.104:8443/healthz": dial tcp 192.168.39.104:8443: connect: connection refused
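	The first probe is refused because the apiserver static pod is still starting; the follow-up probes below return 403 for the anonymous user (the rbac/bootstrap-roles hook that installs the default role allowing unauthenticated /healthz has not finished yet) and then 500 until the remaining post-start hooks complete. An equivalent out-of-band probe against the endpoint from this log (-k because the server presents minikube's self-signed CA):
	  curl -sk -o /dev/null -w '%{http_code}\n' https://192.168.39.104:8443/healthz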
	I0924 01:04:05.513732   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:05.514247   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:04:05.514275   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:04:05.514201   62713 retry.go:31] will retry after 2.814717647s: waiting for machine to come up
	I0924 01:04:08.331550   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:08.331964   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:04:08.331983   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:04:08.331932   62713 retry.go:31] will retry after 2.942261445s: waiting for machine to come up
	I0924 01:04:05.495090   61323 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0924 01:04:07.946057   61323 api_server.go:279] https://192.168.39.104:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 01:04:07.946116   61323 api_server.go:103] status: https://192.168.39.104:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 01:04:07.946135   61323 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0924 01:04:08.018665   61323 api_server.go:279] https://192.168.39.104:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:08.018711   61323 api_server.go:103] status: https://192.168.39.104:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:08.018729   61323 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0924 01:04:08.027105   61323 api_server.go:279] https://192.168.39.104:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:08.027144   61323 api_server.go:103] status: https://192.168.39.104:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:08.494630   61323 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0924 01:04:08.500471   61323 api_server.go:279] https://192.168.39.104:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:08.500494   61323 api_server.go:103] status: https://192.168.39.104:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:08.995055   61323 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0924 01:04:09.017236   61323 api_server.go:279] https://192.168.39.104:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:09.017272   61323 api_server.go:103] status: https://192.168.39.104:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:09.494769   61323 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0924 01:04:09.500285   61323 api_server.go:279] https://192.168.39.104:8443/healthz returned 200:
	ok
	I0924 01:04:09.507440   61323 api_server.go:141] control plane version: v1.31.1
	I0924 01:04:09.507470   61323 api_server.go:131] duration metric: took 4.512953508s to wait for apiserver health ...
	I0924 01:04:09.507478   61323 cni.go:84] Creating CNI manager for ""
	I0924 01:04:09.507485   61323 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:04:09.509661   61323 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 01:04:09.511104   61323 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 01:04:09.529080   61323 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0924 01:04:09.567695   61323 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 01:04:09.579425   61323 system_pods.go:59] 8 kube-system pods found
	I0924 01:04:09.579470   61323 system_pods.go:61] "coredns-7c65d6cfc9-xgs6g" [b975196f-e9e6-4e30-a49b-8d3031f73a21] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0924 01:04:09.579489   61323 system_pods.go:61] "etcd-embed-certs-650507" [c24d7e21-08a8-42bd-9def-1808d8a58e07] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0924 01:04:09.579501   61323 system_pods.go:61] "kube-apiserver-embed-certs-650507" [f1de6ed5-a87f-4d1d-8feb-d0f80851b5b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0924 01:04:09.579509   61323 system_pods.go:61] "kube-controller-manager-embed-certs-650507" [d0d454bf-b9d3-4dcb-957c-f1329e4e9e98] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0924 01:04:09.579516   61323 system_pods.go:61] "kube-proxy-qd4lg" [f06c009f-3c62-4e54-82fd-ca468fb05bbc] Running
	I0924 01:04:09.579523   61323 system_pods.go:61] "kube-scheduler-embed-certs-650507" [e4931370-821e-4289-9b2b-9b46d9f8394e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0924 01:04:09.579532   61323 system_pods.go:61] "metrics-server-6867b74b74-pc28v" [688d7bbe-9fee-450f-aecf-bbb3413a3633] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:04:09.579536   61323 system_pods.go:61] "storage-provisioner" [9e354a3c-e4f1-46e1-b5fb-de8243f41c29] Running
	I0924 01:04:09.579542   61323 system_pods.go:74] duration metric: took 11.824796ms to wait for pod list to return data ...
	I0924 01:04:09.579550   61323 node_conditions.go:102] verifying NodePressure condition ...
	I0924 01:04:09.584175   61323 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 01:04:09.584203   61323 node_conditions.go:123] node cpu capacity is 2
	I0924 01:04:09.584214   61323 node_conditions.go:105] duration metric: took 4.659859ms to run NodePressure ...
	I0924 01:04:09.584230   61323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:09.847130   61323 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0924 01:04:09.851985   61323 kubeadm.go:739] kubelet initialised
	I0924 01:04:09.852008   61323 kubeadm.go:740] duration metric: took 4.853319ms waiting for restarted kubelet to initialise ...
	I0924 01:04:09.852015   61323 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:04:09.857149   61323 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-xgs6g" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:11.275680   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:11.276135   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:04:11.276166   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:04:11.276102   62713 retry.go:31] will retry after 3.599939746s: waiting for machine to come up
	I0924 01:04:11.865712   61323 pod_ready.go:103] pod "coredns-7c65d6cfc9-xgs6g" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:13.864779   61323 pod_ready.go:93] pod "coredns-7c65d6cfc9-xgs6g" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:13.864801   61323 pod_ready.go:82] duration metric: took 4.007625744s for pod "coredns-7c65d6cfc9-xgs6g" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:13.864809   61323 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:16.233175   61989 start.go:364] duration metric: took 3m35.131018203s to acquireMachinesLock for "old-k8s-version-171598"
	I0924 01:04:16.233254   61989 start.go:96] Skipping create...Using existing machine configuration
	I0924 01:04:16.233262   61989 fix.go:54] fixHost starting: 
	I0924 01:04:16.233733   61989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:04:16.233787   61989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:04:16.255690   61989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42181
	I0924 01:04:16.256135   61989 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:04:16.256729   61989 main.go:141] libmachine: Using API Version  1
	I0924 01:04:16.256763   61989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:04:16.257122   61989 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:04:16.257365   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:04:16.257560   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetState
	I0924 01:04:16.259055   61989 fix.go:112] recreateIfNeeded on old-k8s-version-171598: state=Stopped err=<nil>
	I0924 01:04:16.259091   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	W0924 01:04:16.259266   61989 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 01:04:16.261327   61989 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-171598" ...
	I0924 01:04:14.879977   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:14.880533   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has current primary IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:14.880563   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Found IP for machine: 192.168.61.186
	I0924 01:04:14.880596   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Reserving static IP address...
	I0924 01:04:14.881148   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-465341", mac: "52:54:00:e4:1f:79", ip: "192.168.61.186"} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:14.881171   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | skip adding static IP to network mk-default-k8s-diff-port-465341 - found existing host DHCP lease matching {name: "default-k8s-diff-port-465341", mac: "52:54:00:e4:1f:79", ip: "192.168.61.186"}
	I0924 01:04:14.881188   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Reserved static IP address: 192.168.61.186
	I0924 01:04:14.881216   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for SSH to be available...
	I0924 01:04:14.881229   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | Getting to WaitForSSH function...
	I0924 01:04:14.883679   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:14.884060   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:14.884083   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:14.884214   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | Using SSH client type: external
	I0924 01:04:14.884248   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | Using SSH private key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa (-rw-------)
	I0924 01:04:14.884276   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.186 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 01:04:14.884287   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | About to run SSH command:
	I0924 01:04:14.884298   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | exit 0
	I0924 01:04:15.012764   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | SSH cmd err, output: <nil>: 
	I0924 01:04:15.013163   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetConfigRaw
	I0924 01:04:15.013983   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetIP
	I0924 01:04:15.016664   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.017173   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:15.017207   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.017440   61699 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/config.json ...
	I0924 01:04:15.017668   61699 machine.go:93] provisionDockerMachine start ...
	I0924 01:04:15.017687   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:04:15.017915   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:15.020388   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.020816   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:15.020839   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.021074   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:15.021249   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.021513   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.021681   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:15.021850   61699 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:15.022031   61699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.186 22 <nil> <nil>}
	I0924 01:04:15.022041   61699 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 01:04:15.132672   61699 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 01:04:15.132706   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetMachineName
	I0924 01:04:15.132994   61699 buildroot.go:166] provisioning hostname "default-k8s-diff-port-465341"
	I0924 01:04:15.133025   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetMachineName
	I0924 01:04:15.133268   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:15.135929   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.136371   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:15.136399   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.136578   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:15.136850   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.137008   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.137193   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:15.137407   61699 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:15.137589   61699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.186 22 <nil> <nil>}
	I0924 01:04:15.137610   61699 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-465341 && echo "default-k8s-diff-port-465341" | sudo tee /etc/hostname
	I0924 01:04:15.262142   61699 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-465341
	
	I0924 01:04:15.262174   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:15.265359   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.265736   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:15.265761   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.265962   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:15.266176   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.266335   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.266510   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:15.266705   61699 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:15.266903   61699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.186 22 <nil> <nil>}
	I0924 01:04:15.266926   61699 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-465341' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-465341/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-465341' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 01:04:15.385085   61699 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 01:04:15.385122   61699 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19696-7623/.minikube CaCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19696-7623/.minikube}
	I0924 01:04:15.385158   61699 buildroot.go:174] setting up certificates
	I0924 01:04:15.385174   61699 provision.go:84] configureAuth start
	I0924 01:04:15.385186   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetMachineName
	I0924 01:04:15.385556   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetIP
	I0924 01:04:15.388350   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.388798   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:15.388828   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.388985   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:15.391478   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.391793   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:15.391823   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.391952   61699 provision.go:143] copyHostCerts
	I0924 01:04:15.392016   61699 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem, removing ...
	I0924 01:04:15.392045   61699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0924 01:04:15.392115   61699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem (1679 bytes)
	I0924 01:04:15.392259   61699 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem, removing ...
	I0924 01:04:15.392272   61699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0924 01:04:15.392306   61699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem (1082 bytes)
	I0924 01:04:15.392406   61699 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem, removing ...
	I0924 01:04:15.392415   61699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0924 01:04:15.392440   61699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem (1123 bytes)
	I0924 01:04:15.392503   61699 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-465341 san=[127.0.0.1 192.168.61.186 default-k8s-diff-port-465341 localhost minikube]
	I0924 01:04:15.572588   61699 provision.go:177] copyRemoteCerts
	I0924 01:04:15.572682   61699 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 01:04:15.572718   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:15.575884   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.576356   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:15.576401   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.576627   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:15.576868   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.577099   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:15.577248   61699 sshutil.go:53] new ssh client: &{IP:192.168.61.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa Username:docker}
	I0924 01:04:15.662231   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0924 01:04:15.686800   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0924 01:04:15.709860   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 01:04:15.738063   61699 provision.go:87] duration metric: took 352.876914ms to configureAuth
	I0924 01:04:15.738105   61699 buildroot.go:189] setting minikube options for container-runtime
	I0924 01:04:15.738302   61699 config.go:182] Loaded profile config "default-k8s-diff-port-465341": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 01:04:15.738420   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:15.741231   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.741644   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:15.741693   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.741835   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:15.742036   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.742218   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.742359   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:15.742526   61699 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:15.742727   61699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.186 22 <nil> <nil>}
	I0924 01:04:15.742754   61699 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 01:04:15.986096   61699 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 01:04:15.986128   61699 machine.go:96] duration metric: took 968.446778ms to provisionDockerMachine
	I0924 01:04:15.986143   61699 start.go:293] postStartSetup for "default-k8s-diff-port-465341" (driver="kvm2")
	I0924 01:04:15.986156   61699 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 01:04:15.986183   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:04:15.986639   61699 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 01:04:15.986674   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:15.989692   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.990094   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:15.990124   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.990407   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:15.990643   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.990826   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:15.990958   61699 sshutil.go:53] new ssh client: &{IP:192.168.61.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa Username:docker}
	I0924 01:04:16.079174   61699 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 01:04:16.083139   61699 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 01:04:16.083168   61699 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/addons for local assets ...
	I0924 01:04:16.083251   61699 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/files for local assets ...
	I0924 01:04:16.083363   61699 filesync.go:149] local asset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> 147932.pem in /etc/ssl/certs
	I0924 01:04:16.083486   61699 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 01:04:16.094571   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /etc/ssl/certs/147932.pem (1708 bytes)
	I0924 01:04:16.117327   61699 start.go:296] duration metric: took 131.16913ms for postStartSetup
	I0924 01:04:16.117364   61699 fix.go:56] duration metric: took 20.200222398s for fixHost
	I0924 01:04:16.117384   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:16.120507   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:16.120857   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:16.120899   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:16.121059   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:16.121325   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:16.121511   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:16.121687   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:16.121901   61699 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:16.122100   61699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.186 22 <nil> <nil>}
	I0924 01:04:16.122113   61699 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 01:04:16.232986   61699 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727139856.205476339
	
	I0924 01:04:16.233013   61699 fix.go:216] guest clock: 1727139856.205476339
	I0924 01:04:16.233024   61699 fix.go:229] Guest: 2024-09-24 01:04:16.205476339 +0000 UTC Remote: 2024-09-24 01:04:16.117368802 +0000 UTC m=+247.038042336 (delta=88.107537ms)
	I0924 01:04:16.233086   61699 fix.go:200] guest clock delta is within tolerance: 88.107537ms
	I0924 01:04:16.233094   61699 start.go:83] releasing machines lock for "default-k8s-diff-port-465341", held for 20.315992151s
	I0924 01:04:16.233133   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:04:16.233491   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetIP
	I0924 01:04:16.236719   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:16.237104   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:16.237134   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:16.237290   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:04:16.237850   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:04:16.238019   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:04:16.238116   61699 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 01:04:16.238167   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:16.238227   61699 ssh_runner.go:195] Run: cat /version.json
	I0924 01:04:16.238260   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:16.241123   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:16.241448   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:16.241598   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:16.241627   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:16.241732   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:16.241757   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:16.241916   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:16.241982   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:16.242152   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:16.242225   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:16.242351   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:16.242479   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:16.242543   61699 sshutil.go:53] new ssh client: &{IP:192.168.61.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa Username:docker}
	I0924 01:04:16.242880   61699 sshutil.go:53] new ssh client: &{IP:192.168.61.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa Username:docker}
	I0924 01:04:16.368841   61699 ssh_runner.go:195] Run: systemctl --version
	I0924 01:04:16.374990   61699 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 01:04:16.521604   61699 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 01:04:16.527198   61699 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 01:04:16.527290   61699 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 01:04:16.543251   61699 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 01:04:16.543278   61699 start.go:495] detecting cgroup driver to use...
	I0924 01:04:16.543357   61699 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 01:04:16.561775   61699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 01:04:16.576028   61699 docker.go:217] disabling cri-docker service (if available) ...
	I0924 01:04:16.576097   61699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 01:04:16.591757   61699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 01:04:16.607927   61699 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 01:04:16.753944   61699 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 01:04:16.917338   61699 docker.go:233] disabling docker service ...
	I0924 01:04:16.917401   61699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 01:04:16.935104   61699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 01:04:16.949717   61699 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 01:04:17.088275   61699 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 01:04:17.222093   61699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 01:04:17.236370   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 01:04:17.256277   61699 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 01:04:17.256360   61699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:17.266516   61699 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 01:04:17.266600   61699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:17.276647   61699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:17.288283   61699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:17.299232   61699 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 01:04:17.311336   61699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:17.329416   61699 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:17.351465   61699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:17.362248   61699 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 01:04:17.372102   61699 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 01:04:17.372154   61699 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 01:04:17.392055   61699 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 01:04:17.413641   61699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:04:17.541224   61699 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 01:04:17.655205   61699 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 01:04:17.655281   61699 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 01:04:17.660096   61699 start.go:563] Will wait 60s for crictl version
	I0924 01:04:17.660163   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:04:17.663880   61699 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 01:04:17.706878   61699 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 01:04:17.706959   61699 ssh_runner.go:195] Run: crio --version
	I0924 01:04:17.735377   61699 ssh_runner.go:195] Run: crio --version
	I0924 01:04:17.766744   61699 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 01:04:17.768253   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetIP
	I0924 01:04:17.771534   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:17.771952   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:17.771983   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:17.772230   61699 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0924 01:04:17.776486   61699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:04:17.792599   61699 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-465341 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-465341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.186 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 01:04:17.792744   61699 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 01:04:17.792813   61699 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 01:04:17.831837   61699 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0924 01:04:17.831929   61699 ssh_runner.go:195] Run: which lz4
	I0924 01:04:17.836193   61699 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 01:04:17.840562   61699 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 01:04:17.840596   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0924 01:04:15.871512   61323 pod_ready.go:93] pod "etcd-embed-certs-650507" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:15.871540   61323 pod_ready.go:82] duration metric: took 2.006723245s for pod "etcd-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:15.871552   61323 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:17.879872   61323 pod_ready.go:93] pod "kube-apiserver-embed-certs-650507" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:17.879899   61323 pod_ready.go:82] duration metric: took 2.008337801s for pod "kube-apiserver-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:17.879918   61323 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:19.888007   61323 pod_ready.go:93] pod "kube-controller-manager-embed-certs-650507" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:19.888041   61323 pod_ready.go:82] duration metric: took 2.008114424s for pod "kube-controller-manager-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:19.888056   61323 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-qd4lg" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:19.894805   61323 pod_ready.go:93] pod "kube-proxy-qd4lg" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:19.894844   61323 pod_ready.go:82] duration metric: took 6.779022ms for pod "kube-proxy-qd4lg" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:19.894862   61323 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:19.900353   61323 pod_ready.go:93] pod "kube-scheduler-embed-certs-650507" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:19.900387   61323 pod_ready.go:82] duration metric: took 5.513733ms for pod "kube-scheduler-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:19.900401   61323 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace to be "Ready" ...
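For context on the pod_ready.go lines above: the helper repeatedly fetches each control-plane pod and inspects its Ready condition until it reports True or the 4m0s budget runs out. Below is a minimal, illustrative sketch of an equivalent readiness wait written with client-go; it is not minikube's actual implementation, and the pod name and the default kubeconfig path are taken from the log only as an example.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Load ~/.kube/config; the relevant context corresponds to the test profile (e.g. embed-certs-650507).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute) // same budget as the log's "waiting up to 4m0s"
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-embed-certs-650507", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}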
	I0924 01:04:16.262929   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .Start
	I0924 01:04:16.263123   61989 main.go:141] libmachine: (old-k8s-version-171598) Ensuring networks are active...
	I0924 01:04:16.264062   61989 main.go:141] libmachine: (old-k8s-version-171598) Ensuring network default is active
	I0924 01:04:16.264543   61989 main.go:141] libmachine: (old-k8s-version-171598) Ensuring network mk-old-k8s-version-171598 is active
	I0924 01:04:16.264954   61989 main.go:141] libmachine: (old-k8s-version-171598) Getting domain xml...
	I0924 01:04:16.265899   61989 main.go:141] libmachine: (old-k8s-version-171598) Creating domain...
	I0924 01:04:17.566157   61989 main.go:141] libmachine: (old-k8s-version-171598) Waiting to get IP...
	I0924 01:04:17.567223   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:17.567644   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:17.567724   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:17.567625   62886 retry.go:31] will retry after 301.652575ms: waiting for machine to come up
	I0924 01:04:17.871163   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:17.871700   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:17.871729   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:17.871645   62886 retry.go:31] will retry after 337.632324ms: waiting for machine to come up
	I0924 01:04:18.211081   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:18.211954   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:18.212013   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:18.211892   62886 retry.go:31] will retry after 431.70455ms: waiting for machine to come up
	I0924 01:04:18.645408   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:18.646017   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:18.646044   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:18.645958   62886 retry.go:31] will retry after 582.966569ms: waiting for machine to come up
	I0924 01:04:19.230457   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:19.230954   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:19.230980   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:19.230897   62886 retry.go:31] will retry after 720.62326ms: waiting for machine to come up
	I0924 01:04:19.953023   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:19.953570   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:19.953603   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:19.953512   62886 retry.go:31] will retry after 688.597177ms: waiting for machine to come up
	I0924 01:04:20.644150   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:20.644636   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:20.644672   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:20.644578   62886 retry.go:31] will retry after 1.084671138s: waiting for machine to come up
	I0924 01:04:19.165501   61699 crio.go:462] duration metric: took 1.329329949s to copy over tarball
	I0924 01:04:19.165575   61699 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0924 01:04:21.323478   61699 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.157877766s)
	I0924 01:04:21.323509   61699 crio.go:469] duration metric: took 2.157979404s to extract the tarball
	I0924 01:04:21.323516   61699 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0924 01:04:21.360397   61699 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 01:04:21.401282   61699 crio.go:514] all images are preloaded for cri-o runtime.
	I0924 01:04:21.401309   61699 cache_images.go:84] Images are preloaded, skipping loading
	I0924 01:04:21.401319   61699 kubeadm.go:934] updating node { 192.168.61.186 8444 v1.31.1 crio true true} ...
	I0924 01:04:21.401441   61699 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-465341 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.186
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-465341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 01:04:21.401524   61699 ssh_runner.go:195] Run: crio config
	I0924 01:04:21.447706   61699 cni.go:84] Creating CNI manager for ""
	I0924 01:04:21.447730   61699 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:04:21.447741   61699 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 01:04:21.447766   61699 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.186 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-465341 NodeName:default-k8s-diff-port-465341 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.186"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.186 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 01:04:21.447939   61699 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.186
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-465341"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.186
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.186"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 01:04:21.448022   61699 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 01:04:21.457882   61699 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 01:04:21.457967   61699 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 01:04:21.467329   61699 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0924 01:04:21.483464   61699 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 01:04:21.500880   61699 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0924 01:04:21.517179   61699 ssh_runner.go:195] Run: grep 192.168.61.186	control-plane.minikube.internal$ /etc/hosts
	I0924 01:04:21.521032   61699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.186	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:04:21.532339   61699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:04:21.655583   61699 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 01:04:21.671964   61699 certs.go:68] Setting up /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341 for IP: 192.168.61.186
	I0924 01:04:21.672019   61699 certs.go:194] generating shared ca certs ...
	I0924 01:04:21.672044   61699 certs.go:226] acquiring lock for ca certs: {Name:mk91de42c259e96c17f08dd58c70c00821bd0735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:04:21.672273   61699 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key
	I0924 01:04:21.672390   61699 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key
	I0924 01:04:21.672409   61699 certs.go:256] generating profile certs ...
	I0924 01:04:21.672536   61699 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/client.key
	I0924 01:04:21.672629   61699 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/apiserver.key.b6f5ff18
	I0924 01:04:21.672696   61699 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/proxy-client.key
	I0924 01:04:21.672940   61699 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem (1338 bytes)
	W0924 01:04:21.672987   61699 certs.go:480] ignoring /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793_empty.pem, impossibly tiny 0 bytes
	I0924 01:04:21.672999   61699 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem (1675 bytes)
	I0924 01:04:21.673029   61699 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem (1082 bytes)
	I0924 01:04:21.673060   61699 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem (1123 bytes)
	I0924 01:04:21.673091   61699 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem (1679 bytes)
	I0924 01:04:21.673133   61699 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem (1708 bytes)
	I0924 01:04:21.673884   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 01:04:21.706165   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 01:04:21.735352   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 01:04:21.763358   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0924 01:04:21.786284   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0924 01:04:21.814844   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 01:04:21.839773   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 01:04:21.866549   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0924 01:04:21.889901   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 01:04:21.914875   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem --> /usr/share/ca-certificates/14793.pem (1338 bytes)
	I0924 01:04:21.939116   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /usr/share/ca-certificates/147932.pem (1708 bytes)
	I0924 01:04:21.963264   61699 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 01:04:21.980912   61699 ssh_runner.go:195] Run: openssl version
	I0924 01:04:21.986725   61699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 01:04:21.998128   61699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:22.002832   61699 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:22.002903   61699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:22.008847   61699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 01:04:22.019274   61699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14793.pem && ln -fs /usr/share/ca-certificates/14793.pem /etc/ssl/certs/14793.pem"
	I0924 01:04:22.030110   61699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14793.pem
	I0924 01:04:22.035920   61699 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 23:55 /usr/share/ca-certificates/14793.pem
	I0924 01:04:22.035996   61699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14793.pem
	I0924 01:04:22.043505   61699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14793.pem /etc/ssl/certs/51391683.0"
	I0924 01:04:22.057224   61699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147932.pem && ln -fs /usr/share/ca-certificates/147932.pem /etc/ssl/certs/147932.pem"
	I0924 01:04:22.067596   61699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147932.pem
	I0924 01:04:22.071957   61699 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 23:55 /usr/share/ca-certificates/147932.pem
	I0924 01:04:22.072029   61699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147932.pem
	I0924 01:04:22.077495   61699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147932.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 01:04:22.087627   61699 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 01:04:22.092049   61699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 01:04:22.097908   61699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 01:04:22.103716   61699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 01:04:22.109871   61699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 01:04:22.116088   61699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 01:04:22.121760   61699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
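The openssl probes above use -checkend 86400 to verify that each control-plane certificate stays valid for at least another 24 hours. For readers who prefer Go, here is a rough standard-library equivalent of one such probe; it is an illustrative sketch, not what certs.go actually runs, and it only reuses the certificate path shown in the log.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// certValidFor reports whether the PEM certificate at path is still valid d from now,
// mirroring `openssl x509 -noout -checkend <seconds>`.
func certValidFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Valid if the certificate outlives now+d.
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := certValidFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}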
	I0924 01:04:22.127473   61699 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-465341 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-465341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.186 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 01:04:22.127563   61699 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 01:04:22.127613   61699 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 01:04:22.167951   61699 cri.go:89] found id: ""
	I0924 01:04:22.168054   61699 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 01:04:22.177878   61699 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 01:04:22.177898   61699 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 01:04:22.177949   61699 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 01:04:22.187116   61699 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 01:04:22.188577   61699 kubeconfig.go:125] found "default-k8s-diff-port-465341" server: "https://192.168.61.186:8444"
	I0924 01:04:22.191744   61699 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 01:04:22.200936   61699 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.186
	I0924 01:04:22.200967   61699 kubeadm.go:1160] stopping kube-system containers ...
	I0924 01:04:22.200979   61699 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0924 01:04:22.201039   61699 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 01:04:22.247804   61699 cri.go:89] found id: ""
	I0924 01:04:22.247888   61699 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 01:04:22.263853   61699 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 01:04:22.273254   61699 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 01:04:22.273271   61699 kubeadm.go:157] found existing configuration files:
	
	I0924 01:04:22.273327   61699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0924 01:04:22.281724   61699 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 01:04:22.281790   61699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 01:04:22.290823   61699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0924 01:04:22.299422   61699 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 01:04:22.299482   61699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 01:04:22.308961   61699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0924 01:04:22.317922   61699 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 01:04:22.318010   61699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 01:04:22.326980   61699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0924 01:04:22.335995   61699 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 01:04:22.336084   61699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 01:04:22.345002   61699 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 01:04:22.354302   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:22.462157   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:23.380163   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:23.610795   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:23.679134   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:23.747119   61699 api_server.go:52] waiting for apiserver process to appear ...
	I0924 01:04:23.747191   61699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:21.909834   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:24.104163   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:21.730823   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:21.731385   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:21.731411   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:21.731351   62886 retry.go:31] will retry after 1.051424847s: waiting for machine to come up
	I0924 01:04:22.784644   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:22.785194   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:22.785223   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:22.785138   62886 retry.go:31] will retry after 1.750498954s: waiting for machine to come up
	I0924 01:04:24.537680   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:24.538085   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:24.538109   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:24.538039   62886 retry.go:31] will retry after 2.015183238s: waiting for machine to come up
	I0924 01:04:24.247859   61699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:24.748076   61699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:25.248220   61699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:25.747481   61699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:25.774137   61699 api_server.go:72] duration metric: took 2.027016323s to wait for apiserver process to appear ...
	I0924 01:04:25.774167   61699 api_server.go:88] waiting for apiserver healthz status ...
	I0924 01:04:25.774194   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:25.774901   61699 api_server.go:269] stopped: https://192.168.61.186:8444/healthz: Get "https://192.168.61.186:8444/healthz": dial tcp 192.168.61.186:8444: connect: connection refused
	I0924 01:04:26.275226   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:28.290581   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 01:04:28.290621   61699 api_server.go:103] status: https://192.168.61.186:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 01:04:28.290637   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:28.321353   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 01:04:28.321386   61699 api_server.go:103] status: https://192.168.61.186:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 01:04:28.775068   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:28.779873   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:28.779896   61699 api_server.go:103] status: https://192.168.61.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
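The api_server.go entries above poll https://192.168.61.186:8444/healthz roughly every 500ms and treat the transient 403 and 500 responses as "not ready yet" until the endpoint returns ok. A minimal sketch of such a retry loop is shown below; it is illustrative only (minikube itself authenticates with the cluster's client certificates, whereas this sketch simply skips TLS verification).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout expires,
// printing the non-200 responses along the way (as the log above does).
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reports healthy
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // matches the retry cadence seen in the log
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.61.186:8444/healthz", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}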
	I0924 01:04:26.408349   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:28.409816   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:26.555221   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:26.555674   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:26.555695   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:26.555634   62886 retry.go:31] will retry after 2.568414115s: waiting for machine to come up
	I0924 01:04:29.127625   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:29.128130   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:29.128149   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:29.128108   62886 retry.go:31] will retry after 2.207252231s: waiting for machine to come up
	I0924 01:04:29.275326   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:29.284304   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:29.284360   61699 api_server.go:103] status: https://192.168.61.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:29.774975   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:29.779470   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:29.779503   61699 api_server.go:103] status: https://192.168.61.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:30.275137   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:30.279256   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:30.279287   61699 api_server.go:103] status: https://192.168.61.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:30.774874   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:30.779081   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:30.779110   61699 api_server.go:103] status: https://192.168.61.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:31.275163   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:31.279417   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:31.279446   61699 api_server.go:103] status: https://192.168.61.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:31.775022   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:31.780092   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 200:
	ok
	I0924 01:04:31.787643   61699 api_server.go:141] control plane version: v1.31.1
	I0924 01:04:31.787672   61699 api_server.go:131] duration metric: took 6.013498176s to wait for apiserver health ...
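	(Editor's illustrative sketch, not minikube's actual api_server.go: the entries above show the health wait hitting https://192.168.61.186:8444/healthz roughly every 500ms, tolerating 500 responses while the apiservice-discovery-controller poststarthook settles, and stopping once a 200 "ok" comes back. A minimal Go version of that polling pattern, assuming a self-signed apiserver cert and a hypothetical helper name waitForHealthz, could look like this.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			// the apiserver serves a self-signed cert, so skip verification here
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver reports "ok"
				}
				// e.g. 500 while poststarthooks are still completing
				fmt.Printf("status %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
		}
		return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.61.186:8444/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}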
	I0924 01:04:31.787680   61699 cni.go:84] Creating CNI manager for ""
	I0924 01:04:31.787686   61699 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:04:31.789733   61699 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 01:04:31.791140   61699 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 01:04:31.801441   61699 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0924 01:04:31.819890   61699 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 01:04:31.828128   61699 system_pods.go:59] 8 kube-system pods found
	I0924 01:04:31.828160   61699 system_pods.go:61] "coredns-7c65d6cfc9-xxdh2" [297fe292-94bf-468d-9e34-089c4a87429b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0924 01:04:31.828168   61699 system_pods.go:61] "etcd-default-k8s-diff-port-465341" [3bd68a1c-e928-40f0-927f-3cde2198cace] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0924 01:04:31.828177   61699 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-465341" [0a195b76-82ba-4d99-b5a3-ba918ab0b83d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0924 01:04:31.828186   61699 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-465341" [9d445611-60f3-4113-bc92-ea8df37ca2f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0924 01:04:31.828191   61699 system_pods.go:61] "kube-proxy-nf8mp" [cdef3aea-b1a8-438b-994f-c3212def9aea] Running
	I0924 01:04:31.828196   61699 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-465341" [4ff703b1-44cd-421a-891c-9f1e5d799026] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0924 01:04:31.828200   61699 system_pods.go:61] "metrics-server-6867b74b74-jtx6r" [d83599a7-f77d-4fbb-b76f-67d33c60b4a6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:04:31.828203   61699 system_pods.go:61] "storage-provisioner" [b09ad6ef-7517-4de2-a70c-83876efd804e] Running
	I0924 01:04:31.828209   61699 system_pods.go:74] duration metric: took 8.300337ms to wait for pod list to return data ...
	I0924 01:04:31.828215   61699 node_conditions.go:102] verifying NodePressure condition ...
	I0924 01:04:31.831528   61699 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 01:04:31.831550   61699 node_conditions.go:123] node cpu capacity is 2
	I0924 01:04:31.831561   61699 node_conditions.go:105] duration metric: took 3.341719ms to run NodePressure ...
	I0924 01:04:31.831576   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:32.101590   61699 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0924 01:04:32.105656   61699 kubeadm.go:739] kubelet initialised
	I0924 01:04:32.105679   61699 kubeadm.go:740] duration metric: took 4.062709ms waiting for restarted kubelet to initialise ...
	I0924 01:04:32.105691   61699 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:04:32.110237   61699 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-xxdh2" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:32.115057   61699 pod_ready.go:98] node "default-k8s-diff-port-465341" hosting pod "coredns-7c65d6cfc9-xxdh2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.115090   61699 pod_ready.go:82] duration metric: took 4.825694ms for pod "coredns-7c65d6cfc9-xxdh2" in "kube-system" namespace to be "Ready" ...
	E0924 01:04:32.115102   61699 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-465341" hosting pod "coredns-7c65d6cfc9-xxdh2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.115110   61699 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:32.119506   61699 pod_ready.go:98] node "default-k8s-diff-port-465341" hosting pod "etcd-default-k8s-diff-port-465341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.119534   61699 pod_ready.go:82] duration metric: took 4.415876ms for pod "etcd-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	E0924 01:04:32.119546   61699 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-465341" hosting pod "etcd-default-k8s-diff-port-465341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.119558   61699 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:32.124199   61699 pod_ready.go:98] node "default-k8s-diff-port-465341" hosting pod "kube-apiserver-default-k8s-diff-port-465341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.124248   61699 pod_ready.go:82] duration metric: took 4.660764ms for pod "kube-apiserver-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	E0924 01:04:32.124266   61699 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-465341" hosting pod "kube-apiserver-default-k8s-diff-port-465341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.124285   61699 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:32.223553   61699 pod_ready.go:98] node "default-k8s-diff-port-465341" hosting pod "kube-controller-manager-default-k8s-diff-port-465341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.223596   61699 pod_ready.go:82] duration metric: took 99.284751ms for pod "kube-controller-manager-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	E0924 01:04:32.223606   61699 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-465341" hosting pod "kube-controller-manager-default-k8s-diff-port-465341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.223613   61699 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-nf8mp" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:32.622500   61699 pod_ready.go:98] node "default-k8s-diff-port-465341" hosting pod "kube-proxy-nf8mp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.622527   61699 pod_ready.go:82] duration metric: took 398.907418ms for pod "kube-proxy-nf8mp" in "kube-system" namespace to be "Ready" ...
	E0924 01:04:32.622538   61699 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-465341" hosting pod "kube-proxy-nf8mp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.622545   61699 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:33.023370   61699 pod_ready.go:98] node "default-k8s-diff-port-465341" hosting pod "kube-scheduler-default-k8s-diff-port-465341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:33.023430   61699 pod_ready.go:82] duration metric: took 400.874003ms for pod "kube-scheduler-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	E0924 01:04:33.023458   61699 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-465341" hosting pod "kube-scheduler-default-k8s-diff-port-465341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:33.023472   61699 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:33.422810   61699 pod_ready.go:98] node "default-k8s-diff-port-465341" hosting pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:33.422841   61699 pod_ready.go:82] duration metric: took 399.35051ms for pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace to be "Ready" ...
	E0924 01:04:33.422851   61699 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-465341" hosting pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:33.422859   61699 pod_ready.go:39] duration metric: took 1.317159668s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:04:33.422874   61699 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 01:04:33.434449   61699 ops.go:34] apiserver oom_adj: -16
	I0924 01:04:33.434473   61699 kubeadm.go:597] duration metric: took 11.256568213s to restartPrimaryControlPlane
	I0924 01:04:33.434481   61699 kubeadm.go:394] duration metric: took 11.307014166s to StartCluster
	I0924 01:04:33.434501   61699 settings.go:142] acquiring lock: {Name:mk2498060bfcc1414033a486e76a223de88ed325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:04:33.434571   61699 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 01:04:33.436172   61699 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/kubeconfig: {Name:mk3b29105ccc2e600682c419fcfd45ce5d22b5fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:04:33.436515   61699 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.186 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 01:04:33.436732   61699 config.go:182] Loaded profile config "default-k8s-diff-port-465341": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 01:04:33.436686   61699 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0924 01:04:33.436809   61699 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-465341"
	I0924 01:04:33.436815   61699 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-465341"
	I0924 01:04:33.436830   61699 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-465341"
	I0924 01:04:33.436832   61699 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-465341"
	I0924 01:04:33.436864   61699 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-465341"
	W0924 01:04:33.436877   61699 addons.go:243] addon metrics-server should already be in state true
	I0924 01:04:33.436908   61699 host.go:66] Checking if "default-k8s-diff-port-465341" exists ...
	W0924 01:04:33.436842   61699 addons.go:243] addon storage-provisioner should already be in state true
	I0924 01:04:33.436935   61699 host.go:66] Checking if "default-k8s-diff-port-465341" exists ...
	I0924 01:04:33.436831   61699 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-465341"
	I0924 01:04:33.437322   61699 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:04:33.437370   61699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:04:33.437377   61699 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:04:33.437412   61699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:04:33.437458   61699 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:04:33.437483   61699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:04:33.438259   61699 out.go:177] * Verifying Kubernetes components...
	I0924 01:04:33.439923   61699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:04:33.453108   61699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37623
	I0924 01:04:33.453545   61699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38225
	I0924 01:04:33.453608   61699 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:04:33.453916   61699 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:04:33.454125   61699 main.go:141] libmachine: Using API Version  1
	I0924 01:04:33.454152   61699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:04:33.454461   61699 main.go:141] libmachine: Using API Version  1
	I0924 01:04:33.454486   61699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:04:33.454494   61699 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:04:33.454806   61699 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:04:33.455065   61699 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:04:33.455111   61699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:04:33.455360   61699 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:04:33.455404   61699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:04:33.456716   61699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41127
	I0924 01:04:33.457163   61699 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:04:33.457688   61699 main.go:141] libmachine: Using API Version  1
	I0924 01:04:33.457727   61699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:04:33.458031   61699 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:04:33.458242   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetState
	I0924 01:04:33.461814   61699 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-465341"
	W0924 01:04:33.461835   61699 addons.go:243] addon default-storageclass should already be in state true
	I0924 01:04:33.461864   61699 host.go:66] Checking if "default-k8s-diff-port-465341" exists ...
	I0924 01:04:33.462230   61699 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:04:33.462273   61699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:04:33.471783   61699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44977
	I0924 01:04:33.472043   61699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33459
	I0924 01:04:33.472300   61699 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:04:33.472550   61699 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:04:33.472858   61699 main.go:141] libmachine: Using API Version  1
	I0924 01:04:33.472875   61699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:04:33.472994   61699 main.go:141] libmachine: Using API Version  1
	I0924 01:04:33.473003   61699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:04:33.473234   61699 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:04:33.473366   61699 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:04:33.473413   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetState
	I0924 01:04:33.473503   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetState
	I0924 01:04:33.475140   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:04:33.475553   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:04:33.477287   61699 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0924 01:04:33.477293   61699 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:04:33.478708   61699 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0924 01:04:33.478720   61699 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0924 01:04:33.478737   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:33.478836   61699 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 01:04:33.478863   61699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 01:04:33.478889   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:33.478971   61699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45635
	I0924 01:04:33.479636   61699 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:04:33.480029   61699 main.go:141] libmachine: Using API Version  1
	I0924 01:04:33.480041   61699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:04:33.480396   61699 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:04:33.482306   61699 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:04:33.482343   61699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:04:33.483280   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:33.483373   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:33.483732   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:33.483769   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:33.483873   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:33.483892   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:33.483958   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:33.484111   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:33.484236   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:33.484255   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:33.484413   61699 sshutil.go:53] new ssh client: &{IP:192.168.61.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa Username:docker}
	I0924 01:04:33.484472   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:33.484738   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:33.484866   61699 sshutil.go:53] new ssh client: &{IP:192.168.61.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa Username:docker}
	I0924 01:04:33.519981   61699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37109
	I0924 01:04:33.520440   61699 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:04:33.520996   61699 main.go:141] libmachine: Using API Version  1
	I0924 01:04:33.521028   61699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:04:33.521497   61699 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:04:33.521701   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetState
	I0924 01:04:33.523331   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:04:33.523576   61699 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 01:04:33.523591   61699 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 01:04:33.523625   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:33.526668   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:33.527211   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:33.527244   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:33.527471   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:33.527702   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:33.527889   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:33.528059   61699 sshutil.go:53] new ssh client: &{IP:192.168.61.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa Username:docker}
	I0924 01:04:33.645903   61699 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 01:04:33.663805   61699 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-465341" to be "Ready" ...
	I0924 01:04:33.749720   61699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 01:04:33.751631   61699 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0924 01:04:33.751649   61699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0924 01:04:33.755330   61699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 01:04:33.812231   61699 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0924 01:04:33.812257   61699 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0924 01:04:33.847216   61699 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 01:04:33.847240   61699 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0924 01:04:33.932057   61699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 01:04:34.781871   61699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.026510893s)
	I0924 01:04:34.781939   61699 main.go:141] libmachine: Making call to close driver server
	I0924 01:04:34.781950   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .Close
	I0924 01:04:34.781887   61699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.032127769s)
	I0924 01:04:34.782009   61699 main.go:141] libmachine: Making call to close driver server
	I0924 01:04:34.782023   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .Close
	I0924 01:04:34.782293   61699 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:04:34.782309   61699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:04:34.782318   61699 main.go:141] libmachine: Making call to close driver server
	I0924 01:04:34.782326   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .Close
	I0924 01:04:34.782361   61699 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:04:34.782369   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | Closing plugin on server side
	I0924 01:04:34.782375   61699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:04:34.782389   61699 main.go:141] libmachine: Making call to close driver server
	I0924 01:04:34.782404   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .Close
	I0924 01:04:34.782629   61699 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:04:34.782643   61699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:04:34.782645   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | Closing plugin on server side
	I0924 01:04:34.782673   61699 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:04:34.782683   61699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:04:34.790740   61699 main.go:141] libmachine: Making call to close driver server
	I0924 01:04:34.790757   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .Close
	I0924 01:04:34.790990   61699 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:04:34.791010   61699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:04:34.791013   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | Closing plugin on server side
	I0924 01:04:34.871488   61699 main.go:141] libmachine: Making call to close driver server
	I0924 01:04:34.871516   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .Close
	I0924 01:04:34.871809   61699 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:04:34.871826   61699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:04:34.871834   61699 main.go:141] libmachine: Making call to close driver server
	I0924 01:04:34.871841   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .Close
	I0924 01:04:34.872103   61699 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:04:34.872125   61699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:04:34.872117   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | Closing plugin on server side
	I0924 01:04:34.872136   61699 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-465341"
	I0924 01:04:34.874133   61699 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0924 01:04:30.907606   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:33.406280   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:31.337368   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:31.338025   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:31.338128   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:31.338011   62886 retry.go:31] will retry after 4.137847727s: waiting for machine to come up
	I0924 01:04:35.478410   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.478991   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has current primary IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.479016   61989 main.go:141] libmachine: (old-k8s-version-171598) Found IP for machine: 192.168.83.3
	I0924 01:04:35.479029   61989 main.go:141] libmachine: (old-k8s-version-171598) Reserving static IP address...
	I0924 01:04:35.479586   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "old-k8s-version-171598", mac: "52:54:00:20:3c:a7", ip: "192.168.83.3"} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:35.479607   61989 main.go:141] libmachine: (old-k8s-version-171598) Reserved static IP address: 192.168.83.3
	I0924 01:04:35.479626   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | skip adding static IP to network mk-old-k8s-version-171598 - found existing host DHCP lease matching {name: "old-k8s-version-171598", mac: "52:54:00:20:3c:a7", ip: "192.168.83.3"}
	I0924 01:04:35.479643   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | Getting to WaitForSSH function...
	I0924 01:04:35.479659   61989 main.go:141] libmachine: (old-k8s-version-171598) Waiting for SSH to be available...
	I0924 01:04:35.482028   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.482377   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:35.482419   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.482499   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | Using SSH client type: external
	I0924 01:04:35.482550   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | Using SSH private key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa (-rw-------)
	I0924 01:04:35.482585   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 01:04:35.482600   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | About to run SSH command:
	I0924 01:04:35.482614   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | exit 0
	I0924 01:04:35.613364   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | SSH cmd err, output: <nil>: 
	I0924 01:04:35.613847   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetConfigRaw
	I0924 01:04:35.614543   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetIP
	I0924 01:04:35.617366   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.617742   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:35.617774   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.618068   61989 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/config.json ...
	I0924 01:04:35.618260   61989 machine.go:93] provisionDockerMachine start ...
	I0924 01:04:35.618279   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:04:35.618489   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:35.621130   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.621472   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:35.621497   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.621722   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:35.621914   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:35.622091   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:35.622354   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:35.622558   61989 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:35.622749   61989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.83.3 22 <nil> <nil>}
	I0924 01:04:35.622760   61989 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 01:04:35.736637   61989 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 01:04:35.736661   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetMachineName
	I0924 01:04:35.736943   61989 buildroot.go:166] provisioning hostname "old-k8s-version-171598"
	I0924 01:04:35.736973   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetMachineName
	I0924 01:04:35.737151   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:35.739921   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.740304   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:35.740362   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.740502   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:35.740678   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:35.740851   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:35.740994   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:35.741218   61989 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:35.741409   61989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.83.3 22 <nil> <nil>}
	I0924 01:04:35.741423   61989 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-171598 && echo "old-k8s-version-171598" | sudo tee /etc/hostname
	I0924 01:04:35.866963   61989 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-171598
	
	I0924 01:04:35.866994   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:35.870342   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.870860   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:35.870893   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.871145   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:35.871406   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:35.871638   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:35.871850   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:35.872050   61989 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:35.872253   61989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.83.3 22 <nil> <nil>}
	I0924 01:04:35.872276   61989 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-171598' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-171598/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-171598' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 01:04:36.717274   61070 start.go:364] duration metric: took 55.446152288s to acquireMachinesLock for "no-preload-674057"
	I0924 01:04:36.717335   61070 start.go:96] Skipping create...Using existing machine configuration
	I0924 01:04:36.717344   61070 fix.go:54] fixHost starting: 
	I0924 01:04:36.717781   61070 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:04:36.717821   61070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:04:36.739062   61070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46693
	I0924 01:04:36.739602   61070 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:04:36.740307   61070 main.go:141] libmachine: Using API Version  1
	I0924 01:04:36.740366   61070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:04:36.740767   61070 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:04:36.741058   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:04:36.741223   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetState
	I0924 01:04:36.743313   61070 fix.go:112] recreateIfNeeded on no-preload-674057: state=Stopped err=<nil>
	I0924 01:04:36.743339   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	W0924 01:04:36.743512   61070 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 01:04:36.745694   61070 out.go:177] * Restarting existing kvm2 VM for "no-preload-674057" ...
	I0924 01:04:35.998933   61989 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 01:04:35.998962   61989 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19696-7623/.minikube CaCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19696-7623/.minikube}
	I0924 01:04:35.998983   61989 buildroot.go:174] setting up certificates
	I0924 01:04:35.998994   61989 provision.go:84] configureAuth start
	I0924 01:04:35.999005   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetMachineName
	I0924 01:04:35.999359   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetIP
	I0924 01:04:36.002499   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.003027   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.003052   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.003167   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:36.005508   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.005773   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.005796   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.005909   61989 provision.go:143] copyHostCerts
	I0924 01:04:36.005967   61989 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem, removing ...
	I0924 01:04:36.005986   61989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0924 01:04:36.006037   61989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem (1082 bytes)
	I0924 01:04:36.006129   61989 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem, removing ...
	I0924 01:04:36.006137   61989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0924 01:04:36.006156   61989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem (1123 bytes)
	I0924 01:04:36.006209   61989 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem, removing ...
	I0924 01:04:36.006216   61989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0924 01:04:36.006237   61989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem (1679 bytes)
	I0924 01:04:36.006310   61989 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-171598 san=[127.0.0.1 192.168.83.3 localhost minikube old-k8s-version-171598]
	I0924 01:04:36.084609   61989 provision.go:177] copyRemoteCerts
	I0924 01:04:36.084671   61989 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 01:04:36.084698   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:36.087740   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.088046   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.088075   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.088278   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:36.088523   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.088716   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:36.088854   61989 sshutil.go:53] new ssh client: &{IP:192.168.83.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa Username:docker}
	I0924 01:04:36.178597   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0924 01:04:36.202768   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0924 01:04:36.225933   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0924 01:04:36.250014   61989 provision.go:87] duration metric: took 251.005829ms to configureAuth
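	
	The configureAuth step above amounts to issuing a server certificate signed by the minikube CA, with SANs covering 127.0.0.1, the VM IP 192.168.83.3 and the host names localhost, minikube and old-k8s-version-171598 (see the provision.go:117 line). Below is a minimal, hedged Go sketch of that kind of issuance using only the standard library; the file names and the assumption that the CA key is an RSA key in PKCS#1 form are illustrative only, and this is not minikube's actual provisioning code.
	
	// Hypothetical sketch: issue a TLS server certificate from an existing CA.
	// Paths and the PKCS#1 RSA CA key format are assumptions for illustration.
	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func must[T any](v T, err error) T {
		if err != nil {
			panic(err)
		}
		return v
	}
	
	func main() {
		caBlock, _ := pem.Decode(must(os.ReadFile("ca.pem")))        // hypothetical path
		caKeyBlock, _ := pem.Decode(must(os.ReadFile("ca-key.pem"))) // hypothetical path
		caCert := must(x509.ParseCertificate(caBlock.Bytes))
		caKey := must(x509.ParsePKCS1PrivateKey(caKeyBlock.Bytes))
	
		serverKey := must(rsa.GenerateKey(rand.Reader, 2048))
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-171598"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs mirroring the san=[...] list in the log line above.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.83.3")},
			DNSNames:    []string{"localhost", "minikube", "old-k8s-version-171598"},
		}
		der := must(x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey))
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
		pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)})
	}
	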
	I0924 01:04:36.250046   61989 buildroot.go:189] setting minikube options for container-runtime
	I0924 01:04:36.250369   61989 config.go:182] Loaded profile config "old-k8s-version-171598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0924 01:04:36.250453   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:36.253290   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.253912   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.253943   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.254242   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:36.254474   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.254650   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.254764   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:36.254958   61989 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:36.255124   61989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.83.3 22 <nil> <nil>}
	I0924 01:04:36.255138   61989 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 01:04:36.472324   61989 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 01:04:36.472381   61989 machine.go:96] duration metric: took 854.106776ms to provisionDockerMachine
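	
	Each "new ssh client" / "Run:" pair in this log is a key-authenticated SSH session into the VM that executes a single command. A rough sketch of the same pattern with golang.org/x/crypto/ssh is shown below; the key path is taken from the sshutil.go:53 line above, the command is arbitrary, and the insecure host-key callback is an assumption appropriate only for a throwaway test VM. This is not minikube's ssh_runner implementation.
	
	// Hypothetical sketch: run one command over SSH with public-key auth.
	package main
	
	import (
		"fmt"
		"os"
	
		"golang.org/x/crypto/ssh"
	)
	
	func main() {
		keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM only; never in production
		}
		client, err := ssh.Dial("tcp", "192.168.83.3:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
	
		session, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer session.Close()
	
		// Arbitrary example command; the log runs many such one-shot commands.
		out, err := session.CombinedOutput("cat /etc/os-release")
		if err != nil {
			panic(err)
		}
		fmt.Print(string(out))
	}
	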
	I0924 01:04:36.472401   61989 start.go:293] postStartSetup for "old-k8s-version-171598" (driver="kvm2")
	I0924 01:04:36.472419   61989 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 01:04:36.472451   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:04:36.472814   61989 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 01:04:36.472849   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:36.475567   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.475941   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.475969   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.476125   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:36.476403   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.476614   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:36.476831   61989 sshutil.go:53] new ssh client: &{IP:192.168.83.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa Username:docker}
	I0924 01:04:36.562688   61989 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 01:04:36.566476   61989 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 01:04:36.566501   61989 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/addons for local assets ...
	I0924 01:04:36.566561   61989 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/files for local assets ...
	I0924 01:04:36.566635   61989 filesync.go:149] local asset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> 147932.pem in /etc/ssl/certs
	I0924 01:04:36.566724   61989 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 01:04:36.576132   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /etc/ssl/certs/147932.pem (1708 bytes)
	I0924 01:04:36.599696   61989 start.go:296] duration metric: took 127.276787ms for postStartSetup
	I0924 01:04:36.599738   61989 fix.go:56] duration metric: took 20.366477202s for fixHost
	I0924 01:04:36.599763   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:36.603462   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.603836   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.603867   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.604057   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:36.604500   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.604721   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.604878   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:36.605041   61989 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:36.605285   61989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.83.3 22 <nil> <nil>}
	I0924 01:04:36.605303   61989 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 01:04:36.717061   61989 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727139876.688490589
	
	I0924 01:04:36.717091   61989 fix.go:216] guest clock: 1727139876.688490589
	I0924 01:04:36.717102   61989 fix.go:229] Guest: 2024-09-24 01:04:36.688490589 +0000 UTC Remote: 2024-09-24 01:04:36.599742488 +0000 UTC m=+235.652611441 (delta=88.748101ms)
	I0924 01:04:36.717157   61989 fix.go:200] guest clock delta is within tolerance: 88.748101ms
	I0924 01:04:36.717165   61989 start.go:83] releasing machines lock for "old-k8s-version-171598", held for 20.483937438s
	I0924 01:04:36.717199   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:04:36.717499   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetIP
	I0924 01:04:36.720466   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.720959   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.720986   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.721189   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:04:36.721763   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:04:36.721965   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:04:36.722073   61989 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 01:04:36.722118   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:36.722187   61989 ssh_runner.go:195] Run: cat /version.json
	I0924 01:04:36.722215   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:36.725171   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.725384   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.725669   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.725694   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.725858   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:36.725970   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.726016   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.726065   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.726249   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:36.726254   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:36.726494   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.726513   61989 sshutil.go:53] new ssh client: &{IP:192.168.83.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa Username:docker}
	I0924 01:04:36.726657   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:36.727049   61989 sshutil.go:53] new ssh client: &{IP:192.168.83.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa Username:docker}
	I0924 01:04:36.845385   61989 ssh_runner.go:195] Run: systemctl --version
	I0924 01:04:36.853307   61989 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 01:04:37.001850   61989 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 01:04:37.009873   61989 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 01:04:37.009948   61989 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 01:04:37.032269   61989 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 01:04:37.032299   61989 start.go:495] detecting cgroup driver to use...
	I0924 01:04:37.032403   61989 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 01:04:37.056250   61989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 01:04:37.072827   61989 docker.go:217] disabling cri-docker service (if available) ...
	I0924 01:04:37.072903   61989 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 01:04:37.090639   61989 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 01:04:37.107525   61989 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 01:04:37.235495   61989 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 01:04:37.410971   61989 docker.go:233] disabling docker service ...
	I0924 01:04:37.411034   61989 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 01:04:37.427815   61989 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 01:04:37.444121   61989 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 01:04:37.568933   61989 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 01:04:37.700008   61989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 01:04:37.715529   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 01:04:37.736908   61989 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0924 01:04:37.736980   61989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:37.748540   61989 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 01:04:37.748590   61989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:37.759301   61989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:37.771008   61989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:37.782080   61989 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 01:04:37.793756   61989 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 01:04:37.803444   61989 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 01:04:37.803525   61989 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 01:04:37.818012   61989 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 01:04:37.829019   61989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:04:37.978885   61989 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 01:04:38.086263   61989 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 01:04:38.086353   61989 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 01:04:38.093479   61989 start.go:563] Will wait 60s for crictl version
	I0924 01:04:38.093573   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:38.097486   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 01:04:38.138781   61989 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 01:04:38.138872   61989 ssh_runner.go:195] Run: crio --version
	I0924 01:04:38.166832   61989 ssh_runner.go:195] Run: crio --version
	I0924 01:04:38.199764   61989 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
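	
	Before declaring the runtime ready, the log above restarts crio and then waits up to 60s for /var/run/crio/crio.sock and for a working crictl. The real check simply stats the socket path and shells out to crictl version; the hedged Go sketch below goes one small step further and polls until the unix socket actually accepts a connection.
	
	// Hypothetical sketch: wait for a container-runtime unix socket to accept connections.
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("unix", path, time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("socket %s not ready after %s", path, timeout)
	}
	
	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			panic(err)
		}
		fmt.Println("crio socket is ready")
	}
	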
	I0924 01:04:36.747491   61070 main.go:141] libmachine: (no-preload-674057) Calling .Start
	I0924 01:04:36.747705   61070 main.go:141] libmachine: (no-preload-674057) Ensuring networks are active...
	I0924 01:04:36.748694   61070 main.go:141] libmachine: (no-preload-674057) Ensuring network default is active
	I0924 01:04:36.749079   61070 main.go:141] libmachine: (no-preload-674057) Ensuring network mk-no-preload-674057 is active
	I0924 01:04:36.749656   61070 main.go:141] libmachine: (no-preload-674057) Getting domain xml...
	I0924 01:04:36.750535   61070 main.go:141] libmachine: (no-preload-674057) Creating domain...
	I0924 01:04:38.122450   61070 main.go:141] libmachine: (no-preload-674057) Waiting to get IP...
	I0924 01:04:38.123578   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:38.124107   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:38.124173   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:38.124079   63121 retry.go:31] will retry after 227.552582ms: waiting for machine to come up
	I0924 01:04:38.353724   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:38.354145   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:38.354169   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:38.354102   63121 retry.go:31] will retry after 322.483933ms: waiting for machine to come up
	I0924 01:04:38.678600   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:38.679091   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:38.679120   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:38.679041   63121 retry.go:31] will retry after 301.71366ms: waiting for machine to come up
	I0924 01:04:34.875511   61699 addons.go:510] duration metric: took 1.43884954s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0924 01:04:35.671396   61699 node_ready.go:53] node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:38.169131   61699 node_ready.go:53] node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:35.907681   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:38.408396   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:38.201359   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetIP
	I0924 01:04:38.204699   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:38.205122   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:38.205152   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:38.205408   61989 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0924 01:04:38.209456   61989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:04:38.222128   61989 kubeadm.go:883] updating cluster {Name:old-k8s-version-171598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-171598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 01:04:38.222254   61989 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0924 01:04:38.222300   61989 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 01:04:38.276802   61989 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0924 01:04:38.276864   61989 ssh_runner.go:195] Run: which lz4
	I0924 01:04:38.280989   61989 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 01:04:38.285108   61989 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 01:04:38.285138   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0924 01:04:39.903777   61989 crio.go:462] duration metric: took 1.62282331s to copy over tarball
	I0924 01:04:39.903900   61989 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
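	
	The preload step above copies preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 to the VM and unpacks it with tar -I lz4 into /var. As a hedged sketch of the same idea in Go (not what minikube runs, which shells out to tar as logged), the archive can be streamed through github.com/pierrec/lz4/v4 into archive/tar; this sketch deliberately skips symlinks, ownership and the security.capability xattrs that the real invocation preserves.
	
	// Hypothetical sketch: extract a .tar.lz4 archive into a directory.
	package main
	
	import (
		"archive/tar"
		"io"
		"os"
		"path/filepath"
	
		"github.com/pierrec/lz4/v4"
	)
	
	func extract(archive, dest string) error {
		f, err := os.Open(archive)
		if err != nil {
			return err
		}
		defer f.Close()
	
		tr := tar.NewReader(lz4.NewReader(f))
		for {
			hdr, err := tr.Next()
			if err == io.EOF {
				return nil
			}
			if err != nil {
				return err
			}
			target := filepath.Join(dest, hdr.Name)
			switch hdr.Typeflag {
			case tar.TypeDir:
				if err := os.MkdirAll(target, os.FileMode(hdr.Mode)); err != nil {
					return err
				}
			case tar.TypeReg:
				if err := os.MkdirAll(filepath.Dir(target), 0o755); err != nil {
					return err
				}
				out, err := os.OpenFile(target, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, os.FileMode(hdr.Mode))
				if err != nil {
					return err
				}
				if _, err := io.Copy(out, tr); err != nil {
					out.Close()
					return err
				}
				out.Close()
			}
		}
	}
	
	func main() {
		if err := extract("/preloaded.tar.lz4", "/var"); err != nil {
			panic(err)
		}
	}
	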
	I0924 01:04:38.982586   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:38.983239   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:38.983283   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:38.983219   63121 retry.go:31] will retry after 402.217062ms: waiting for machine to come up
	I0924 01:04:39.386903   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:39.387550   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:39.387578   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:39.387483   63121 retry.go:31] will retry after 734.565994ms: waiting for machine to come up
	I0924 01:04:40.123444   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:40.123910   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:40.123940   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:40.123870   63121 retry.go:31] will retry after 704.281941ms: waiting for machine to come up
	I0924 01:04:40.829666   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:40.830217   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:40.830275   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:40.830209   63121 retry.go:31] will retry after 1.068502434s: waiting for machine to come up
	I0924 01:04:41.900192   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:41.900739   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:41.900765   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:41.900691   63121 retry.go:31] will retry after 1.087234201s: waiting for machine to come up
	I0924 01:04:42.989622   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:42.990089   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:42.990117   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:42.990036   63121 retry.go:31] will retry after 1.269273138s: waiting for machine to come up
	I0924 01:04:39.168613   61699 node_ready.go:49] node "default-k8s-diff-port-465341" has status "Ready":"True"
	I0924 01:04:39.168638   61699 node_ready.go:38] duration metric: took 5.504799687s for node "default-k8s-diff-port-465341" to be "Ready" ...
	I0924 01:04:39.168650   61699 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:04:39.175830   61699 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xxdh2" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:39.182016   61699 pod_ready.go:93] pod "coredns-7c65d6cfc9-xxdh2" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:39.182040   61699 pod_ready.go:82] duration metric: took 6.182193ms for pod "coredns-7c65d6cfc9-xxdh2" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:39.182052   61699 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:39.188162   61699 pod_ready.go:93] pod "etcd-default-k8s-diff-port-465341" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:39.188191   61699 pod_ready.go:82] duration metric: took 6.130794ms for pod "etcd-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:39.188201   61699 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:39.196197   61699 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-465341" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:39.196225   61699 pod_ready.go:82] duration metric: took 8.016123ms for pod "kube-apiserver-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:39.196238   61699 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:40.703747   61699 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-465341" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:40.703776   61699 pod_ready.go:82] duration metric: took 1.507528182s for pod "kube-controller-manager-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:40.703791   61699 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nf8mp" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:40.771262   61699 pod_ready.go:93] pod "kube-proxy-nf8mp" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:40.771293   61699 pod_ready.go:82] duration metric: took 67.494606ms for pod "kube-proxy-nf8mp" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:40.771307   61699 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:42.778933   61699 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-465341" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:40.908876   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:43.409650   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:42.944929   61989 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.040984911s)
	I0924 01:04:42.944969   61989 crio.go:469] duration metric: took 3.041152253s to extract the tarball
	I0924 01:04:42.944981   61989 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0924 01:04:42.988315   61989 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 01:04:43.036011   61989 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0924 01:04:43.036045   61989 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0924 01:04:43.036151   61989 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:04:43.036194   61989 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0924 01:04:43.036211   61989 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0924 01:04:43.036281   61989 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 01:04:43.036301   61989 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0924 01:04:43.036344   61989 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0924 01:04:43.036310   61989 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0924 01:04:43.036577   61989 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0924 01:04:43.038440   61989 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0924 01:04:43.038458   61989 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0924 01:04:43.038482   61989 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0924 01:04:43.038502   61989 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0924 01:04:43.038554   61989 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0924 01:04:43.038588   61989 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 01:04:43.038600   61989 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0924 01:04:43.038816   61989 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:04:43.306768   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0924 01:04:43.309660   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0924 01:04:43.312684   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 01:04:43.314551   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0924 01:04:43.317719   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0924 01:04:43.326063   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0924 01:04:43.378736   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0924 01:04:43.405508   61989 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0924 01:04:43.405585   61989 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0924 01:04:43.405648   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:43.452908   61989 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0924 01:04:43.452954   61989 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0924 01:04:43.453006   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:43.471293   61989 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0924 01:04:43.471341   61989 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0924 01:04:43.471347   61989 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0924 01:04:43.471370   61989 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0924 01:04:43.471297   61989 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0924 01:04:43.471406   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:43.471421   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:43.471423   61989 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 01:04:43.471462   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:43.494687   61989 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0924 01:04:43.494735   61989 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0924 01:04:43.494782   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:43.508206   61989 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0924 01:04:43.508253   61989 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0924 01:04:43.508278   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0924 01:04:43.508298   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:43.508363   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0924 01:04:43.508419   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0924 01:04:43.508451   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0924 01:04:43.508487   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 01:04:43.508547   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0924 01:04:43.645995   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0924 01:04:43.646039   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0924 01:04:43.646098   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0924 01:04:43.646152   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0924 01:04:43.646261   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0924 01:04:43.646337   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0924 01:04:43.646413   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 01:04:43.817326   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0924 01:04:43.817416   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0924 01:04:43.817381   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0924 01:04:43.817508   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 01:04:43.817449   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0924 01:04:43.817597   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0924 01:04:43.817686   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0924 01:04:43.972782   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0924 01:04:43.972792   61989 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0924 01:04:43.972869   61989 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0924 01:04:43.972838   61989 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0924 01:04:43.972928   61989 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0924 01:04:43.972944   61989 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0924 01:04:43.973027   61989 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0924 01:04:44.008191   61989 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0924 01:04:44.220628   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:04:44.364297   61989 cache_images.go:92] duration metric: took 1.328227964s to LoadCachedImages
	W0924 01:04:44.364505   61989 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0924 01:04:44.364539   61989 kubeadm.go:934] updating node { 192.168.83.3 8443 v1.20.0 crio true true} ...
	I0924 01:04:44.364681   61989 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-171598 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-171598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 01:04:44.364824   61989 ssh_runner.go:195] Run: crio config
	I0924 01:04:44.423360   61989 cni.go:84] Creating CNI manager for ""
	I0924 01:04:44.423382   61989 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:04:44.423393   61989 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 01:04:44.423412   61989 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.3 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-171598 NodeName:old-k8s-version-171598 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0924 01:04:44.423593   61989 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-171598"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 01:04:44.423671   61989 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0924 01:04:44.434069   61989 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 01:04:44.434143   61989 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 01:04:44.443807   61989 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (428 bytes)
	I0924 01:04:44.463473   61989 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 01:04:44.480449   61989 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2117 bytes)
	I0924 01:04:44.498520   61989 ssh_runner.go:195] Run: grep 192.168.83.3	control-plane.minikube.internal$ /etc/hosts
	I0924 01:04:44.503034   61989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.3	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:04:44.516699   61989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:04:44.643090   61989 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 01:04:44.660194   61989 certs.go:68] Setting up /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598 for IP: 192.168.83.3
	I0924 01:04:44.660216   61989 certs.go:194] generating shared ca certs ...
	I0924 01:04:44.660234   61989 certs.go:226] acquiring lock for ca certs: {Name:mk91de42c259e96c17f08dd58c70c00821bd0735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:04:44.660454   61989 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key
	I0924 01:04:44.660542   61989 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key
	I0924 01:04:44.660559   61989 certs.go:256] generating profile certs ...
	I0924 01:04:44.660682   61989 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/client.key
	I0924 01:04:44.660755   61989 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/apiserver.key.577554d3
	I0924 01:04:44.660816   61989 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/proxy-client.key
	I0924 01:04:44.660976   61989 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem (1338 bytes)
	W0924 01:04:44.661014   61989 certs.go:480] ignoring /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793_empty.pem, impossibly tiny 0 bytes
	I0924 01:04:44.661026   61989 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem (1675 bytes)
	I0924 01:04:44.661071   61989 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem (1082 bytes)
	I0924 01:04:44.661104   61989 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem (1123 bytes)
	I0924 01:04:44.661133   61989 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem (1679 bytes)
	I0924 01:04:44.661211   61989 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem (1708 bytes)
	I0924 01:04:44.662130   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 01:04:44.710279   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 01:04:44.736824   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 01:04:44.773120   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0924 01:04:44.801137   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0924 01:04:44.844946   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 01:04:44.880871   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 01:04:44.908630   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0924 01:04:44.947148   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 01:04:44.971925   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem --> /usr/share/ca-certificates/14793.pem (1338 bytes)
	I0924 01:04:45.000519   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /usr/share/ca-certificates/147932.pem (1708 bytes)
	I0924 01:04:45.034167   61989 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 01:04:45.054932   61989 ssh_runner.go:195] Run: openssl version
	I0924 01:04:45.062733   61989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14793.pem && ln -fs /usr/share/ca-certificates/14793.pem /etc/ssl/certs/14793.pem"
	I0924 01:04:45.076993   61989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14793.pem
	I0924 01:04:45.082104   61989 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 23:55 /usr/share/ca-certificates/14793.pem
	I0924 01:04:45.082175   61989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14793.pem
	I0924 01:04:45.088219   61989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14793.pem /etc/ssl/certs/51391683.0"
	I0924 01:04:45.099211   61989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147932.pem && ln -fs /usr/share/ca-certificates/147932.pem /etc/ssl/certs/147932.pem"
	I0924 01:04:45.111178   61989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147932.pem
	I0924 01:04:45.116551   61989 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 23:55 /usr/share/ca-certificates/147932.pem
	I0924 01:04:45.116624   61989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147932.pem
	I0924 01:04:45.122353   61989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147932.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 01:04:45.133490   61989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 01:04:45.144123   61989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:45.150437   61989 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:45.150498   61989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:45.157127   61989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
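
The ln -fs pairs above install each CA into the guest trust store twice: once under its file name and once under its OpenSSL subject hash (51391683.0, 3ec20f2e.0, b5213941.0), which is the name TLS libraries look up when scanning /etc/ssl/certs. A minimal Go sketch of that link-then-alias pattern, assuming the PEM already sits under /usr/share/ca-certificates and that sudo is available; illustrative only, not minikube's implementation:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // trustCert links an already-copied PEM into /etc/ssl/certs under both its
    // file name and its OpenSSL subject hash, mirroring the ln -fs pairs in the log.
    func trustCert(pem string) error {
        byName := filepath.Join("/etc/ssl/certs", filepath.Base(pem))
        if err := exec.Command("sudo", "ln", "-fs", pem, byName).Run(); err != nil {
            return err
        }
        // openssl prints the subject hash used for the <hash>.0 naming convention.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return err
        }
        byHash := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        return exec.Command("sudo", "ln", "-fs", byName, byHash).Run()
    }

    func main() {
        // Path taken from the log; adjust for the other certs.
        if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
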
	I0924 01:04:45.168217   61989 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 01:04:45.172865   61989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 01:04:45.179177   61989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 01:04:45.184987   61989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 01:04:45.190927   61989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 01:04:45.197134   61989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 01:04:45.203170   61989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
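
Each openssl x509 -checkend 86400 call above asks whether the certificate expires within the next 24 hours; a non-zero exit would force regeneration before the cluster is brought up. The same check written against Go's crypto/x509, for one of the paths from the log (a sketch of the technique, not the code that produced these lines):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the certificate at path expires within d,
    // which is what "openssl x509 -checkend 86400" checks for 24 hours.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        // One of the certificates checked in the log.
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("expires within 24h:", soon)
    }
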
	I0924 01:04:45.209550   61989 kubeadm.go:392] StartCluster: {Name:old-k8s-version-171598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-171598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false M
ountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 01:04:45.209721   61989 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 01:04:45.209778   61989 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 01:04:45.247564   61989 cri.go:89] found id: ""
	I0924 01:04:45.247635   61989 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 01:04:45.258171   61989 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 01:04:45.258195   61989 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 01:04:45.258269   61989 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 01:04:45.268247   61989 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 01:04:45.269656   61989 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-171598" does not appear in /home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 01:04:45.270486   61989 kubeconfig.go:62] /home/jenkins/minikube-integration/19696-7623/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-171598" cluster setting kubeconfig missing "old-k8s-version-171598" context setting]
	I0924 01:04:45.271918   61989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/kubeconfig: {Name:mk3b29105ccc2e600682c419fcfd45ce5d22b5fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:04:45.277260   61989 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 01:04:45.287239   61989 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.83.3
	I0924 01:04:45.287271   61989 kubeadm.go:1160] stopping kube-system containers ...
	I0924 01:04:45.287281   61989 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0924 01:04:45.287325   61989 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 01:04:45.327991   61989 cri.go:89] found id: ""
	I0924 01:04:45.328071   61989 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 01:04:45.344693   61989 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 01:04:45.354414   61989 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 01:04:45.354439   61989 kubeadm.go:157] found existing configuration files:
	
	I0924 01:04:45.354499   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 01:04:45.363765   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 01:04:45.363838   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 01:04:45.373569   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 01:04:45.382401   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 01:04:45.382464   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 01:04:45.392710   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 01:04:45.402855   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 01:04:45.402919   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 01:04:45.413651   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 01:04:45.423818   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 01:04:45.423873   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 01:04:45.434138   61989 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 01:04:45.444119   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:45.582409   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:44.261681   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:44.262330   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:44.262360   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:44.262274   63121 retry.go:31] will retry after 1.755704993s: waiting for machine to come up
	I0924 01:04:46.019761   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:46.020213   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:46.020242   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:46.020155   63121 retry.go:31] will retry after 2.038509067s: waiting for machine to come up
	I0924 01:04:48.060649   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:48.061170   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:48.061201   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:48.061122   63121 retry.go:31] will retry after 2.834284151s: waiting for machine to come up
	I0924 01:04:45.021172   61699 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-465341" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:45.021200   61699 pod_ready.go:82] duration metric: took 4.249884358s for pod "kube-scheduler-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:45.021213   61699 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:47.028860   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:45.908530   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:48.407714   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:46.245754   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:46.511218   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:46.608877   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
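
Because existing configuration files were found, the restart path re-runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml instead of performing a full init. A standalone sketch of that phase sequence; the phase names, order, and config path come from the log, everything else is assumed:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Phase order and config path as seen in the log; the kubeadm binary is
        // assumed to be on PATH here instead of /var/lib/minikube/binaries.
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, p := range phases {
            args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
            cmd := exec.Command("kubeadm", args...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", p, err)
                os.Exit(1)
            }
        }
    }
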
	I0924 01:04:46.722521   61989 api_server.go:52] waiting for apiserver process to appear ...
	I0924 01:04:46.722607   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:47.222945   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:47.723437   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:48.223704   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:48.723517   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:49.223744   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:49.722691   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:50.222927   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:50.723331   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:50.897541   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:50.898047   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:50.898093   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:50.898018   63121 retry.go:31] will retry after 4.166792416s: waiting for machine to come up
	I0924 01:04:49.530215   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:52.027812   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:50.907425   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:52.907568   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:54.908623   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:51.223525   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:51.722715   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:52.223281   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:52.723378   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:53.222798   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:53.722883   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:54.223279   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:54.723155   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:55.222994   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:55.723628   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
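
The repeated pgrep calls above form a poll loop: once the control-plane phases have run, minikube waits for a kube-apiserver process whose command line mentions minikube before moving on. A minimal stand-in for such a loop; the 500ms interval matches the log timestamps, while the overall timeout here is an assumption:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(5 * time.Minute) // timeout assumed, not from the log
        for time.Now().Before(deadline) {
            // Same process match the log runs over SSH.
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                fmt.Println("kube-apiserver process found")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Fprintln(os.Stderr, "timed out waiting for kube-apiserver")
        os.Exit(1)
    }
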
	I0924 01:04:55.068642   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.069305   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has current primary IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.069330   61070 main.go:141] libmachine: (no-preload-674057) Found IP for machine: 192.168.50.161
	I0924 01:04:55.069339   61070 main.go:141] libmachine: (no-preload-674057) Reserving static IP address...
	I0924 01:04:55.070035   61070 main.go:141] libmachine: (no-preload-674057) Reserved static IP address: 192.168.50.161
	I0924 01:04:55.070065   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "no-preload-674057", mac: "52:54:00:01:7a:1a", ip: "192.168.50.161"} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.070073   61070 main.go:141] libmachine: (no-preload-674057) Waiting for SSH to be available...
	I0924 01:04:55.070090   61070 main.go:141] libmachine: (no-preload-674057) DBG | skip adding static IP to network mk-no-preload-674057 - found existing host DHCP lease matching {name: "no-preload-674057", mac: "52:54:00:01:7a:1a", ip: "192.168.50.161"}
	I0924 01:04:55.070095   61070 main.go:141] libmachine: (no-preload-674057) DBG | Getting to WaitForSSH function...
	I0924 01:04:55.072715   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.073106   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.073140   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.073351   61070 main.go:141] libmachine: (no-preload-674057) DBG | Using SSH client type: external
	I0924 01:04:55.073379   61070 main.go:141] libmachine: (no-preload-674057) DBG | Using SSH private key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/no-preload-674057/id_rsa (-rw-------)
	I0924 01:04:55.073405   61070 main.go:141] libmachine: (no-preload-674057) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.161 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19696-7623/.minikube/machines/no-preload-674057/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 01:04:55.073444   61070 main.go:141] libmachine: (no-preload-674057) DBG | About to run SSH command:
	I0924 01:04:55.073462   61070 main.go:141] libmachine: (no-preload-674057) DBG | exit 0
	I0924 01:04:55.200585   61070 main.go:141] libmachine: (no-preload-674057) DBG | SSH cmd err, output: <nil>: 
	I0924 01:04:55.200980   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetConfigRaw
	I0924 01:04:55.201650   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetIP
	I0924 01:04:55.204919   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.205340   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.205360   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.205638   61070 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/config.json ...
	I0924 01:04:55.205881   61070 machine.go:93] provisionDockerMachine start ...
	I0924 01:04:55.205903   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:04:55.206124   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:55.208572   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.209012   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.209037   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.209218   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:04:55.209499   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:55.209693   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:55.209832   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:04:55.210010   61070 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:55.210249   61070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0924 01:04:55.210263   61070 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 01:04:55.317027   61070 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 01:04:55.317067   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetMachineName
	I0924 01:04:55.317403   61070 buildroot.go:166] provisioning hostname "no-preload-674057"
	I0924 01:04:55.317441   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetMachineName
	I0924 01:04:55.317700   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:55.320886   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.321301   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.321330   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.321443   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:04:55.321643   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:55.321853   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:55.322010   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:04:55.322169   61070 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:55.322343   61070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0924 01:04:55.322360   61070 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-674057 && echo "no-preload-674057" | sudo tee /etc/hostname
	I0924 01:04:55.439098   61070 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-674057
	
	I0924 01:04:55.439134   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:55.441909   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.442212   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.442256   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.442430   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:04:55.442667   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:55.442890   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:55.443078   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:04:55.443301   61070 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:55.443460   61070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0924 01:04:55.443474   61070 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-674057' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-674057/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-674057' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 01:04:55.558172   61070 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 01:04:55.558204   61070 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19696-7623/.minikube CaCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19696-7623/.minikube}
	I0924 01:04:55.558225   61070 buildroot.go:174] setting up certificates
	I0924 01:04:55.558236   61070 provision.go:84] configureAuth start
	I0924 01:04:55.558248   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetMachineName
	I0924 01:04:55.558574   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetIP
	I0924 01:04:55.561503   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.561891   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.561917   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.562089   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:55.564426   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.564800   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.564825   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.564958   61070 provision.go:143] copyHostCerts
	I0924 01:04:55.565009   61070 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem, removing ...
	I0924 01:04:55.565018   61070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0924 01:04:55.565074   61070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem (1082 bytes)
	I0924 01:04:55.565167   61070 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem, removing ...
	I0924 01:04:55.565175   61070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0924 01:04:55.565194   61070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem (1123 bytes)
	I0924 01:04:55.565253   61070 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem, removing ...
	I0924 01:04:55.565263   61070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0924 01:04:55.565285   61070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem (1679 bytes)
	I0924 01:04:55.565372   61070 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem org=jenkins.no-preload-674057 san=[127.0.0.1 192.168.50.161 localhost minikube no-preload-674057]
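
The server cert generated here is signed by the machine CA and carries the SAN list printed above (127.0.0.1, 192.168.50.161, localhost, minikube, no-preload-674057), with the 26280h (three year) lifetime from the profile config. A hedged sketch of that signing step with crypto/x509, assuming local ca.pem and ca-key.pem files holding a PKCS#1 RSA key; it illustrates the technique rather than reproducing minikube's code:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    // mustDecode keeps the sketch short; real code would propagate errors.
    func mustDecode(data []byte) []byte {
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        return block.Bytes
    }

    func main() {
        caPEM, err := os.ReadFile("ca.pem")
        if err != nil {
            log.Fatal(err)
        }
        caKeyPEM, err := os.ReadFile("ca-key.pem")
        if err != nil {
            log.Fatal(err)
        }
        caCert, err := x509.ParseCertificate(mustDecode(caPEM))
        if err != nil {
            log.Fatal(err)
        }
        caKey, err := x509.ParsePKCS1PrivateKey(mustDecode(caKeyPEM)) // assumes a PKCS#1 RSA CA key
        if err != nil {
            log.Fatal(err)
        }
        serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-674057"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs printed in the log line above.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.161")},
            DNSNames:    []string{"localhost", "minikube", "no-preload-674057"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        certOut := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
        keyOut := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)})
        if err := os.WriteFile("server.pem", certOut, 0o644); err != nil {
            log.Fatal(err)
        }
        if err := os.WriteFile("server-key.pem", keyOut, 0o600); err != nil {
            log.Fatal(err)
        }
    }
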
	I0924 01:04:55.649690   61070 provision.go:177] copyRemoteCerts
	I0924 01:04:55.649750   61070 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 01:04:55.649774   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:55.652790   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.653249   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.653278   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.653567   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:04:55.653772   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:55.653936   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:04:55.654059   61070 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/no-preload-674057/id_rsa Username:docker}
	I0924 01:04:55.738522   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0924 01:04:55.764045   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0924 01:04:55.788225   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0924 01:04:55.811207   61070 provision.go:87] duration metric: took 252.958643ms to configureAuth
	I0924 01:04:55.811233   61070 buildroot.go:189] setting minikube options for container-runtime
	I0924 01:04:55.811415   61070 config.go:182] Loaded profile config "no-preload-674057": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 01:04:55.811503   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:55.814921   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.815366   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.815400   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.815597   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:04:55.815826   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:55.816039   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:55.816212   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:04:55.816496   61070 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:55.816740   61070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0924 01:04:55.816756   61070 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 01:04:56.045600   61070 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 01:04:56.045632   61070 machine.go:96] duration metric: took 839.736907ms to provisionDockerMachine
	I0924 01:04:56.045646   61070 start.go:293] postStartSetup for "no-preload-674057" (driver="kvm2")
	I0924 01:04:56.045660   61070 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 01:04:56.045679   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:04:56.045997   61070 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 01:04:56.046027   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:56.049081   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.049522   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:56.049559   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.049743   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:04:56.049960   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:56.050105   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:04:56.050245   61070 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/no-preload-674057/id_rsa Username:docker}
	I0924 01:04:56.136652   61070 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 01:04:56.140894   61070 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 01:04:56.140920   61070 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/addons for local assets ...
	I0924 01:04:56.140987   61070 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/files for local assets ...
	I0924 01:04:56.141071   61070 filesync.go:149] local asset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> 147932.pem in /etc/ssl/certs
	I0924 01:04:56.141161   61070 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 01:04:56.151170   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /etc/ssl/certs/147932.pem (1708 bytes)
	I0924 01:04:56.179268   61070 start.go:296] duration metric: took 133.605527ms for postStartSetup
	I0924 01:04:56.179318   61070 fix.go:56] duration metric: took 19.461975001s for fixHost
	I0924 01:04:56.179344   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:56.182567   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.182902   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:56.182927   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.183091   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:04:56.183320   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:56.183562   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:56.183720   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:04:56.183865   61070 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:56.184036   61070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0924 01:04:56.184045   61070 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 01:04:56.289079   61070 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727139896.261476318
	
	I0924 01:04:56.289113   61070 fix.go:216] guest clock: 1727139896.261476318
	I0924 01:04:56.289121   61070 fix.go:229] Guest: 2024-09-24 01:04:56.261476318 +0000 UTC Remote: 2024-09-24 01:04:56.17932382 +0000 UTC m=+357.500342999 (delta=82.152498ms)
	I0924 01:04:56.289141   61070 fix.go:200] guest clock delta is within tolerance: 82.152498ms
	I0924 01:04:56.289156   61070 start.go:83] releasing machines lock for "no-preload-674057", held for 19.57184993s
	I0924 01:04:56.289175   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:04:56.289441   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetIP
	I0924 01:04:56.292799   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.293122   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:56.293148   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.293327   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:04:56.293832   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:04:56.293990   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:04:56.294073   61070 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 01:04:56.294108   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:56.294271   61070 ssh_runner.go:195] Run: cat /version.json
	I0924 01:04:56.294299   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:56.296962   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.297113   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.297300   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:56.297325   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.297473   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:56.297504   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.297526   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:04:56.297665   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:04:56.297737   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:56.297858   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:56.297926   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:04:56.297968   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:04:56.298044   61070 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/no-preload-674057/id_rsa Username:docker}
	I0924 01:04:56.298139   61070 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/no-preload-674057/id_rsa Username:docker}
	I0924 01:04:56.373014   61070 ssh_runner.go:195] Run: systemctl --version
	I0924 01:04:56.412487   61070 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 01:04:56.558755   61070 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 01:04:56.565187   61070 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 01:04:56.565245   61070 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 01:04:56.582073   61070 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 01:04:56.582102   61070 start.go:495] detecting cgroup driver to use...
	I0924 01:04:56.582167   61070 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 01:04:56.597553   61070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 01:04:56.612515   61070 docker.go:217] disabling cri-docker service (if available) ...
	I0924 01:04:56.612564   61070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 01:04:56.627596   61070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 01:04:56.641619   61070 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 01:04:56.762636   61070 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 01:04:56.917742   61070 docker.go:233] disabling docker service ...
	I0924 01:04:56.917821   61070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 01:04:56.934585   61070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 01:04:56.949194   61070 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 01:04:57.085465   61070 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 01:04:57.230529   61070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 01:04:57.245369   61070 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 01:04:57.265137   61070 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 01:04:57.265196   61070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:57.276878   61070 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 01:04:57.276936   61070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:57.288934   61070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:57.300690   61070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:57.312392   61070 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 01:04:57.324491   61070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:57.335619   61070 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:57.352868   61070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:57.363280   61070 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 01:04:57.372811   61070 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 01:04:57.372866   61070 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 01:04:57.385797   61070 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 01:04:57.395936   61070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:04:57.532086   61070 ssh_runner.go:195] Run: sudo systemctl restart crio
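
Taken together, the sed edits above leave the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf pointing at the registry.k8s.io/pause:3.10 pause image, using the cgroupfs cgroup manager with conmon in the pod cgroup, and allowing unprivileged low ports. Roughly, the relevant lines of the drop-in should then read as follows (a reconstruction from the commands in the log; the section headers are assumed from CRI-O's stock layout, and the actual file on the VM is not shown here):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
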
	I0924 01:04:57.628275   61070 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 01:04:57.628370   61070 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 01:04:57.633679   61070 start.go:563] Will wait 60s for crictl version
	I0924 01:04:57.633761   61070 ssh_runner.go:195] Run: which crictl
	I0924 01:04:57.637574   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 01:04:57.679667   61070 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 01:04:57.679756   61070 ssh_runner.go:195] Run: crio --version
	I0924 01:04:57.707710   61070 ssh_runner.go:195] Run: crio --version
	I0924 01:04:57.738651   61070 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 01:04:57.740120   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetIP
	I0924 01:04:57.743379   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:57.743783   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:57.743814   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:57.744048   61070 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0924 01:04:57.748516   61070 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:04:57.762723   61070 kubeadm.go:883] updating cluster {Name:no-preload-674057 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:no-preload-674057 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.161 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 01:04:57.762864   61070 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 01:04:57.762906   61070 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 01:04:57.798232   61070 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0924 01:04:57.798260   61070 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0924 01:04:57.798334   61070 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:04:57.798357   61070 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0924 01:04:57.798377   61070 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0924 01:04:57.798340   61070 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0924 01:04:57.798397   61070 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 01:04:57.798381   61070 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0924 01:04:57.798491   61070 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0924 01:04:57.798491   61070 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0924 01:04:57.799811   61070 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:04:57.799819   61070 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0924 01:04:57.799826   61070 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0924 01:04:57.799811   61070 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 01:04:57.799840   61070 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0924 01:04:57.799893   61070 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0924 01:04:57.799902   61070 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0924 01:04:57.799903   61070 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0924 01:04:58.027261   61070 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0924 01:04:58.028437   61070 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0924 01:04:58.051940   61070 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0924 01:04:58.082860   61070 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0924 01:04:58.088073   61070 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0924 01:04:58.088121   61070 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0924 01:04:58.088190   61070 ssh_runner.go:195] Run: which crictl
	I0924 01:04:58.095081   61070 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 01:04:58.098388   61070 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0924 01:04:58.152389   61070 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0924 01:04:58.190893   61070 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0924 01:04:58.190920   61070 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0924 01:04:58.190934   61070 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0924 01:04:58.190944   61070 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0924 01:04:58.190984   61070 ssh_runner.go:195] Run: which crictl
	I0924 01:04:58.191029   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0924 01:04:58.190988   61070 ssh_runner.go:195] Run: which crictl
	I0924 01:04:58.191080   61070 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0924 01:04:58.191109   61070 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 01:04:58.191134   61070 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0924 01:04:58.191144   61070 ssh_runner.go:195] Run: which crictl
	I0924 01:04:58.191157   61070 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0924 01:04:58.191185   61070 ssh_runner.go:195] Run: which crictl
	I0924 01:04:58.219642   61070 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0924 01:04:58.219689   61070 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0924 01:04:58.219703   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 01:04:58.219729   61070 ssh_runner.go:195] Run: which crictl
	I0924 01:04:58.219741   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0924 01:04:58.219745   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0924 01:04:58.250341   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0924 01:04:58.250394   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0924 01:04:58.320188   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0924 01:04:58.320222   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 01:04:58.320308   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0924 01:04:58.320394   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0924 01:04:58.383126   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0924 01:04:58.383327   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0924 01:04:58.453833   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 01:04:58.453918   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0924 01:04:58.453878   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0924 01:04:58.453923   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0924 01:04:58.499994   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0924 01:04:58.500027   61070 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0924 01:04:58.500119   61070 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0924 01:04:58.583372   61070 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0924 01:04:58.583491   61070 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0924 01:04:58.586213   61070 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0924 01:04:58.586281   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0924 01:04:58.586325   61070 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0924 01:04:58.586328   61070 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0924 01:04:58.586405   61070 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0924 01:04:58.616022   61070 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0924 01:04:58.616061   61070 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0924 01:04:58.616082   61070 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0924 01:04:58.616118   61070 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0924 01:04:58.616131   61070 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0924 01:04:58.616180   61070 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0924 01:04:58.616128   61070 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0924 01:04:58.647507   61070 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0924 01:04:58.647576   61070 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0924 01:04:58.647620   61070 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0924 01:04:58.647659   61070 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0924 01:04:54.527399   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:57.028355   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:57.407381   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:59.908596   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
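The pod_ready lines above (and the many that follow) come from a poller that keeps checking whether the pod's Ready condition has turned True. Purely as an illustrative sketch, and not minikube's actual helper, that check reduces to something like the following client-go snippet; the kubeconfig path and the pod name are assumptions borrowed from the log for the example.

// Sketch only: query one pod and report whether its Ready condition is True.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady returns true when the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig path for the example; the test harness uses the profile's own kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Pod name taken from the log lines above.
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "metrics-server-6867b74b74-jtx6r", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %q Ready=%v\n", pod.Name, podReady(pod))
}

In the failing runs this condition never becomes True, which is why the same "Ready":"False" line repeats until the test's wait deadline expires.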
	I0924 01:04:56.222908   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:56.722701   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:57.222762   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:57.722814   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:58.222671   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:58.722746   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:59.222961   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:59.723335   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:00.223393   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:00.722739   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:59.003431   61070 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:05:00.815541   61070 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.199297236s)
	I0924 01:05:00.815566   61070 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (2.167859705s)
	I0924 01:05:00.815579   61070 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0924 01:05:00.815599   61070 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0924 01:05:00.815619   61070 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0924 01:05:00.815625   61070 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.812143064s)
	I0924 01:05:00.815674   61070 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0924 01:05:00.815687   61070 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0924 01:05:00.815710   61070 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:05:00.815750   61070 ssh_runner.go:195] Run: which crictl
	I0924 01:05:02.782328   61070 ssh_runner.go:235] Completed: which crictl: (1.966554191s)
	I0924 01:05:02.782392   61070 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.966688239s)
	I0924 01:05:02.782421   61070 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0924 01:05:02.782445   61070 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0924 01:05:02.782497   61070 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0924 01:05:02.782404   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:04:59.529167   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:01.531324   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:04.028305   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:02.407051   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:04.475255   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:01.222765   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:01.722729   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:02.223407   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:02.722799   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:03.223381   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:03.723427   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:04.223157   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:04.723069   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:05.223400   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:05.723739   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:04.773493   61070 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.990910382s)
	I0924 01:05:04.773540   61070 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.99101415s)
	I0924 01:05:04.773560   61070 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0924 01:05:04.773577   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:05:04.773584   61070 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0924 01:05:04.773615   61070 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0924 01:05:08.061466   61070 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.287832238s)
	I0924 01:05:08.061499   61070 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0924 01:05:08.061510   61070 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.287911454s)
	I0924 01:05:08.061595   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:05:08.061520   61070 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0924 01:05:08.061690   61070 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0924 01:05:06.029255   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:08.527617   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:06.907268   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:08.907464   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:06.223395   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:06.723345   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:07.222965   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:07.722795   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:08.222933   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:08.723687   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:09.223526   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:09.723684   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:10.223275   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:10.723534   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:10.041517   61070 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.979809714s)
	I0924 01:05:10.041549   61070 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0924 01:05:10.041577   61070 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.979956931s)
	I0924 01:05:10.041625   61070 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0924 01:05:10.041582   61070 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0924 01:05:10.041714   61070 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0924 01:05:10.041727   61070 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0924 01:05:12.005649   61070 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.963906504s)
	I0924 01:05:12.005689   61070 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0924 01:05:12.005696   61070 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.963951454s)
	I0924 01:05:12.005720   61070 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0924 01:05:12.005727   61070 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0924 01:05:12.005768   61070 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0924 01:05:12.960728   61070 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0924 01:05:12.960771   61070 cache_images.go:123] Successfully loaded all cached images
	I0924 01:05:12.960778   61070 cache_images.go:92] duration metric: took 15.162496206s to LoadCachedImages
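The block that just finished is the cached-image path for a no-preload profile: for each required image the tarball under /var/lib/minikube/images is stat'ed, the copy is skipped when it already exists, the stale tag is removed with crictl rmi, and the tarball is loaded into CRI-O's storage with podman load -i. As a rough sketch of that load step only (not the real cache_images.go code), a helper might look like the following; the tarball path is taken from the log.

// Sketch only: load one cached image tarball into the node's container storage.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// loadCachedImage assumes the tarball has already been copied to the node
// (the log skips the copy when the file exists) and runs "sudo podman load -i".
func loadCachedImage(tarball string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("tarball not present: %w", err)
	}
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Example path from the log above; adjust for a real node.
	if err := loadCachedImage("/var/lib/minikube/images/kube-scheduler_v1.31.1"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}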
	I0924 01:05:12.960791   61070 kubeadm.go:934] updating node { 192.168.50.161 8443 v1.31.1 crio true true} ...
	I0924 01:05:12.960931   61070 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-674057 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.161
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-674057 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 01:05:12.961013   61070 ssh_runner.go:195] Run: crio config
	I0924 01:05:13.006511   61070 cni.go:84] Creating CNI manager for ""
	I0924 01:05:13.006535   61070 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:05:13.006551   61070 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 01:05:13.006579   61070 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.161 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-674057 NodeName:no-preload-674057 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.161"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.161 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 01:05:13.006729   61070 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.161
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-674057"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.161
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.161"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 01:05:13.006799   61070 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 01:05:13.017598   61070 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 01:05:13.017672   61070 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 01:05:13.027414   61070 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0924 01:05:13.044688   61070 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 01:05:13.061646   61070 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0924 01:05:13.079552   61070 ssh_runner.go:195] Run: grep 192.168.50.161	control-plane.minikube.internal$ /etc/hosts
	I0924 01:05:13.083172   61070 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.161	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:05:13.095232   61070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:05:13.207184   61070 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 01:05:13.222851   61070 certs.go:68] Setting up /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057 for IP: 192.168.50.161
	I0924 01:05:13.222880   61070 certs.go:194] generating shared ca certs ...
	I0924 01:05:13.222901   61070 certs.go:226] acquiring lock for ca certs: {Name:mk91de42c259e96c17f08dd58c70c00821bd0735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:05:13.223084   61070 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key
	I0924 01:05:13.223184   61070 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key
	I0924 01:05:13.223195   61070 certs.go:256] generating profile certs ...
	I0924 01:05:13.223314   61070 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/client.key
	I0924 01:05:13.223394   61070 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/apiserver.key.8fa8fb95
	I0924 01:05:13.223445   61070 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/proxy-client.key
	I0924 01:05:13.223614   61070 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem (1338 bytes)
	W0924 01:05:13.223654   61070 certs.go:480] ignoring /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793_empty.pem, impossibly tiny 0 bytes
	I0924 01:05:13.223710   61070 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem (1675 bytes)
	I0924 01:05:13.223756   61070 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem (1082 bytes)
	I0924 01:05:13.223785   61070 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem (1123 bytes)
	I0924 01:05:13.223818   61070 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem (1679 bytes)
	I0924 01:05:13.223862   61070 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem (1708 bytes)
	I0924 01:05:13.224549   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 01:05:13.273224   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 01:05:13.311069   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 01:05:13.342314   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0924 01:05:13.369345   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0924 01:05:13.395466   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 01:05:13.424307   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 01:05:13.448531   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0924 01:05:13.472491   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /usr/share/ca-certificates/147932.pem (1708 bytes)
	I0924 01:05:13.496060   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 01:05:13.521182   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem --> /usr/share/ca-certificates/14793.pem (1338 bytes)
	I0924 01:05:13.548194   61070 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 01:05:13.566423   61070 ssh_runner.go:195] Run: openssl version
	I0924 01:05:13.572605   61070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 01:05:13.583991   61070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:05:13.588705   61070 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:05:13.588771   61070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:05:13.594828   61070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 01:05:13.606168   61070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14793.pem && ln -fs /usr/share/ca-certificates/14793.pem /etc/ssl/certs/14793.pem"
	I0924 01:05:13.617723   61070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14793.pem
	I0924 01:05:13.622697   61070 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 23:55 /usr/share/ca-certificates/14793.pem
	I0924 01:05:13.622762   61070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14793.pem
	I0924 01:05:13.628486   61070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14793.pem /etc/ssl/certs/51391683.0"
	I0924 01:05:13.639176   61070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147932.pem && ln -fs /usr/share/ca-certificates/147932.pem /etc/ssl/certs/147932.pem"
	I0924 01:05:13.650161   61070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147932.pem
	I0924 01:05:13.654546   61070 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 23:55 /usr/share/ca-certificates/147932.pem
	I0924 01:05:13.654625   61070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147932.pem
	I0924 01:05:13.660382   61070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147932.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 01:05:13.671487   61070 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 01:05:13.676226   61070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 01:05:13.682591   61070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 01:05:13.688492   61070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 01:05:13.694726   61070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 01:05:13.700432   61070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 01:05:13.706080   61070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0924 01:05:13.712226   61070 kubeadm.go:392] StartCluster: {Name:no-preload-674057 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-674057 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.161 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 01:05:13.712323   61070 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 01:05:13.712421   61070 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 01:05:11.028779   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:13.527996   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:10.908227   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:13.408515   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:11.223272   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:11.723442   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:12.223301   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:12.723151   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:13.223174   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:13.722780   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:14.222777   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:14.722987   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:15.223654   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:15.723449   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:13.757518   61070 cri.go:89] found id: ""
	I0924 01:05:13.757597   61070 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 01:05:13.768318   61070 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 01:05:13.768367   61070 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 01:05:13.768416   61070 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 01:05:13.778918   61070 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 01:05:13.780385   61070 kubeconfig.go:125] found "no-preload-674057" server: "https://192.168.50.161:8443"
	I0924 01:05:13.783392   61070 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 01:05:13.794016   61070 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.161
	I0924 01:05:13.794050   61070 kubeadm.go:1160] stopping kube-system containers ...
	I0924 01:05:13.794085   61070 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0924 01:05:13.794150   61070 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 01:05:13.833511   61070 cri.go:89] found id: ""
	I0924 01:05:13.833596   61070 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 01:05:13.851608   61070 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 01:05:13.861469   61070 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 01:05:13.861510   61070 kubeadm.go:157] found existing configuration files:
	
	I0924 01:05:13.861552   61070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 01:05:13.870700   61070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 01:05:13.870770   61070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 01:05:13.880613   61070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 01:05:13.890336   61070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 01:05:13.890404   61070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 01:05:13.900172   61070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 01:05:13.910408   61070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 01:05:13.910475   61070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 01:05:13.919980   61070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 01:05:13.929398   61070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 01:05:13.929495   61070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 01:05:13.938894   61070 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 01:05:13.948749   61070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:05:14.056463   61070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:05:15.345268   61070 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.288763261s)
	I0924 01:05:15.345317   61070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:05:15.555986   61070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:05:15.626986   61070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:05:15.697665   61070 api_server.go:52] waiting for apiserver process to appear ...
	I0924 01:05:15.697761   61070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:16.198410   61070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:16.698860   61070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:16.715727   61070 api_server.go:72] duration metric: took 1.018058771s to wait for apiserver process to appear ...
	I0924 01:05:16.715756   61070 api_server.go:88] waiting for apiserver healthz status ...
	I0924 01:05:16.715779   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
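From here the restart waits on the apiserver's /healthz endpoint; the later "context deadline exceeded" and "connection refused" lines are individual probes timing out or being rejected while the control plane comes back up. A minimal sketch of that kind of probe, assuming a plain HTTPS GET with a short per-request timeout (minikube itself trusts the cluster CA rather than skipping certificate verification), could look like this:

// Sketch only: poll an apiserver /healthz URL until it answers "ok" or an overall deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		// Short per-request timeout; the log's probes give up after a few seconds each.
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for the sketch: skip verification instead of loading the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never reported healthy at %s", url)
}

func main() {
	// Endpoint taken from the log above.
	if err := waitForHealthz("https://192.168.50.161:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}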
	I0924 01:05:15.528157   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:17.528680   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:15.906930   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:17.907223   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:16.223623   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:16.723625   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:17.223541   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:17.722702   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:18.222919   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:18.722982   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:19.222978   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:19.723547   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:20.223112   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:20.723562   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:21.716809   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 01:05:21.716852   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:19.528769   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:22.028695   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:20.406693   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:22.407036   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:24.906735   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:21.223058   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:21.722680   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:22.223693   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:22.722716   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:23.223387   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:23.722910   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:24.223608   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:24.723144   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:25.223442   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:25.723025   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:26.717768   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 01:05:26.717811   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:24.527568   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:26.527806   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:29.028455   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:27.406994   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:29.906590   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:26.222782   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:26.723271   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:27.223163   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:27.723283   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:28.222782   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:28.723174   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:29.222803   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:29.723029   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:30.223679   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:30.723058   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:31.718277   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 01:05:31.718317   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:31.028690   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:33.527675   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:31.906723   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:34.406306   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:31.223465   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:31.723438   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:32.223673   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:32.722674   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:33.223289   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:33.723651   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:34.223014   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:34.723518   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:35.222860   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:35.723642   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:36.718676   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 01:05:36.718716   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:37.146737   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": read tcp 192.168.50.1:59880->192.168.50.161:8443: read: connection reset by peer
	I0924 01:05:37.215865   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:37.216506   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": dial tcp 192.168.50.161:8443: connect: connection refused
	I0924 01:05:37.716052   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:37.716731   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": dial tcp 192.168.50.161:8443: connect: connection refused
	I0924 01:05:38.216296   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:36.028537   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:38.032544   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:36.406928   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:38.407201   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:36.222680   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:36.723015   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:37.222736   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:37.723185   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:38.223070   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:38.723237   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:39.223640   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:39.723622   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:40.222705   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:40.722909   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:43.217518   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 01:05:43.217557   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:40.527577   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:43.027715   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:40.906522   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:42.906906   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:44.907623   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:41.223105   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:41.723166   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:42.223286   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:42.723048   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:43.223278   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:43.723301   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:44.222712   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:44.723191   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:45.223720   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:45.723044   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:48.217915   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 01:05:48.217982   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:45.028780   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:47.028883   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:47.406680   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:49.907776   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:46.223270   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:46.722902   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:05:46.722980   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:05:46.781519   61989 cri.go:89] found id: ""
	I0924 01:05:46.781551   61989 logs.go:276] 0 containers: []
	W0924 01:05:46.781565   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:05:46.781574   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:05:46.781630   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:05:46.815990   61989 cri.go:89] found id: ""
	I0924 01:05:46.816021   61989 logs.go:276] 0 containers: []
	W0924 01:05:46.816030   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:05:46.816035   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:05:46.816082   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:05:46.848951   61989 cri.go:89] found id: ""
	I0924 01:05:46.848980   61989 logs.go:276] 0 containers: []
	W0924 01:05:46.848989   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:05:46.848995   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:05:46.849062   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:05:46.880731   61989 cri.go:89] found id: ""
	I0924 01:05:46.880756   61989 logs.go:276] 0 containers: []
	W0924 01:05:46.880764   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:05:46.880770   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:05:46.880832   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:05:46.915975   61989 cri.go:89] found id: ""
	I0924 01:05:46.916004   61989 logs.go:276] 0 containers: []
	W0924 01:05:46.916014   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:05:46.916036   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:05:46.916105   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:05:46.954124   61989 cri.go:89] found id: ""
	I0924 01:05:46.954154   61989 logs.go:276] 0 containers: []
	W0924 01:05:46.954162   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:05:46.954168   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:05:46.954233   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:05:46.990454   61989 cri.go:89] found id: ""
	I0924 01:05:46.990489   61989 logs.go:276] 0 containers: []
	W0924 01:05:46.990498   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:05:46.990504   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:05:46.990573   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:05:47.024099   61989 cri.go:89] found id: ""
	I0924 01:05:47.024137   61989 logs.go:276] 0 containers: []
	W0924 01:05:47.024150   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:05:47.024161   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:05:47.024176   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:05:47.153050   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:05:47.153076   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:05:47.153109   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:05:47.223472   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:05:47.223511   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:05:47.267699   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:05:47.267729   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:05:47.314741   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:05:47.314773   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:05:49.828972   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:49.842301   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:05:49.842378   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:05:49.874632   61989 cri.go:89] found id: ""
	I0924 01:05:49.874659   61989 logs.go:276] 0 containers: []
	W0924 01:05:49.874669   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:05:49.874676   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:05:49.874734   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:05:49.912500   61989 cri.go:89] found id: ""
	I0924 01:05:49.912524   61989 logs.go:276] 0 containers: []
	W0924 01:05:49.912532   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:05:49.912543   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:05:49.912592   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:05:49.947297   61989 cri.go:89] found id: ""
	I0924 01:05:49.947320   61989 logs.go:276] 0 containers: []
	W0924 01:05:49.947328   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:05:49.947334   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:05:49.947395   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:05:49.983863   61989 cri.go:89] found id: ""
	I0924 01:05:49.983892   61989 logs.go:276] 0 containers: []
	W0924 01:05:49.983905   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:05:49.983915   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:05:49.983977   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:05:50.022997   61989 cri.go:89] found id: ""
	I0924 01:05:50.023031   61989 logs.go:276] 0 containers: []
	W0924 01:05:50.023044   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:05:50.023053   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:05:50.023109   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:05:50.057829   61989 cri.go:89] found id: ""
	I0924 01:05:50.057863   61989 logs.go:276] 0 containers: []
	W0924 01:05:50.057875   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:05:50.057882   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:05:50.057929   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:05:50.114599   61989 cri.go:89] found id: ""
	I0924 01:05:50.114620   61989 logs.go:276] 0 containers: []
	W0924 01:05:50.114628   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:05:50.114633   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:05:50.114677   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:05:50.147294   61989 cri.go:89] found id: ""
	I0924 01:05:50.147326   61989 logs.go:276] 0 containers: []
	W0924 01:05:50.147334   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:05:50.147345   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:05:50.147378   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:05:50.198362   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:05:50.198402   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:05:50.212381   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:05:50.212415   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:05:50.286216   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:05:50.286261   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:05:50.286279   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:05:50.366794   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:05:50.366827   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:05:53.218617   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 01:05:53.218653   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:49.527980   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:52.027425   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:54.027780   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:51.908078   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:54.406891   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:52.908167   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:52.922279   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:05:52.922353   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:05:52.956677   61989 cri.go:89] found id: ""
	I0924 01:05:52.956708   61989 logs.go:276] 0 containers: []
	W0924 01:05:52.956720   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:05:52.956727   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:05:52.956778   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:05:52.990933   61989 cri.go:89] found id: ""
	I0924 01:05:52.990956   61989 logs.go:276] 0 containers: []
	W0924 01:05:52.990964   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:05:52.990970   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:05:52.991019   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:05:53.025729   61989 cri.go:89] found id: ""
	I0924 01:05:53.025758   61989 logs.go:276] 0 containers: []
	W0924 01:05:53.025768   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:05:53.025778   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:05:53.025838   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:05:53.060238   61989 cri.go:89] found id: ""
	I0924 01:05:53.060269   61989 logs.go:276] 0 containers: []
	W0924 01:05:53.060279   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:05:53.060287   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:05:53.060366   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:05:53.094166   61989 cri.go:89] found id: ""
	I0924 01:05:53.094200   61989 logs.go:276] 0 containers: []
	W0924 01:05:53.094212   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:05:53.094220   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:05:53.094289   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:05:53.129857   61989 cri.go:89] found id: ""
	I0924 01:05:53.129884   61989 logs.go:276] 0 containers: []
	W0924 01:05:53.129892   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:05:53.129898   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:05:53.129955   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:05:53.165857   61989 cri.go:89] found id: ""
	I0924 01:05:53.165890   61989 logs.go:276] 0 containers: []
	W0924 01:05:53.165898   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:05:53.165909   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:05:53.165970   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:05:53.203884   61989 cri.go:89] found id: ""
	I0924 01:05:53.203909   61989 logs.go:276] 0 containers: []
	W0924 01:05:53.203917   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:05:53.203926   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:05:53.203937   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:05:53.258001   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:05:53.258035   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:05:53.271584   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:05:53.271620   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:05:53.341791   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:05:53.341811   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:05:53.341824   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:05:53.424126   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:05:53.424170   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:05:55.962067   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:55.977964   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:05:55.978042   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:05:56.277329   61070 api_server.go:279] https://192.168.50.161:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 01:05:56.277366   61070 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 01:05:56.277385   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:56.302576   61070 api_server.go:279] https://192.168.50.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:05:56.302628   61070 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:05:56.715873   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:56.722458   61070 api_server.go:279] https://192.168.50.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:05:56.722487   61070 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:05:57.216714   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:57.224426   61070 api_server.go:279] https://192.168.50.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:05:57.224474   61070 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:05:57.715976   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:57.725067   61070 api_server.go:279] https://192.168.50.161:8443/healthz returned 200:
	ok
	I0924 01:05:57.734749   61070 api_server.go:141] control plane version: v1.31.1
	I0924 01:05:57.734782   61070 api_server.go:131] duration metric: took 41.019017744s to wait for apiserver health ...
	I0924 01:05:57.734793   61070 cni.go:84] Creating CNI manager for ""
	I0924 01:05:57.734801   61070 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:05:57.736798   61070 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 01:05:57.738285   61070 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 01:05:57.750654   61070 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0924 01:05:57.778587   61070 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 01:05:57.804858   61070 system_pods.go:59] 8 kube-system pods found
	I0924 01:05:57.804907   61070 system_pods.go:61] "coredns-7c65d6cfc9-kshwz" [4393c6ec-abd9-42ce-af67-9e8b768bd49b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0924 01:05:57.804917   61070 system_pods.go:61] "etcd-no-preload-674057" [65cf3acb-8ffa-4f83-8ab9-86ddefc5d829] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0924 01:05:57.804932   61070 system_pods.go:61] "kube-apiserver-no-preload-674057" [7d26a065-faa1-4ba2-96b7-6c9b1ccb5386] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0924 01:05:57.804940   61070 system_pods.go:61] "kube-controller-manager-no-preload-674057" [7c5c6602-1749-4f34-bb63-08161baac6db] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0924 01:05:57.804949   61070 system_pods.go:61] "kube-proxy-fgmwc" [a81419dc-54f5-4bdd-ac2d-f3f7c85b8f50] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0924 01:05:57.804955   61070 system_pods.go:61] "kube-scheduler-no-preload-674057" [d02c8d9a-1897-4506-8029-9608f11520de] Running
	I0924 01:05:57.804965   61070 system_pods.go:61] "metrics-server-6867b74b74-7gbnr" [6ffa0eb7-21d8-4741-9eae-ce7bb9604dec] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:05:57.804975   61070 system_pods.go:61] "storage-provisioner" [a7f99914-8945-4614-afef-d553ea932edf] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0924 01:05:57.804984   61070 system_pods.go:74] duration metric: took 26.369156ms to wait for pod list to return data ...
	I0924 01:05:57.804996   61070 node_conditions.go:102] verifying NodePressure condition ...
	I0924 01:05:57.809068   61070 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 01:05:57.809103   61070 node_conditions.go:123] node cpu capacity is 2
	I0924 01:05:57.809119   61070 node_conditions.go:105] duration metric: took 4.115654ms to run NodePressure ...
	I0924 01:05:57.809137   61070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:05:58.173276   61070 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0924 01:05:58.178398   61070 kubeadm.go:739] kubelet initialised
	I0924 01:05:58.178422   61070 kubeadm.go:740] duration metric: took 5.118555ms waiting for restarted kubelet to initialise ...
	I0924 01:05:58.178429   61070 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:05:58.183646   61070 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-kshwz" in "kube-system" namespace to be "Ready" ...
	I0924 01:05:56.029030   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:58.029256   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:56.407889   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:58.907744   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:56.014681   61989 cri.go:89] found id: ""
	I0924 01:05:56.014716   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.014728   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:05:56.014736   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:05:56.014799   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:05:56.062547   61989 cri.go:89] found id: ""
	I0924 01:05:56.062576   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.062587   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:05:56.062606   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:05:56.062665   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:05:56.100938   61989 cri.go:89] found id: ""
	I0924 01:05:56.100960   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.100969   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:05:56.100974   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:05:56.101039   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:05:56.137694   61989 cri.go:89] found id: ""
	I0924 01:05:56.137722   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.137737   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:05:56.137744   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:05:56.137803   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:05:56.174876   61989 cri.go:89] found id: ""
	I0924 01:05:56.174911   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.174923   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:05:56.174931   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:05:56.174990   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:05:56.208870   61989 cri.go:89] found id: ""
	I0924 01:05:56.208895   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.208905   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:05:56.208913   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:05:56.208971   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:05:56.242476   61989 cri.go:89] found id: ""
	I0924 01:05:56.242508   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.242520   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:05:56.242528   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:05:56.242590   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:05:56.276185   61989 cri.go:89] found id: ""
	I0924 01:05:56.276214   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.276255   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:05:56.276267   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:05:56.276284   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:05:56.332755   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:05:56.332792   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:05:56.346279   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:05:56.346312   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:05:56.419725   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:05:56.419751   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:05:56.419766   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:05:56.500173   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:05:56.500208   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:05:59.083761   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:59.097184   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:05:59.097247   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:05:59.131734   61989 cri.go:89] found id: ""
	I0924 01:05:59.131764   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.131775   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:05:59.131782   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:05:59.131842   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:05:59.169402   61989 cri.go:89] found id: ""
	I0924 01:05:59.169429   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.169439   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:05:59.169446   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:05:59.169521   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:05:59.208235   61989 cri.go:89] found id: ""
	I0924 01:05:59.208260   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.208290   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:05:59.208298   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:05:59.208372   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:05:59.242314   61989 cri.go:89] found id: ""
	I0924 01:05:59.242345   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.242358   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:05:59.242367   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:05:59.242433   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:05:59.281300   61989 cri.go:89] found id: ""
	I0924 01:05:59.281327   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.281337   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:05:59.281344   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:05:59.281407   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:05:59.315336   61989 cri.go:89] found id: ""
	I0924 01:05:59.315369   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.315377   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:05:59.315386   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:05:59.315445   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:05:59.347678   61989 cri.go:89] found id: ""
	I0924 01:05:59.347708   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.347718   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:05:59.347726   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:05:59.347786   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:05:59.381296   61989 cri.go:89] found id: ""
	I0924 01:05:59.381328   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.381340   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:05:59.381352   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:05:59.381369   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:05:59.462939   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:05:59.462971   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:05:59.462990   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:05:59.544967   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:05:59.545004   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:05:59.585079   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:05:59.585106   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:05:59.637897   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:05:59.637940   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:00.190924   61070 pod_ready.go:103] pod "coredns-7c65d6cfc9-kshwz" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:02.192627   61070 pod_ready.go:93] pod "coredns-7c65d6cfc9-kshwz" in "kube-system" namespace has status "Ready":"True"
	I0924 01:06:02.192648   61070 pod_ready.go:82] duration metric: took 4.008971718s for pod "coredns-7c65d6cfc9-kshwz" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:02.192658   61070 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:02.198586   61070 pod_ready.go:93] pod "etcd-no-preload-674057" in "kube-system" namespace has status "Ready":"True"
	I0924 01:06:02.198614   61070 pod_ready.go:82] duration metric: took 5.949433ms for pod "etcd-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:02.198627   61070 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:03.205306   61070 pod_ready.go:93] pod "kube-apiserver-no-preload-674057" in "kube-system" namespace has status "Ready":"True"
	I0924 01:06:03.205331   61070 pod_ready.go:82] duration metric: took 1.006696778s for pod "kube-apiserver-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:03.205342   61070 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:00.528770   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:02.529473   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:01.406620   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:03.407024   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:02.153289   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:02.170582   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:02.170679   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:02.216700   61989 cri.go:89] found id: ""
	I0924 01:06:02.216722   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.216730   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:02.216736   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:02.216793   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:02.292664   61989 cri.go:89] found id: ""
	I0924 01:06:02.292695   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.292706   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:02.292714   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:02.292780   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:02.349447   61989 cri.go:89] found id: ""
	I0924 01:06:02.349470   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.349481   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:02.349487   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:02.349557   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:02.390491   61989 cri.go:89] found id: ""
	I0924 01:06:02.390514   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.390535   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:02.390543   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:02.390597   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:02.439330   61989 cri.go:89] found id: ""
	I0924 01:06:02.439355   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.439366   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:02.439373   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:02.439432   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:02.476400   61989 cri.go:89] found id: ""
	I0924 01:06:02.476431   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.476439   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:02.476445   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:02.476501   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:02.511946   61989 cri.go:89] found id: ""
	I0924 01:06:02.511975   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.511983   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:02.511989   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:02.512036   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:02.547526   61989 cri.go:89] found id: ""
	I0924 01:06:02.547554   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.547561   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:02.547570   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:02.547580   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:02.619784   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:02.619805   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:02.619816   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:02.698597   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:02.698636   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:02.741381   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:02.741419   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:02.797965   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:02.798023   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:05.312059   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:05.326556   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:05.326614   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:05.360973   61989 cri.go:89] found id: ""
	I0924 01:06:05.360999   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.361011   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:05.361018   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:05.361101   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:05.394720   61989 cri.go:89] found id: ""
	I0924 01:06:05.394750   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.394760   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:05.394767   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:05.394831   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:05.432564   61989 cri.go:89] found id: ""
	I0924 01:06:05.432592   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.432603   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:05.432611   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:05.432673   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:05.465424   61989 cri.go:89] found id: ""
	I0924 01:06:05.465467   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.465478   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:05.465484   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:05.465555   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:05.503656   61989 cri.go:89] found id: ""
	I0924 01:06:05.503684   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.503693   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:05.503699   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:05.503752   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:05.538128   61989 cri.go:89] found id: ""
	I0924 01:06:05.538160   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.538171   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:05.538179   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:05.538248   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:05.571310   61989 cri.go:89] found id: ""
	I0924 01:06:05.571336   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.571346   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:05.571353   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:05.571416   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:05.604038   61989 cri.go:89] found id: ""
	I0924 01:06:05.604062   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.604070   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:05.604079   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:05.604094   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:05.657025   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:05.657068   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:05.671457   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:05.671483   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:05.747671   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:05.747701   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:05.747718   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:05.833248   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:05.833285   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:05.212622   61070 pod_ready.go:103] pod "kube-controller-manager-no-preload-674057" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:07.711612   61070 pod_ready.go:103] pod "kube-controller-manager-no-preload-674057" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:05.028130   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:07.527525   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:05.407057   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:07.407341   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:09.906549   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:08.372029   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:08.386497   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:08.386564   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:08.422998   61989 cri.go:89] found id: ""
	I0924 01:06:08.423029   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.423039   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:08.423047   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:08.423095   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:08.457009   61989 cri.go:89] found id: ""
	I0924 01:06:08.457037   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.457047   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:08.457052   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:08.457104   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:08.489694   61989 cri.go:89] found id: ""
	I0924 01:06:08.489728   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.489740   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:08.489750   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:08.489804   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:08.521819   61989 cri.go:89] found id: ""
	I0924 01:06:08.521845   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.521856   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:08.521864   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:08.521922   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:08.556422   61989 cri.go:89] found id: ""
	I0924 01:06:08.556453   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.556465   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:08.556472   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:08.556567   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:08.593802   61989 cri.go:89] found id: ""
	I0924 01:06:08.593828   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.593836   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:08.593842   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:08.593932   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:08.627569   61989 cri.go:89] found id: ""
	I0924 01:06:08.627592   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.627600   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:08.627605   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:08.627653   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:08.664728   61989 cri.go:89] found id: ""
	I0924 01:06:08.664758   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.664769   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:08.664780   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:08.664794   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:08.703546   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:08.703577   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:08.755612   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:08.755649   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:08.769957   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:08.769989   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:08.842732   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:08.842762   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:08.842789   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:10.211942   61070 pod_ready.go:93] pod "kube-controller-manager-no-preload-674057" in "kube-system" namespace has status "Ready":"True"
	I0924 01:06:10.211973   61070 pod_ready.go:82] duration metric: took 7.006623705s for pod "kube-controller-manager-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:10.211986   61070 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-fgmwc" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:10.217219   61070 pod_ready.go:93] pod "kube-proxy-fgmwc" in "kube-system" namespace has status "Ready":"True"
	I0924 01:06:10.217247   61070 pod_ready.go:82] duration metric: took 5.254551ms for pod "kube-proxy-fgmwc" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:10.217260   61070 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:10.221959   61070 pod_ready.go:93] pod "kube-scheduler-no-preload-674057" in "kube-system" namespace has status "Ready":"True"
	I0924 01:06:10.221983   61070 pod_ready.go:82] duration metric: took 4.71607ms for pod "kube-scheduler-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:10.221996   61070 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:12.227911   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:09.527831   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:11.527917   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:14.028599   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:11.907394   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:14.407242   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:11.427424   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:11.440709   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:11.440773   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:11.475537   61989 cri.go:89] found id: ""
	I0924 01:06:11.475564   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.475572   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:11.475577   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:11.475633   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:11.512231   61989 cri.go:89] found id: ""
	I0924 01:06:11.512276   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.512285   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:11.512292   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:11.512365   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:11.549809   61989 cri.go:89] found id: ""
	I0924 01:06:11.549840   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.549852   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:11.549858   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:11.549924   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:11.587451   61989 cri.go:89] found id: ""
	I0924 01:06:11.587481   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.587493   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:11.587500   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:11.587558   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:11.625109   61989 cri.go:89] found id: ""
	I0924 01:06:11.625135   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.625146   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:11.625154   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:11.625213   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:11.660577   61989 cri.go:89] found id: ""
	I0924 01:06:11.660604   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.660616   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:11.660624   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:11.660683   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:11.703527   61989 cri.go:89] found id: ""
	I0924 01:06:11.703557   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.703569   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:11.703577   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:11.703646   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:11.740766   61989 cri.go:89] found id: ""
	I0924 01:06:11.740798   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.740810   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:11.740820   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:11.740836   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:11.803402   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:11.803448   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:11.819144   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:11.819178   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:11.896152   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:11.896173   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:11.896187   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:11.986284   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:11.986340   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:14.523669   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:14.537923   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:14.537990   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:14.576092   61989 cri.go:89] found id: ""
	I0924 01:06:14.576128   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.576140   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:14.576148   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:14.576213   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:14.611985   61989 cri.go:89] found id: ""
	I0924 01:06:14.612020   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.612032   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:14.612039   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:14.612098   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:14.647640   61989 cri.go:89] found id: ""
	I0924 01:06:14.647667   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.647675   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:14.647682   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:14.647746   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:14.685089   61989 cri.go:89] found id: ""
	I0924 01:06:14.685128   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.685141   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:14.685150   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:14.685217   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:14.718694   61989 cri.go:89] found id: ""
	I0924 01:06:14.718729   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.718738   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:14.718745   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:14.718810   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:14.754874   61989 cri.go:89] found id: ""
	I0924 01:06:14.754916   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.754928   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:14.754936   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:14.754993   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:14.789580   61989 cri.go:89] found id: ""
	I0924 01:06:14.789608   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.789617   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:14.789625   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:14.789677   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:14.823173   61989 cri.go:89] found id: ""
	I0924 01:06:14.823201   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.823213   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:14.823224   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:14.823238   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:14.878398   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:14.878431   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:14.892466   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:14.892502   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:14.965978   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:14.966010   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:14.966065   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:15.050557   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:15.050600   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:14.231644   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:16.728219   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:16.029325   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:18.527156   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:16.907014   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:19.406893   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:17.596915   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:17.609585   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:17.609643   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:17.648275   61989 cri.go:89] found id: ""
	I0924 01:06:17.648305   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.648313   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:17.648319   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:17.648447   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:17.681447   61989 cri.go:89] found id: ""
	I0924 01:06:17.681473   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.681484   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:17.681491   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:17.681552   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:17.719202   61989 cri.go:89] found id: ""
	I0924 01:06:17.719226   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.719234   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:17.719240   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:17.719296   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:17.752601   61989 cri.go:89] found id: ""
	I0924 01:06:17.752629   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.752641   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:17.752649   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:17.752700   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:17.789905   61989 cri.go:89] found id: ""
	I0924 01:06:17.789934   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.789945   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:17.789952   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:17.790015   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:17.824174   61989 cri.go:89] found id: ""
	I0924 01:06:17.824205   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.824217   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:17.824237   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:17.824296   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:17.860647   61989 cri.go:89] found id: ""
	I0924 01:06:17.860674   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.860684   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:17.860691   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:17.860750   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:17.896392   61989 cri.go:89] found id: ""
	I0924 01:06:17.896414   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.896423   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:17.896437   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:17.896450   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:17.949230   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:17.949272   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:17.963125   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:17.963183   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:18.035092   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:18.035117   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:18.035134   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:18.117973   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:18.118011   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:20.657044   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:20.669862   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:20.669936   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:20.704672   61989 cri.go:89] found id: ""
	I0924 01:06:20.704703   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.704714   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:20.704722   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:20.704785   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:20.745777   61989 cri.go:89] found id: ""
	I0924 01:06:20.745801   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.745811   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:20.745818   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:20.745879   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:20.779673   61989 cri.go:89] found id: ""
	I0924 01:06:20.779704   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.779740   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:20.779749   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:20.779809   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:20.815959   61989 cri.go:89] found id: ""
	I0924 01:06:20.815983   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.815992   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:20.815998   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:20.816055   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:20.849203   61989 cri.go:89] found id: ""
	I0924 01:06:20.849232   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.849243   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:20.849251   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:20.849319   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:20.884303   61989 cri.go:89] found id: ""
	I0924 01:06:20.884353   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.884365   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:20.884373   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:20.884436   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:20.921217   61989 cri.go:89] found id: ""
	I0924 01:06:20.921242   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.921249   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:20.921255   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:20.921302   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:20.957555   61989 cri.go:89] found id: ""
	I0924 01:06:20.957590   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.957601   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:20.957613   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:20.957628   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:20.972591   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:20.972630   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 01:06:18.728553   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:20.730046   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:23.228040   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:20.527573   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:22.527695   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:21.406963   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:23.907730   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	W0924 01:06:21.046506   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:21.046532   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:21.046547   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:21.129415   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:21.129453   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:21.168899   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:21.168924   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:23.720925   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:23.736893   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:23.736965   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:23.771874   61989 cri.go:89] found id: ""
	I0924 01:06:23.771901   61989 logs.go:276] 0 containers: []
	W0924 01:06:23.771909   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:23.771915   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:23.771976   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:23.806892   61989 cri.go:89] found id: ""
	I0924 01:06:23.806924   61989 logs.go:276] 0 containers: []
	W0924 01:06:23.806936   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:23.806943   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:23.806999   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:23.843661   61989 cri.go:89] found id: ""
	I0924 01:06:23.843686   61989 logs.go:276] 0 containers: []
	W0924 01:06:23.843694   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:23.843700   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:23.843753   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:23.878979   61989 cri.go:89] found id: ""
	I0924 01:06:23.879007   61989 logs.go:276] 0 containers: []
	W0924 01:06:23.879019   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:23.879027   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:23.879086   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:23.913893   61989 cri.go:89] found id: ""
	I0924 01:06:23.913916   61989 logs.go:276] 0 containers: []
	W0924 01:06:23.913925   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:23.913937   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:23.913982   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:23.947932   61989 cri.go:89] found id: ""
	I0924 01:06:23.947961   61989 logs.go:276] 0 containers: []
	W0924 01:06:23.947972   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:23.947980   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:23.948045   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:23.981366   61989 cri.go:89] found id: ""
	I0924 01:06:23.981391   61989 logs.go:276] 0 containers: []
	W0924 01:06:23.981402   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:23.981409   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:23.981467   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:24.014428   61989 cri.go:89] found id: ""
	I0924 01:06:24.014455   61989 logs.go:276] 0 containers: []
	W0924 01:06:24.014463   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:24.014471   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:24.014485   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:24.029585   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:24.029621   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:24.095926   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:24.095955   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:24.095975   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:24.174594   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:24.174635   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:24.213286   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:24.213311   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:25.229785   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:27.729021   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:25.027783   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:27.030450   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:26.406776   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:28.907135   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:26.764740   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:26.777184   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:26.777279   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:26.812704   61989 cri.go:89] found id: ""
	I0924 01:06:26.812735   61989 logs.go:276] 0 containers: []
	W0924 01:06:26.812746   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:26.812753   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:26.812811   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:26.849867   61989 cri.go:89] found id: ""
	I0924 01:06:26.849895   61989 logs.go:276] 0 containers: []
	W0924 01:06:26.849904   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:26.849909   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:26.849958   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:26.882856   61989 cri.go:89] found id: ""
	I0924 01:06:26.882878   61989 logs.go:276] 0 containers: []
	W0924 01:06:26.882885   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:26.882891   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:26.882936   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:26.921063   61989 cri.go:89] found id: ""
	I0924 01:06:26.921085   61989 logs.go:276] 0 containers: []
	W0924 01:06:26.921094   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:26.921100   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:26.921156   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:26.961154   61989 cri.go:89] found id: ""
	I0924 01:06:26.961182   61989 logs.go:276] 0 containers: []
	W0924 01:06:26.961194   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:26.961200   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:26.961257   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:26.994560   61989 cri.go:89] found id: ""
	I0924 01:06:26.994593   61989 logs.go:276] 0 containers: []
	W0924 01:06:26.994603   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:26.994612   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:26.994673   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:27.027967   61989 cri.go:89] found id: ""
	I0924 01:06:27.028013   61989 logs.go:276] 0 containers: []
	W0924 01:06:27.028026   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:27.028033   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:27.028096   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:27.063099   61989 cri.go:89] found id: ""
	I0924 01:06:27.063130   61989 logs.go:276] 0 containers: []
	W0924 01:06:27.063142   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:27.063153   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:27.063166   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:27.116237   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:27.116279   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:27.130785   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:27.130815   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:27.201931   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:27.201954   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:27.201970   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:27.282182   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:27.282217   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:29.825403   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:29.838890   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:29.838989   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:29.873651   61989 cri.go:89] found id: ""
	I0924 01:06:29.873678   61989 logs.go:276] 0 containers: []
	W0924 01:06:29.873690   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:29.873698   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:29.873758   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:29.909894   61989 cri.go:89] found id: ""
	I0924 01:06:29.909916   61989 logs.go:276] 0 containers: []
	W0924 01:06:29.909923   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:29.909929   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:29.909978   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:29.944850   61989 cri.go:89] found id: ""
	I0924 01:06:29.944878   61989 logs.go:276] 0 containers: []
	W0924 01:06:29.944886   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:29.944892   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:29.944945   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:29.981486   61989 cri.go:89] found id: ""
	I0924 01:06:29.981515   61989 logs.go:276] 0 containers: []
	W0924 01:06:29.981524   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:29.981532   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:29.981592   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:30.015138   61989 cri.go:89] found id: ""
	I0924 01:06:30.015165   61989 logs.go:276] 0 containers: []
	W0924 01:06:30.015176   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:30.015184   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:30.015256   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:30.051777   61989 cri.go:89] found id: ""
	I0924 01:06:30.051814   61989 logs.go:276] 0 containers: []
	W0924 01:06:30.051825   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:30.051834   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:30.051898   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:30.085573   61989 cri.go:89] found id: ""
	I0924 01:06:30.085598   61989 logs.go:276] 0 containers: []
	W0924 01:06:30.085607   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:30.085612   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:30.085661   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:30.122518   61989 cri.go:89] found id: ""
	I0924 01:06:30.122551   61989 logs.go:276] 0 containers: []
	W0924 01:06:30.122561   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:30.122570   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:30.122585   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:30.199075   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:30.199118   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:30.238259   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:30.238293   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:30.292145   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:30.292185   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:30.306404   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:30.306431   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:30.373959   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:29.729379   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:32.228691   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:29.527089   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:31.527523   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:34.027357   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:30.907575   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:33.407615   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:32.875041   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:32.888358   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:32.888435   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:32.924466   61989 cri.go:89] found id: ""
	I0924 01:06:32.924499   61989 logs.go:276] 0 containers: []
	W0924 01:06:32.924519   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:32.924528   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:32.924584   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:32.960188   61989 cri.go:89] found id: ""
	I0924 01:06:32.960216   61989 logs.go:276] 0 containers: []
	W0924 01:06:32.960224   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:32.960231   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:32.960282   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:32.997612   61989 cri.go:89] found id: ""
	I0924 01:06:32.997641   61989 logs.go:276] 0 containers: []
	W0924 01:06:32.997649   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:32.997655   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:32.997704   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:33.034282   61989 cri.go:89] found id: ""
	I0924 01:06:33.034310   61989 logs.go:276] 0 containers: []
	W0924 01:06:33.034317   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:33.034325   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:33.034381   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:33.073832   61989 cri.go:89] found id: ""
	I0924 01:06:33.073861   61989 logs.go:276] 0 containers: []
	W0924 01:06:33.073870   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:33.073875   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:33.073959   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:33.107276   61989 cri.go:89] found id: ""
	I0924 01:06:33.107303   61989 logs.go:276] 0 containers: []
	W0924 01:06:33.107314   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:33.107323   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:33.107373   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:33.141062   61989 cri.go:89] found id: ""
	I0924 01:06:33.141091   61989 logs.go:276] 0 containers: []
	W0924 01:06:33.141104   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:33.141112   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:33.141174   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:33.177874   61989 cri.go:89] found id: ""
	I0924 01:06:33.177899   61989 logs.go:276] 0 containers: []
	W0924 01:06:33.177908   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:33.177916   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:33.177927   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:33.228324   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:33.228373   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:33.241324   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:33.241350   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:33.313115   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:33.313139   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:33.313151   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:33.392458   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:33.392512   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:35.932822   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:35.945918   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:35.945987   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:34.727948   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:36.728560   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:36.028536   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:38.527308   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:35.906501   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:37.907165   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:35.984400   61989 cri.go:89] found id: ""
	I0924 01:06:35.984438   61989 logs.go:276] 0 containers: []
	W0924 01:06:35.984448   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:35.984456   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:35.984528   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:36.022208   61989 cri.go:89] found id: ""
	I0924 01:06:36.022235   61989 logs.go:276] 0 containers: []
	W0924 01:06:36.022244   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:36.022252   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:36.022336   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:36.059153   61989 cri.go:89] found id: ""
	I0924 01:06:36.059176   61989 logs.go:276] 0 containers: []
	W0924 01:06:36.059184   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:36.059190   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:36.059247   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:36.094375   61989 cri.go:89] found id: ""
	I0924 01:06:36.094413   61989 logs.go:276] 0 containers: []
	W0924 01:06:36.094425   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:36.094434   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:36.094490   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:36.128662   61989 cri.go:89] found id: ""
	I0924 01:06:36.128691   61989 logs.go:276] 0 containers: []
	W0924 01:06:36.128702   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:36.128710   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:36.128762   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:36.160898   61989 cri.go:89] found id: ""
	I0924 01:06:36.160925   61989 logs.go:276] 0 containers: []
	W0924 01:06:36.160937   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:36.160945   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:36.161010   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:36.194421   61989 cri.go:89] found id: ""
	I0924 01:06:36.194448   61989 logs.go:276] 0 containers: []
	W0924 01:06:36.194460   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:36.194468   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:36.194537   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:36.230448   61989 cri.go:89] found id: ""
	I0924 01:06:36.230477   61989 logs.go:276] 0 containers: []
	W0924 01:06:36.230487   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:36.230498   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:36.230511   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:36.303029   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:36.303053   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:36.303067   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:36.406305   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:36.406338   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:36.444044   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:36.444084   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:36.494829   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:36.494873   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:39.009579   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:39.023867   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:39.023943   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:39.057426   61989 cri.go:89] found id: ""
	I0924 01:06:39.057458   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.057469   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:39.057477   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:39.057539   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:39.091421   61989 cri.go:89] found id: ""
	I0924 01:06:39.091444   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.091453   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:39.091459   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:39.091518   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:39.125407   61989 cri.go:89] found id: ""
	I0924 01:06:39.125437   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.125448   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:39.125455   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:39.125525   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:39.157146   61989 cri.go:89] found id: ""
	I0924 01:06:39.157170   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.157181   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:39.157189   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:39.157248   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:39.189474   61989 cri.go:89] found id: ""
	I0924 01:06:39.189501   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.189511   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:39.189518   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:39.189577   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:39.228034   61989 cri.go:89] found id: ""
	I0924 01:06:39.228063   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.228084   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:39.228099   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:39.228158   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:39.268289   61989 cri.go:89] found id: ""
	I0924 01:06:39.268317   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.268345   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:39.268354   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:39.268431   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:39.304964   61989 cri.go:89] found id: ""
	I0924 01:06:39.304988   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.304996   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:39.305005   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:39.305017   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:39.356193   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:39.356234   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:39.370782   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:39.370807   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:39.442395   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:39.442418   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:39.442429   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:39.518426   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:39.518466   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:38.729606   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:41.228528   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:40.528236   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:43.028285   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:40.407021   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:42.906884   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:44.907822   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:42.059895   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:42.092776   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:42.092837   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:42.128508   61989 cri.go:89] found id: ""
	I0924 01:06:42.128534   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.128555   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:42.128565   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:42.128623   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:42.160961   61989 cri.go:89] found id: ""
	I0924 01:06:42.160989   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.161000   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:42.161008   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:42.161072   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:42.194212   61989 cri.go:89] found id: ""
	I0924 01:06:42.194260   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.194272   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:42.194280   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:42.194342   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:42.229284   61989 cri.go:89] found id: ""
	I0924 01:06:42.229312   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.229323   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:42.229331   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:42.229378   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:42.261952   61989 cri.go:89] found id: ""
	I0924 01:06:42.261986   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.261997   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:42.262010   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:42.262059   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:42.297096   61989 cri.go:89] found id: ""
	I0924 01:06:42.297125   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.297133   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:42.297139   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:42.297185   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:42.333066   61989 cri.go:89] found id: ""
	I0924 01:06:42.333095   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.333106   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:42.333114   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:42.333176   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:42.366798   61989 cri.go:89] found id: ""
	I0924 01:06:42.366829   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.366840   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:42.366852   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:42.366865   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:42.419424   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:42.419466   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:42.433814   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:42.433846   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:42.503817   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:42.503845   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:42.503860   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:42.583249   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:42.583289   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:45.123746   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:45.136292   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:45.136377   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:45.174390   61989 cri.go:89] found id: ""
	I0924 01:06:45.174420   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.174441   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:45.174449   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:45.174539   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:45.212394   61989 cri.go:89] found id: ""
	I0924 01:06:45.212422   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.212433   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:45.212441   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:45.212503   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:45.245831   61989 cri.go:89] found id: ""
	I0924 01:06:45.245853   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.245861   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:45.245867   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:45.245922   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:45.277587   61989 cri.go:89] found id: ""
	I0924 01:06:45.277615   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.277626   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:45.277634   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:45.277692   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:45.309715   61989 cri.go:89] found id: ""
	I0924 01:06:45.309749   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.309760   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:45.309768   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:45.309827   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:45.342799   61989 cri.go:89] found id: ""
	I0924 01:06:45.342831   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.342844   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:45.342853   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:45.342921   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:45.375377   61989 cri.go:89] found id: ""
	I0924 01:06:45.375404   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.375415   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:45.375423   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:45.375484   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:45.415395   61989 cri.go:89] found id: ""
	I0924 01:06:45.415422   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.415432   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:45.415444   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:45.415459   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:45.464381   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:45.464416   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:45.478142   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:45.478168   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:45.551211   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:45.551234   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:45.551244   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:45.635255   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:45.635297   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:43.728645   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:46.227611   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:48.228320   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:45.028650   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:47.528968   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:47.406822   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:49.407790   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:48.173687   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:48.186635   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:48.186710   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:48.219544   61989 cri.go:89] found id: ""
	I0924 01:06:48.219566   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.219574   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:48.219583   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:48.219654   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:48.253594   61989 cri.go:89] found id: ""
	I0924 01:06:48.253618   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.253627   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:48.253634   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:48.253693   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:48.287991   61989 cri.go:89] found id: ""
	I0924 01:06:48.288019   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.288031   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:48.288041   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:48.288100   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:48.320738   61989 cri.go:89] found id: ""
	I0924 01:06:48.320767   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.320779   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:48.320787   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:48.320847   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:48.352197   61989 cri.go:89] found id: ""
	I0924 01:06:48.352225   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.352233   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:48.352243   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:48.352317   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:48.386157   61989 cri.go:89] found id: ""
	I0924 01:06:48.386187   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.386195   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:48.386202   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:48.386250   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:48.422372   61989 cri.go:89] found id: ""
	I0924 01:06:48.422398   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.422407   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:48.422413   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:48.422463   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:48.464007   61989 cri.go:89] found id: ""
	I0924 01:06:48.464032   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.464043   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:48.464054   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:48.464072   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:48.520533   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:48.520570   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:48.594453   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:48.594489   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:48.607309   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:48.607336   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:48.674078   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:48.674102   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:48.674117   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:50.740093   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:53.228567   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:50.028640   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:52.527656   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:51.906378   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:53.906887   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:51.256855   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:51.270305   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:51.270378   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:51.303450   61989 cri.go:89] found id: ""
	I0924 01:06:51.303487   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.303499   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:51.303508   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:51.303564   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:51.336959   61989 cri.go:89] found id: ""
	I0924 01:06:51.336987   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.337003   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:51.337010   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:51.337072   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:51.369210   61989 cri.go:89] found id: ""
	I0924 01:06:51.369239   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.369249   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:51.369260   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:51.369339   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:51.403595   61989 cri.go:89] found id: ""
	I0924 01:06:51.403645   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.403658   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:51.403666   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:51.403723   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:51.445459   61989 cri.go:89] found id: ""
	I0924 01:06:51.445493   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.445503   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:51.445510   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:51.445574   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:51.477615   61989 cri.go:89] found id: ""
	I0924 01:06:51.477642   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.477653   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:51.477660   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:51.477722   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:51.509737   61989 cri.go:89] found id: ""
	I0924 01:06:51.509766   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.509784   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:51.509792   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:51.509856   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:51.546451   61989 cri.go:89] found id: ""
	I0924 01:06:51.546479   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.546489   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:51.546501   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:51.546515   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:51.600277   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:51.600315   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:51.613403   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:51.613434   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:51.691645   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:51.691669   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:51.691688   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:51.772276   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:51.772312   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:54.313491   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:54.328265   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:54.328374   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:54.368091   61989 cri.go:89] found id: ""
	I0924 01:06:54.368117   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.368126   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:54.368131   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:54.368183   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:54.408272   61989 cri.go:89] found id: ""
	I0924 01:06:54.408300   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.408310   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:54.408318   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:54.408409   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:54.460467   61989 cri.go:89] found id: ""
	I0924 01:06:54.460489   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.460499   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:54.460506   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:54.460564   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:54.493310   61989 cri.go:89] found id: ""
	I0924 01:06:54.493334   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.493343   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:54.493349   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:54.493401   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:54.526772   61989 cri.go:89] found id: ""
	I0924 01:06:54.526799   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.526809   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:54.526817   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:54.526880   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:54.562235   61989 cri.go:89] found id: ""
	I0924 01:06:54.562264   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.562274   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:54.562283   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:54.562345   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:54.597755   61989 cri.go:89] found id: ""
	I0924 01:06:54.597784   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.597794   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:54.597803   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:54.597851   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:54.632225   61989 cri.go:89] found id: ""
	I0924 01:06:54.632282   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.632295   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:54.632305   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:54.632321   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:54.683849   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:54.683887   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:54.697395   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:54.697425   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:54.767577   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:54.767598   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:54.767609   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:54.842619   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:54.842655   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:55.728756   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:58.228520   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:54.528783   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:57.028039   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:59.028234   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:55.907673   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:57.907858   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:57.381394   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:57.394078   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:57.394147   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:57.431241   61989 cri.go:89] found id: ""
	I0924 01:06:57.431266   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.431278   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:57.431284   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:57.431352   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:57.468954   61989 cri.go:89] found id: ""
	I0924 01:06:57.468983   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.468994   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:57.469001   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:57.469060   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:57.503518   61989 cri.go:89] found id: ""
	I0924 01:06:57.503550   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.503562   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:57.503570   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:57.503618   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:57.540432   61989 cri.go:89] found id: ""
	I0924 01:06:57.540464   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.540475   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:57.540483   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:57.540548   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:57.574142   61989 cri.go:89] found id: ""
	I0924 01:06:57.574175   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.574187   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:57.574195   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:57.574264   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:57.608505   61989 cri.go:89] found id: ""
	I0924 01:06:57.608528   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.608537   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:57.608543   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:57.608589   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:57.644273   61989 cri.go:89] found id: ""
	I0924 01:06:57.644305   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.644317   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:57.644344   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:57.644409   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:57.682023   61989 cri.go:89] found id: ""
	I0924 01:06:57.682050   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.682060   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:57.682072   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:57.682086   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:57.732537   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:57.732570   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:57.746632   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:57.746663   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:57.813904   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:57.813927   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:57.813947   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:57.891947   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:57.891992   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:00.432035   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:00.444886   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:00.444966   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:00.482653   61989 cri.go:89] found id: ""
	I0924 01:07:00.482683   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.482694   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:00.482702   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:00.482754   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:00.516404   61989 cri.go:89] found id: ""
	I0924 01:07:00.516441   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.516452   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:00.516463   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:00.516527   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:00.552938   61989 cri.go:89] found id: ""
	I0924 01:07:00.552963   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.552971   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:00.552977   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:00.553043   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:00.589143   61989 cri.go:89] found id: ""
	I0924 01:07:00.589170   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.589178   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:00.589184   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:00.589235   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:00.625023   61989 cri.go:89] found id: ""
	I0924 01:07:00.625047   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.625059   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:00.625066   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:00.625127   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:00.662904   61989 cri.go:89] found id: ""
	I0924 01:07:00.662936   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.662948   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:00.662959   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:00.663022   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:00.702892   61989 cri.go:89] found id: ""
	I0924 01:07:00.702921   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.702932   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:00.702938   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:00.702988   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:00.737010   61989 cri.go:89] found id: ""
	I0924 01:07:00.737039   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.737050   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:00.737061   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:00.737075   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:00.788093   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:00.788132   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:00.801354   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:00.801382   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:00.866830   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:00.866862   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:00.866878   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:00.950034   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:00.950076   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:00.728279   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:03.227980   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:01.527849   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:04.027729   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:00.406445   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:02.407048   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:04.907569   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:03.492773   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:03.506158   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:03.506224   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:03.542369   61989 cri.go:89] found id: ""
	I0924 01:07:03.542397   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.542408   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:03.542416   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:03.542473   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:03.575019   61989 cri.go:89] found id: ""
	I0924 01:07:03.575046   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.575055   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:03.575060   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:03.575103   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:03.608576   61989 cri.go:89] found id: ""
	I0924 01:07:03.608603   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.608612   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:03.608619   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:03.608684   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:03.642359   61989 cri.go:89] found id: ""
	I0924 01:07:03.642389   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.642400   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:03.642407   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:03.642463   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:03.678192   61989 cri.go:89] found id: ""
	I0924 01:07:03.678216   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.678223   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:03.678229   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:03.678285   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:03.711773   61989 cri.go:89] found id: ""
	I0924 01:07:03.711795   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.711803   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:03.711809   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:03.711856   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:03.747792   61989 cri.go:89] found id: ""
	I0924 01:07:03.747819   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.747830   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:03.747838   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:03.747901   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:03.783284   61989 cri.go:89] found id: ""
	I0924 01:07:03.783312   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.783320   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:03.783331   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:03.783349   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:03.838704   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:03.838745   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:03.852650   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:03.852675   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:03.922474   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:03.922499   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:03.922511   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:03.997349   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:03.997388   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:05.228357   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:07.228789   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:06.028604   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:08.527156   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:06.908041   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:09.406803   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:06.537182   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:06.549745   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:06.549833   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:06.587879   61989 cri.go:89] found id: ""
	I0924 01:07:06.587910   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.587922   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:06.587930   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:06.587984   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:06.623419   61989 cri.go:89] found id: ""
	I0924 01:07:06.623447   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.623456   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:06.623462   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:06.623542   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:06.659228   61989 cri.go:89] found id: ""
	I0924 01:07:06.659260   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.659272   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:06.659280   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:06.659341   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:06.693300   61989 cri.go:89] found id: ""
	I0924 01:07:06.693330   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.693341   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:06.693349   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:06.693399   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:06.726237   61989 cri.go:89] found id: ""
	I0924 01:07:06.726267   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.726278   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:06.726286   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:06.726342   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:06.760627   61989 cri.go:89] found id: ""
	I0924 01:07:06.760659   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.760670   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:06.760677   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:06.760745   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:06.796029   61989 cri.go:89] found id: ""
	I0924 01:07:06.796062   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.796073   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:06.796081   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:06.796136   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:06.830197   61989 cri.go:89] found id: ""
	I0924 01:07:06.830230   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.830241   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:06.830251   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:06.830265   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:06.869055   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:06.869087   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:06.923840   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:06.923888   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:06.937510   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:06.937549   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:07.011461   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:07.011482   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:07.011496   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:09.591186   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:09.603900   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:09.603970   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:09.639003   61989 cri.go:89] found id: ""
	I0924 01:07:09.639035   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.639046   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:09.639055   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:09.639111   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:09.676494   61989 cri.go:89] found id: ""
	I0924 01:07:09.676528   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.676539   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:09.676547   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:09.676616   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:09.713080   61989 cri.go:89] found id: ""
	I0924 01:07:09.713103   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.713111   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:09.713117   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:09.713174   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:09.748425   61989 cri.go:89] found id: ""
	I0924 01:07:09.748449   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.748458   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:09.748465   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:09.748521   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:09.782526   61989 cri.go:89] found id: ""
	I0924 01:07:09.782559   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.782576   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:09.782584   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:09.782647   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:09.819137   61989 cri.go:89] found id: ""
	I0924 01:07:09.819159   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.819167   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:09.819173   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:09.819256   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:09.852953   61989 cri.go:89] found id: ""
	I0924 01:07:09.852976   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.852984   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:09.852989   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:09.853083   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:09.887254   61989 cri.go:89] found id: ""
	I0924 01:07:09.887282   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.887293   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:09.887304   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:09.887318   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:09.940029   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:09.940069   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:09.954298   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:09.954331   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:10.028926   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:10.028947   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:10.028957   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:10.116722   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:10.116761   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:09.728996   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:12.228342   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:10.527637   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:12.528324   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:11.410452   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:13.906451   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:12.654245   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:12.668635   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:12.668695   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:12.711575   61989 cri.go:89] found id: ""
	I0924 01:07:12.711601   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.711626   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:12.711632   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:12.711682   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:12.746104   61989 cri.go:89] found id: ""
	I0924 01:07:12.746131   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.746141   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:12.746149   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:12.746210   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:12.780229   61989 cri.go:89] found id: ""
	I0924 01:07:12.780260   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.780295   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:12.780303   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:12.780384   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:12.812968   61989 cri.go:89] found id: ""
	I0924 01:07:12.812998   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.813010   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:12.813024   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:12.813090   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:12.844212   61989 cri.go:89] found id: ""
	I0924 01:07:12.844241   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.844253   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:12.844260   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:12.844343   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:12.878662   61989 cri.go:89] found id: ""
	I0924 01:07:12.878690   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.878700   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:12.878707   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:12.878765   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:12.912782   61989 cri.go:89] found id: ""
	I0924 01:07:12.912805   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.912815   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:12.912822   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:12.912883   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:12.945694   61989 cri.go:89] found id: ""
	I0924 01:07:12.945726   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.945736   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:12.945747   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:12.945761   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:12.994841   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:12.994877   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:13.009582   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:13.009624   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:13.081972   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:13.081999   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:13.082017   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:13.162383   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:13.162420   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:15.704586   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:15.717608   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:15.717677   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:15.751794   61989 cri.go:89] found id: ""
	I0924 01:07:15.751829   61989 logs.go:276] 0 containers: []
	W0924 01:07:15.751840   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:15.751848   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:15.751916   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:15.791691   61989 cri.go:89] found id: ""
	I0924 01:07:15.791723   61989 logs.go:276] 0 containers: []
	W0924 01:07:15.791734   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:15.791742   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:15.791805   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:15.827934   61989 cri.go:89] found id: ""
	I0924 01:07:15.827957   61989 logs.go:276] 0 containers: []
	W0924 01:07:15.827965   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:15.827971   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:15.828017   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:15.862489   61989 cri.go:89] found id: ""
	I0924 01:07:15.862518   61989 logs.go:276] 0 containers: []
	W0924 01:07:15.862527   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:15.862532   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:15.862577   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:15.896754   61989 cri.go:89] found id: ""
	I0924 01:07:15.896786   61989 logs.go:276] 0 containers: []
	W0924 01:07:15.896798   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:15.896804   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:15.896857   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:15.934353   61989 cri.go:89] found id: ""
	I0924 01:07:15.934378   61989 logs.go:276] 0 containers: []
	W0924 01:07:15.934386   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:15.934392   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:15.934436   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:15.969204   61989 cri.go:89] found id: ""
	I0924 01:07:15.969237   61989 logs.go:276] 0 containers: []
	W0924 01:07:15.969246   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:15.969251   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:15.969309   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:14.228949   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:16.728382   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:15.027681   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:17.027847   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:15.907872   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:18.407563   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:16.008733   61989 cri.go:89] found id: ""
	I0924 01:07:16.008767   61989 logs.go:276] 0 containers: []
	W0924 01:07:16.008780   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:16.008792   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:16.008807   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:16.046993   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:16.047024   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:16.098768   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:16.098801   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:16.114429   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:16.114472   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:16.187450   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:16.187469   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:16.187489   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:18.767042   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:18.779825   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:18.779899   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:18.815410   61989 cri.go:89] found id: ""
	I0924 01:07:18.815436   61989 logs.go:276] 0 containers: []
	W0924 01:07:18.815447   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:18.815454   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:18.815523   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:18.849837   61989 cri.go:89] found id: ""
	I0924 01:07:18.849862   61989 logs.go:276] 0 containers: []
	W0924 01:07:18.849872   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:18.849880   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:18.849952   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:18.885183   61989 cri.go:89] found id: ""
	I0924 01:07:18.885215   61989 logs.go:276] 0 containers: []
	W0924 01:07:18.885227   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:18.885235   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:18.885314   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:18.922263   61989 cri.go:89] found id: ""
	I0924 01:07:18.922293   61989 logs.go:276] 0 containers: []
	W0924 01:07:18.922305   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:18.922312   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:18.922378   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:18.957235   61989 cri.go:89] found id: ""
	I0924 01:07:18.957263   61989 logs.go:276] 0 containers: []
	W0924 01:07:18.957272   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:18.957278   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:18.957331   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:18.989846   61989 cri.go:89] found id: ""
	I0924 01:07:18.989870   61989 logs.go:276] 0 containers: []
	W0924 01:07:18.989878   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:18.989884   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:18.989931   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:19.027264   61989 cri.go:89] found id: ""
	I0924 01:07:19.027298   61989 logs.go:276] 0 containers: []
	W0924 01:07:19.027308   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:19.027315   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:19.027373   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:19.065902   61989 cri.go:89] found id: ""
	I0924 01:07:19.065925   61989 logs.go:276] 0 containers: []
	W0924 01:07:19.065934   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:19.065944   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:19.065959   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:19.115515   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:19.115550   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:19.129761   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:19.129787   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:19.200299   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:19.200319   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:19.200351   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:19.282308   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:19.282360   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:18.732314   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:21.227773   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:23.228957   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:19.528117   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:22.028965   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:20.906860   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:23.407404   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:21.819442   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:21.834106   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:21.834165   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:21.866953   61989 cri.go:89] found id: ""
	I0924 01:07:21.866988   61989 logs.go:276] 0 containers: []
	W0924 01:07:21.866999   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:21.867008   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:21.867085   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:21.902561   61989 cri.go:89] found id: ""
	I0924 01:07:21.902637   61989 logs.go:276] 0 containers: []
	W0924 01:07:21.902654   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:21.902663   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:21.902729   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:21.936883   61989 cri.go:89] found id: ""
	I0924 01:07:21.936926   61989 logs.go:276] 0 containers: []
	W0924 01:07:21.936937   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:21.936943   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:21.936995   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:21.975375   61989 cri.go:89] found id: ""
	I0924 01:07:21.975402   61989 logs.go:276] 0 containers: []
	W0924 01:07:21.975411   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:21.975417   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:21.975465   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:22.012782   61989 cri.go:89] found id: ""
	I0924 01:07:22.012811   61989 logs.go:276] 0 containers: []
	W0924 01:07:22.012822   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:22.012830   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:22.012890   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:22.049344   61989 cri.go:89] found id: ""
	I0924 01:07:22.049370   61989 logs.go:276] 0 containers: []
	W0924 01:07:22.049379   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:22.049385   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:22.049442   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:22.088187   61989 cri.go:89] found id: ""
	I0924 01:07:22.088219   61989 logs.go:276] 0 containers: []
	W0924 01:07:22.088230   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:22.088239   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:22.088324   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:22.123357   61989 cri.go:89] found id: ""
	I0924 01:07:22.123386   61989 logs.go:276] 0 containers: []
	W0924 01:07:22.123397   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:22.123408   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:22.123423   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:22.176794   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:22.176828   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:22.192550   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:22.192591   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:22.263854   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:22.263881   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:22.263898   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:22.341735   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:22.341778   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:24.879834   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:24.892429   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:24.892504   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:24.926600   61989 cri.go:89] found id: ""
	I0924 01:07:24.926629   61989 logs.go:276] 0 containers: []
	W0924 01:07:24.926636   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:24.926642   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:24.926689   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:24.960370   61989 cri.go:89] found id: ""
	I0924 01:07:24.960399   61989 logs.go:276] 0 containers: []
	W0924 01:07:24.960408   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:24.960415   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:24.960471   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:24.993503   61989 cri.go:89] found id: ""
	I0924 01:07:24.993532   61989 logs.go:276] 0 containers: []
	W0924 01:07:24.993542   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:24.993549   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:24.993611   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:25.028027   61989 cri.go:89] found id: ""
	I0924 01:07:25.028055   61989 logs.go:276] 0 containers: []
	W0924 01:07:25.028065   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:25.028073   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:25.028129   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:25.062947   61989 cri.go:89] found id: ""
	I0924 01:07:25.062981   61989 logs.go:276] 0 containers: []
	W0924 01:07:25.062999   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:25.063009   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:25.063077   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:25.098895   61989 cri.go:89] found id: ""
	I0924 01:07:25.098927   61989 logs.go:276] 0 containers: []
	W0924 01:07:25.098939   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:25.098946   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:25.098996   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:25.132786   61989 cri.go:89] found id: ""
	I0924 01:07:25.132814   61989 logs.go:276] 0 containers: []
	W0924 01:07:25.132824   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:25.132830   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:25.132882   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:25.167603   61989 cri.go:89] found id: ""
	I0924 01:07:25.167634   61989 logs.go:276] 0 containers: []
	W0924 01:07:25.167644   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:25.167656   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:25.167671   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:25.220265   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:25.220303   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:25.234840   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:25.234884   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:25.307459   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:25.307485   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:25.307499   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:25.386496   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:25.386537   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:25.229188   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:27.728978   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:24.531829   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:27.027182   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:29.029000   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:25.907018   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:28.406555   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:27.926064   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:27.939398   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:27.939480   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:27.976184   61989 cri.go:89] found id: ""
	I0924 01:07:27.976215   61989 logs.go:276] 0 containers: []
	W0924 01:07:27.976256   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:27.976265   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:27.976348   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:28.009389   61989 cri.go:89] found id: ""
	I0924 01:07:28.009419   61989 logs.go:276] 0 containers: []
	W0924 01:07:28.009431   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:28.009438   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:28.009501   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:28.045562   61989 cri.go:89] found id: ""
	I0924 01:07:28.045594   61989 logs.go:276] 0 containers: []
	W0924 01:07:28.045605   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:28.045613   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:28.045677   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:28.085318   61989 cri.go:89] found id: ""
	I0924 01:07:28.085345   61989 logs.go:276] 0 containers: []
	W0924 01:07:28.085357   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:28.085364   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:28.085419   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:28.119582   61989 cri.go:89] found id: ""
	I0924 01:07:28.119607   61989 logs.go:276] 0 containers: []
	W0924 01:07:28.119617   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:28.119626   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:28.119690   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:28.151445   61989 cri.go:89] found id: ""
	I0924 01:07:28.151493   61989 logs.go:276] 0 containers: []
	W0924 01:07:28.151505   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:28.151513   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:28.151578   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:28.185966   61989 cri.go:89] found id: ""
	I0924 01:07:28.185997   61989 logs.go:276] 0 containers: []
	W0924 01:07:28.186009   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:28.186016   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:28.186078   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:28.219012   61989 cri.go:89] found id: ""
	I0924 01:07:28.219037   61989 logs.go:276] 0 containers: []
	W0924 01:07:28.219044   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:28.219052   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:28.219089   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:28.272186   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:28.272222   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:28.286346   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:28.286383   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:28.370949   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:28.370975   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:28.370985   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:28.453740   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:28.453775   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:30.229141   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:32.728919   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:31.527080   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:34.028315   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:30.407040   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:32.407075   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:34.407711   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:30.993536   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:31.006297   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:31.006369   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:31.042081   61989 cri.go:89] found id: ""
	I0924 01:07:31.042114   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.042123   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:31.042129   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:31.042185   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:31.077119   61989 cri.go:89] found id: ""
	I0924 01:07:31.077144   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.077153   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:31.077159   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:31.077208   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:31.110148   61989 cri.go:89] found id: ""
	I0924 01:07:31.110179   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.110187   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:31.110193   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:31.110246   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:31.143551   61989 cri.go:89] found id: ""
	I0924 01:07:31.143578   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.143585   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:31.143591   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:31.143638   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:31.177212   61989 cri.go:89] found id: ""
	I0924 01:07:31.177262   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.177272   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:31.177279   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:31.177329   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:31.209290   61989 cri.go:89] found id: ""
	I0924 01:07:31.209321   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.209332   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:31.209340   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:31.209398   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:31.247299   61989 cri.go:89] found id: ""
	I0924 01:07:31.247334   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.247346   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:31.247355   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:31.247419   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:31.285010   61989 cri.go:89] found id: ""
	I0924 01:07:31.285047   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.285060   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:31.285072   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:31.285087   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:31.323819   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:31.323855   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:31.378348   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:31.378388   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:31.393944   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:31.393983   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:31.464940   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:31.464966   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:31.464978   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:34.042144   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:34.055183   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:34.055268   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:34.103044   61989 cri.go:89] found id: ""
	I0924 01:07:34.103075   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.103086   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:34.103094   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:34.103162   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:34.141379   61989 cri.go:89] found id: ""
	I0924 01:07:34.141412   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.141424   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:34.141432   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:34.141493   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:34.179545   61989 cri.go:89] found id: ""
	I0924 01:07:34.179574   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.179582   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:34.179588   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:34.179655   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:34.217683   61989 cri.go:89] found id: ""
	I0924 01:07:34.217719   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.217739   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:34.217748   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:34.217806   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:34.257597   61989 cri.go:89] found id: ""
	I0924 01:07:34.257630   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.257642   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:34.257651   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:34.257723   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:34.295410   61989 cri.go:89] found id: ""
	I0924 01:07:34.295440   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.295452   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:34.295460   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:34.295523   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:34.331309   61989 cri.go:89] found id: ""
	I0924 01:07:34.331340   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.331350   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:34.331358   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:34.331460   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:34.367549   61989 cri.go:89] found id: ""
	I0924 01:07:34.367580   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.367590   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:34.367601   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:34.367615   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:34.421785   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:34.421823   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:34.435162   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:34.435198   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:34.504051   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:34.504073   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:34.504090   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:34.582343   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:34.582384   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:35.229391   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:37.229522   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:36.527047   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:38.527472   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:36.906974   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:38.907529   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:37.124727   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:37.139374   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:37.139431   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:37.176474   61989 cri.go:89] found id: ""
	I0924 01:07:37.176500   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.176510   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:37.176515   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:37.176560   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:37.209944   61989 cri.go:89] found id: ""
	I0924 01:07:37.209971   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.209983   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:37.209990   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:37.210055   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:37.242894   61989 cri.go:89] found id: ""
	I0924 01:07:37.242923   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.242933   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:37.242941   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:37.242996   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:37.276517   61989 cri.go:89] found id: ""
	I0924 01:07:37.276547   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.276558   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:37.276566   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:37.276626   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:37.310169   61989 cri.go:89] found id: ""
	I0924 01:07:37.310196   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.310207   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:37.310214   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:37.310282   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:37.342992   61989 cri.go:89] found id: ""
	I0924 01:07:37.343019   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.343027   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:37.343035   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:37.343088   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:37.375024   61989 cri.go:89] found id: ""
	I0924 01:07:37.375051   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.375062   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:37.375069   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:37.375137   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:37.409736   61989 cri.go:89] found id: ""
	I0924 01:07:37.409761   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.409768   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:37.409776   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:37.409787   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:37.474744   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:37.474767   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:37.474783   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:37.551479   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:37.551515   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:37.590597   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:37.590632   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:37.642781   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:37.642820   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:40.156480   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:40.171002   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:40.171079   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:40.207383   61989 cri.go:89] found id: ""
	I0924 01:07:40.207410   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.207418   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:40.207424   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:40.207474   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:40.245535   61989 cri.go:89] found id: ""
	I0924 01:07:40.245560   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.245568   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:40.245574   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:40.245620   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:40.283858   61989 cri.go:89] found id: ""
	I0924 01:07:40.283888   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.283900   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:40.283909   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:40.283982   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:40.320527   61989 cri.go:89] found id: ""
	I0924 01:07:40.320555   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.320566   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:40.320575   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:40.320633   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:40.354364   61989 cri.go:89] found id: ""
	I0924 01:07:40.354390   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.354397   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:40.354403   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:40.354473   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:40.388407   61989 cri.go:89] found id: ""
	I0924 01:07:40.388431   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.388439   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:40.388444   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:40.388512   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:40.423809   61989 cri.go:89] found id: ""
	I0924 01:07:40.423838   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.423847   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:40.423853   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:40.423908   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:40.459160   61989 cri.go:89] found id: ""
	I0924 01:07:40.459188   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.459199   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:40.459210   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:40.459223   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:40.530418   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:40.530456   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:40.551644   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:40.551683   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:40.634564   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:40.634587   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:40.634599   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:40.717897   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:40.717934   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:39.728642   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:41.728725   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:40.528294   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:43.028364   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:41.406835   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:43.907015   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:43.257992   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:43.272134   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:43.272204   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:43.306747   61989 cri.go:89] found id: ""
	I0924 01:07:43.306775   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.306797   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:43.306806   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:43.306923   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:43.342922   61989 cri.go:89] found id: ""
	I0924 01:07:43.342954   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.342963   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:43.342974   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:43.343028   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:43.378666   61989 cri.go:89] found id: ""
	I0924 01:07:43.378694   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.378703   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:43.378709   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:43.378760   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:43.414348   61989 cri.go:89] found id: ""
	I0924 01:07:43.414376   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.414387   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:43.414395   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:43.414457   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:43.447687   61989 cri.go:89] found id: ""
	I0924 01:07:43.447718   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.447728   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:43.447735   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:43.447804   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:43.482166   61989 cri.go:89] found id: ""
	I0924 01:07:43.482195   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.482205   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:43.482211   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:43.482275   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:43.518112   61989 cri.go:89] found id: ""
	I0924 01:07:43.518146   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.518159   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:43.518167   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:43.518231   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:43.553853   61989 cri.go:89] found id: ""
	I0924 01:07:43.553875   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.553883   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:43.553891   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:43.553902   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:43.603410   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:43.603445   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:43.616413   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:43.616438   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:43.685077   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:43.685101   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:43.685113   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:43.760758   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:43.760803   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:43.729237   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:46.228084   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:48.228503   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:45.527095   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:47.529540   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:46.407150   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:48.407253   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:46.300532   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:46.315982   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:46.316050   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:46.356523   61989 cri.go:89] found id: ""
	I0924 01:07:46.356554   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.356565   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:46.356573   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:46.356633   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:46.405399   61989 cri.go:89] found id: ""
	I0924 01:07:46.405429   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.405439   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:46.405447   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:46.405512   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:46.454819   61989 cri.go:89] found id: ""
	I0924 01:07:46.454844   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.454853   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:46.454858   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:46.454918   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:46.499094   61989 cri.go:89] found id: ""
	I0924 01:07:46.499123   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.499134   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:46.499142   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:46.499196   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:46.532976   61989 cri.go:89] found id: ""
	I0924 01:07:46.533006   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.533017   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:46.533025   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:46.533083   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:46.565488   61989 cri.go:89] found id: ""
	I0924 01:07:46.565523   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.565534   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:46.565546   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:46.565610   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:46.598457   61989 cri.go:89] found id: ""
	I0924 01:07:46.598486   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.598496   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:46.598503   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:46.598551   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:46.631892   61989 cri.go:89] found id: ""
	I0924 01:07:46.631920   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.631931   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:46.631941   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:46.631956   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:46.709966   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:46.710013   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:46.749154   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:46.749184   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:46.798192   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:46.798228   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:46.811902   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:46.811951   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:46.885878   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:49.386775   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:49.399324   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:49.399383   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:49.437061   61989 cri.go:89] found id: ""
	I0924 01:07:49.437092   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.437104   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:49.437111   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:49.437160   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:49.470882   61989 cri.go:89] found id: ""
	I0924 01:07:49.470908   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.470919   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:49.470927   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:49.470989   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:49.506894   61989 cri.go:89] found id: ""
	I0924 01:07:49.506926   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.506938   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:49.506947   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:49.507018   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:49.540768   61989 cri.go:89] found id: ""
	I0924 01:07:49.540800   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.540813   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:49.540822   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:49.540888   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:49.576486   61989 cri.go:89] found id: ""
	I0924 01:07:49.576515   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.576523   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:49.576530   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:49.576579   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:49.612456   61989 cri.go:89] found id: ""
	I0924 01:07:49.612479   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.612487   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:49.612495   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:49.612542   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:49.646085   61989 cri.go:89] found id: ""
	I0924 01:07:49.646118   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.646127   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:49.646132   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:49.646178   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:49.682538   61989 cri.go:89] found id: ""
	I0924 01:07:49.682565   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.682574   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:49.682583   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:49.682594   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:49.721791   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:49.721817   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:49.774842   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:49.774889   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:49.789082   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:49.789129   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:49.866437   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:49.866464   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:49.866478   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:50.727581   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:52.729391   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:50.027396   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:52.028176   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:50.407654   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:52.908118   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:52.445166   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:52.459060   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:52.459126   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:52.496521   61989 cri.go:89] found id: ""
	I0924 01:07:52.496550   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.496562   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:52.496571   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:52.496652   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:52.533575   61989 cri.go:89] found id: ""
	I0924 01:07:52.533600   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.533608   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:52.533615   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:52.533693   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:52.571666   61989 cri.go:89] found id: ""
	I0924 01:07:52.571693   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.571703   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:52.571710   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:52.571758   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:52.603929   61989 cri.go:89] found id: ""
	I0924 01:07:52.603957   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.603968   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:52.603976   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:52.604034   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:52.635581   61989 cri.go:89] found id: ""
	I0924 01:07:52.635607   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.635614   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:52.635620   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:52.635669   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:52.673865   61989 cri.go:89] found id: ""
	I0924 01:07:52.673889   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.673897   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:52.673903   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:52.673953   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:52.709885   61989 cri.go:89] found id: ""
	I0924 01:07:52.709910   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.709918   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:52.709925   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:52.709986   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:52.746409   61989 cri.go:89] found id: ""
	I0924 01:07:52.746439   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.746450   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:52.746461   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:52.746475   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:52.798020   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:52.798054   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:52.811940   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:52.811967   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:52.888091   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:52.888114   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:52.888129   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:52.968955   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:52.969000   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:55.507204   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:55.520581   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:55.520657   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:55.555772   61989 cri.go:89] found id: ""
	I0924 01:07:55.555809   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.555821   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:55.555828   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:55.555880   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:55.593765   61989 cri.go:89] found id: ""
	I0924 01:07:55.593791   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.593802   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:55.593808   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:55.593866   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:55.630292   61989 cri.go:89] found id: ""
	I0924 01:07:55.630325   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.630337   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:55.630344   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:55.630408   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:55.665703   61989 cri.go:89] found id: ""
	I0924 01:07:55.665730   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.665741   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:55.665748   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:55.665807   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:55.701911   61989 cri.go:89] found id: ""
	I0924 01:07:55.701938   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.701949   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:55.701957   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:55.702020   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:55.734343   61989 cri.go:89] found id: ""
	I0924 01:07:55.734373   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.734385   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:55.734394   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:55.734460   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:55.768606   61989 cri.go:89] found id: ""
	I0924 01:07:55.768633   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.768645   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:55.768653   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:55.768716   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:55.800720   61989 cri.go:89] found id: ""
	I0924 01:07:55.800747   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.800757   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:55.800768   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:55.800782   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:55.851702   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:55.851737   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:55.865657   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:55.865687   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:55.940175   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:55.940197   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:55.940207   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:55.227954   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:57.228969   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:54.528417   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:56.529326   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:59.027653   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:55.407038   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:57.906886   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:56.015832   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:56.015870   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:58.557571   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:58.572208   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:58.572274   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:58.605081   61989 cri.go:89] found id: ""
	I0924 01:07:58.605109   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.605121   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:58.605128   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:58.605185   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:58.641518   61989 cri.go:89] found id: ""
	I0924 01:07:58.641548   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.641559   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:58.641566   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:58.641617   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:58.680623   61989 cri.go:89] found id: ""
	I0924 01:07:58.680653   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.680664   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:58.680675   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:58.680735   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:58.713658   61989 cri.go:89] found id: ""
	I0924 01:07:58.713684   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.713693   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:58.713700   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:58.713754   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:58.746264   61989 cri.go:89] found id: ""
	I0924 01:07:58.746298   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.746307   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:58.746313   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:58.746358   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:58.779812   61989 cri.go:89] found id: ""
	I0924 01:07:58.779846   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.779912   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:58.779924   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:58.779984   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:58.813203   61989 cri.go:89] found id: ""
	I0924 01:07:58.813236   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.813245   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:58.813252   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:58.813303   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:58.845872   61989 cri.go:89] found id: ""
	I0924 01:07:58.845898   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.845906   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:58.845915   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:58.845925   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:58.897480   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:58.897515   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:58.912904   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:58.912936   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:58.982882   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:58.982908   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:58.982921   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:59.058495   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:59.058535   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:59.729215   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:02.228358   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:01.028678   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:03.527682   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:00.407897   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:02.907608   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:04.907717   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:01.596672   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:01.609550   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:01.609625   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:01.648819   61989 cri.go:89] found id: ""
	I0924 01:08:01.648847   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.648857   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:01.648864   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:01.649000   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:01.685419   61989 cri.go:89] found id: ""
	I0924 01:08:01.685450   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.685458   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:01.685464   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:01.685533   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:01.720426   61989 cri.go:89] found id: ""
	I0924 01:08:01.720455   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.720464   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:01.720473   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:01.720537   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:01.755292   61989 cri.go:89] found id: ""
	I0924 01:08:01.755316   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.755324   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:01.755331   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:01.755398   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:01.788673   61989 cri.go:89] found id: ""
	I0924 01:08:01.788703   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.788713   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:01.788721   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:01.788789   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:01.824724   61989 cri.go:89] found id: ""
	I0924 01:08:01.824761   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.824773   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:01.824781   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:01.824838   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:01.858492   61989 cri.go:89] found id: ""
	I0924 01:08:01.858531   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.858542   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:01.858556   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:01.858623   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:01.892135   61989 cri.go:89] found id: ""
	I0924 01:08:01.892167   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.892177   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:01.892192   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:01.892205   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:01.905820   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:01.905849   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:01.977998   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:01.978026   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:01.978039   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:02.060441   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:02.060480   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:02.100029   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:02.100057   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:04.653124   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:04.665726   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:04.665784   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:04.700755   61989 cri.go:89] found id: ""
	I0924 01:08:04.700785   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.700796   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:04.700804   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:04.700858   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:04.736955   61989 cri.go:89] found id: ""
	I0924 01:08:04.736983   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.736992   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:04.736998   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:04.737051   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:04.770940   61989 cri.go:89] found id: ""
	I0924 01:08:04.770969   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.770977   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:04.770983   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:04.771051   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:04.805376   61989 cri.go:89] found id: ""
	I0924 01:08:04.805403   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.805411   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:04.805417   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:04.805471   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:04.840995   61989 cri.go:89] found id: ""
	I0924 01:08:04.841016   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.841024   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:04.841030   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:04.841077   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:04.875418   61989 cri.go:89] found id: ""
	I0924 01:08:04.875449   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.875460   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:04.875468   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:04.875546   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:04.910675   61989 cri.go:89] found id: ""
	I0924 01:08:04.910696   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.910704   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:04.910710   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:04.910764   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:04.945531   61989 cri.go:89] found id: ""
	I0924 01:08:04.945562   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.945570   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:04.945578   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:04.945589   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:04.997696   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:04.997734   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:05.011296   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:05.011329   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:05.087878   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:05.087905   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:05.087919   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:05.164073   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:05.164111   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:04.228985   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:06.734525   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:06.031377   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:08.528160   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:06.908017   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:09.407255   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:07.713496   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:07.726590   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:07.726649   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:07.760050   61989 cri.go:89] found id: ""
	I0924 01:08:07.760081   61989 logs.go:276] 0 containers: []
	W0924 01:08:07.760092   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:07.760100   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:07.760152   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:07.797709   61989 cri.go:89] found id: ""
	I0924 01:08:07.797736   61989 logs.go:276] 0 containers: []
	W0924 01:08:07.797744   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:07.797749   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:07.797803   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:07.836351   61989 cri.go:89] found id: ""
	I0924 01:08:07.836380   61989 logs.go:276] 0 containers: []
	W0924 01:08:07.836391   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:07.836399   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:07.836471   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:07.871133   61989 cri.go:89] found id: ""
	I0924 01:08:07.871159   61989 logs.go:276] 0 containers: []
	W0924 01:08:07.871170   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:07.871178   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:07.871229   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:07.906640   61989 cri.go:89] found id: ""
	I0924 01:08:07.906663   61989 logs.go:276] 0 containers: []
	W0924 01:08:07.906673   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:07.906682   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:07.906741   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:07.940919   61989 cri.go:89] found id: ""
	I0924 01:08:07.940945   61989 logs.go:276] 0 containers: []
	W0924 01:08:07.940953   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:07.940959   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:07.941018   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:07.975533   61989 cri.go:89] found id: ""
	I0924 01:08:07.975562   61989 logs.go:276] 0 containers: []
	W0924 01:08:07.975570   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:07.975576   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:07.975627   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:08.009137   61989 cri.go:89] found id: ""
	I0924 01:08:08.009163   61989 logs.go:276] 0 containers: []
	W0924 01:08:08.009173   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:08.009183   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:08.009196   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:08.065199   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:08.065252   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:08.080159   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:08.080188   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:08.154003   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:08.154025   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:08.154039   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:08.235522   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:08.235561   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:10.774666   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:10.787704   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:10.787775   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:10.822721   61989 cri.go:89] found id: ""
	I0924 01:08:10.822759   61989 logs.go:276] 0 containers: []
	W0924 01:08:10.822770   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:10.822781   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:10.822852   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:10.857113   61989 cri.go:89] found id: ""
	I0924 01:08:10.857138   61989 logs.go:276] 0 containers: []
	W0924 01:08:10.857146   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:10.857152   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:10.857201   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:10.890974   61989 cri.go:89] found id: ""
	I0924 01:08:10.891001   61989 logs.go:276] 0 containers: []
	W0924 01:08:10.891012   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:10.891020   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:10.891086   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:10.929771   61989 cri.go:89] found id: ""
	I0924 01:08:10.929793   61989 logs.go:276] 0 containers: []
	W0924 01:08:10.929800   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:10.929806   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:10.929851   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:10.961988   61989 cri.go:89] found id: ""
	I0924 01:08:10.962015   61989 logs.go:276] 0 containers: []
	W0924 01:08:10.962027   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:10.962035   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:10.962100   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:09.228600   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:11.729142   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:10.528626   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:13.027656   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:11.906981   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:13.907232   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:10.993591   61989 cri.go:89] found id: ""
	I0924 01:08:10.993622   61989 logs.go:276] 0 containers: []
	W0924 01:08:10.993633   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:10.993639   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:10.993691   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:11.032468   61989 cri.go:89] found id: ""
	I0924 01:08:11.032496   61989 logs.go:276] 0 containers: []
	W0924 01:08:11.032506   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:11.032514   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:11.032576   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:11.066900   61989 cri.go:89] found id: ""
	I0924 01:08:11.066924   61989 logs.go:276] 0 containers: []
	W0924 01:08:11.066931   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:11.066939   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:11.066950   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:11.136412   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:11.136443   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:11.136459   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:11.218326   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:11.218361   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:11.260695   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:11.260728   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:11.310102   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:11.310133   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:13.825540   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:13.838208   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:13.838283   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:13.874539   61989 cri.go:89] found id: ""
	I0924 01:08:13.874567   61989 logs.go:276] 0 containers: []
	W0924 01:08:13.874576   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:13.874581   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:13.874628   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:13.911818   61989 cri.go:89] found id: ""
	I0924 01:08:13.911839   61989 logs.go:276] 0 containers: []
	W0924 01:08:13.911846   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:13.911852   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:13.911897   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:13.944766   61989 cri.go:89] found id: ""
	I0924 01:08:13.944789   61989 logs.go:276] 0 containers: []
	W0924 01:08:13.944797   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:13.944802   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:13.944847   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:13.980712   61989 cri.go:89] found id: ""
	I0924 01:08:13.980742   61989 logs.go:276] 0 containers: []
	W0924 01:08:13.980752   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:13.980758   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:13.980817   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:14.016103   61989 cri.go:89] found id: ""
	I0924 01:08:14.016130   61989 logs.go:276] 0 containers: []
	W0924 01:08:14.016138   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:14.016143   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:14.016192   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:14.051884   61989 cri.go:89] found id: ""
	I0924 01:08:14.051929   61989 logs.go:276] 0 containers: []
	W0924 01:08:14.051943   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:14.051954   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:14.052046   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:14.088928   61989 cri.go:89] found id: ""
	I0924 01:08:14.088954   61989 logs.go:276] 0 containers: []
	W0924 01:08:14.088964   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:14.088970   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:14.089020   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:14.123057   61989 cri.go:89] found id: ""
	I0924 01:08:14.123083   61989 logs.go:276] 0 containers: []
	W0924 01:08:14.123091   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:14.123099   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:14.123112   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:14.174249   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:14.174287   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:14.188409   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:14.188442   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:14.258906   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:14.258932   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:14.258942   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:14.340891   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:14.340928   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:14.229459   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:16.728316   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:15.028158   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:17.527615   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:15.907490   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:17.907845   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:19.901512   61323 pod_ready.go:82] duration metric: took 4m0.001092501s for pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace to be "Ready" ...
	E0924 01:08:19.901552   61323 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace to be "Ready" (will not retry!)
	I0924 01:08:19.901576   61323 pod_ready.go:39] duration metric: took 4m10.04955154s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:08:19.901606   61323 kubeadm.go:597] duration metric: took 4m18.184472182s to restartPrimaryControlPlane
	W0924 01:08:19.901701   61323 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0924 01:08:19.901736   61323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0924 01:08:16.877728   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:16.890548   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:16.890617   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:16.924414   61989 cri.go:89] found id: ""
	I0924 01:08:16.924439   61989 logs.go:276] 0 containers: []
	W0924 01:08:16.924451   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:16.924458   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:16.924510   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:16.960295   61989 cri.go:89] found id: ""
	I0924 01:08:16.960323   61989 logs.go:276] 0 containers: []
	W0924 01:08:16.960344   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:16.960352   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:16.960405   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:16.993171   61989 cri.go:89] found id: ""
	I0924 01:08:16.993204   61989 logs.go:276] 0 containers: []
	W0924 01:08:16.993216   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:16.993224   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:16.993287   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:17.028122   61989 cri.go:89] found id: ""
	I0924 01:08:17.028150   61989 logs.go:276] 0 containers: []
	W0924 01:08:17.028160   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:17.028169   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:17.028261   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:17.068401   61989 cri.go:89] found id: ""
	I0924 01:08:17.068440   61989 logs.go:276] 0 containers: []
	W0924 01:08:17.068451   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:17.068458   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:17.068530   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:17.104250   61989 cri.go:89] found id: ""
	I0924 01:08:17.104275   61989 logs.go:276] 0 containers: []
	W0924 01:08:17.104283   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:17.104299   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:17.104370   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:17.139178   61989 cri.go:89] found id: ""
	I0924 01:08:17.139201   61989 logs.go:276] 0 containers: []
	W0924 01:08:17.139209   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:17.139215   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:17.139288   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:17.172677   61989 cri.go:89] found id: ""
	I0924 01:08:17.172703   61989 logs.go:276] 0 containers: []
	W0924 01:08:17.172712   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:17.172727   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:17.172742   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:17.222039   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:17.222082   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:17.235342   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:17.235371   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:17.300313   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:17.300350   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:17.300366   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:17.382465   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:17.382517   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:19.924928   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:19.941406   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:19.941496   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:19.976196   61989 cri.go:89] found id: ""
	I0924 01:08:19.976224   61989 logs.go:276] 0 containers: []
	W0924 01:08:19.976238   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:19.976247   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:19.976314   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:20.019652   61989 cri.go:89] found id: ""
	I0924 01:08:20.019680   61989 logs.go:276] 0 containers: []
	W0924 01:08:20.019692   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:20.019699   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:20.019757   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:20.055098   61989 cri.go:89] found id: ""
	I0924 01:08:20.055123   61989 logs.go:276] 0 containers: []
	W0924 01:08:20.055130   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:20.055135   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:20.055183   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:20.091428   61989 cri.go:89] found id: ""
	I0924 01:08:20.091458   61989 logs.go:276] 0 containers: []
	W0924 01:08:20.091469   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:20.091476   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:20.091532   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:20.123608   61989 cri.go:89] found id: ""
	I0924 01:08:20.123642   61989 logs.go:276] 0 containers: []
	W0924 01:08:20.123653   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:20.123678   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:20.123745   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:20.165885   61989 cri.go:89] found id: ""
	I0924 01:08:20.165913   61989 logs.go:276] 0 containers: []
	W0924 01:08:20.165926   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:20.165934   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:20.165985   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:20.199300   61989 cri.go:89] found id: ""
	I0924 01:08:20.199329   61989 logs.go:276] 0 containers: []
	W0924 01:08:20.199341   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:20.199348   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:20.199415   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:20.237201   61989 cri.go:89] found id: ""
	I0924 01:08:20.237253   61989 logs.go:276] 0 containers: []
	W0924 01:08:20.237262   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:20.237271   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:20.237284   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:20.285008   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:20.285049   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:20.298974   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:20.299014   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:20.385765   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:20.385793   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:20.385807   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:20.460715   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:20.460752   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:19.227947   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:21.228448   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:23.229022   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:19.527785   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:21.528095   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:23.528420   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:23.000163   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:23.014755   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:23.014828   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:23.048877   61989 cri.go:89] found id: ""
	I0924 01:08:23.048909   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.048921   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:23.048979   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:23.049049   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:23.085614   61989 cri.go:89] found id: ""
	I0924 01:08:23.085643   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.085650   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:23.085658   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:23.085718   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:23.122027   61989 cri.go:89] found id: ""
	I0924 01:08:23.122060   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.122071   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:23.122078   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:23.122136   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:23.156838   61989 cri.go:89] found id: ""
	I0924 01:08:23.156868   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.156879   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:23.156887   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:23.156947   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:23.191528   61989 cri.go:89] found id: ""
	I0924 01:08:23.191569   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.191579   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:23.191586   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:23.191651   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:23.227627   61989 cri.go:89] found id: ""
	I0924 01:08:23.227651   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.227659   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:23.227665   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:23.227709   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:23.261937   61989 cri.go:89] found id: ""
	I0924 01:08:23.261968   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.261980   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:23.261988   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:23.262039   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:23.297947   61989 cri.go:89] found id: ""
	I0924 01:08:23.297973   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.297986   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:23.297997   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:23.298009   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:23.337783   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:23.337811   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:23.390767   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:23.390808   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:23.404787   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:23.404814   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:23.478768   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:23.478788   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:23.478801   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:25.728154   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:28.227795   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:25.529710   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:28.028153   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:26.060593   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:26.085071   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:26.085137   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:26.121785   61989 cri.go:89] found id: ""
	I0924 01:08:26.121814   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.121826   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:26.121834   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:26.121900   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:26.167942   61989 cri.go:89] found id: ""
	I0924 01:08:26.167971   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.167980   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:26.167991   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:26.168054   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:26.206461   61989 cri.go:89] found id: ""
	I0924 01:08:26.206496   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.206506   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:26.206513   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:26.206582   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:26.243094   61989 cri.go:89] found id: ""
	I0924 01:08:26.243125   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.243136   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:26.243144   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:26.243206   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:26.279303   61989 cri.go:89] found id: ""
	I0924 01:08:26.279331   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.279341   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:26.279348   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:26.279407   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:26.311840   61989 cri.go:89] found id: ""
	I0924 01:08:26.311869   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.311880   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:26.311888   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:26.311954   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:26.345994   61989 cri.go:89] found id: ""
	I0924 01:08:26.346019   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.346027   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:26.346033   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:26.346082   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:26.380570   61989 cri.go:89] found id: ""
	I0924 01:08:26.380601   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.380610   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:26.380619   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:26.380630   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:26.429958   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:26.429993   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:26.443278   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:26.443312   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:26.516353   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:26.516375   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:26.516390   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:26.603310   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:26.603345   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:29.142531   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:29.156548   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:29.156634   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:29.191351   61989 cri.go:89] found id: ""
	I0924 01:08:29.191378   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.191389   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:29.191396   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:29.191451   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:29.232112   61989 cri.go:89] found id: ""
	I0924 01:08:29.232141   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.232152   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:29.232159   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:29.232214   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:29.266082   61989 cri.go:89] found id: ""
	I0924 01:08:29.266104   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.266112   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:29.266118   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:29.266178   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:29.299777   61989 cri.go:89] found id: ""
	I0924 01:08:29.299802   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.299812   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:29.299817   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:29.299883   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:29.342709   61989 cri.go:89] found id: ""
	I0924 01:08:29.342740   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.342749   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:29.342756   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:29.342816   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:29.381255   61989 cri.go:89] found id: ""
	I0924 01:08:29.381303   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.381312   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:29.381318   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:29.381375   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:29.414998   61989 cri.go:89] found id: ""
	I0924 01:08:29.415028   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.415036   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:29.415043   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:29.415101   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:29.448553   61989 cri.go:89] found id: ""
	I0924 01:08:29.448580   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.448589   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:29.448598   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:29.448608   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:29.534936   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:29.535001   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:29.573554   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:29.573584   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:29.623590   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:29.623626   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:29.636141   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:29.636167   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:29.700591   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:30.228993   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:32.229458   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:30.528150   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:33.029011   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:32.201184   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:32.215034   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:32.215102   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:32.250990   61989 cri.go:89] found id: ""
	I0924 01:08:32.251016   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.251026   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:32.251033   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:32.251104   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:32.284448   61989 cri.go:89] found id: ""
	I0924 01:08:32.284483   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.284494   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:32.284504   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:32.284570   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:32.317979   61989 cri.go:89] found id: ""
	I0924 01:08:32.318004   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.318015   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:32.318022   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:32.318078   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:32.352057   61989 cri.go:89] found id: ""
	I0924 01:08:32.352082   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.352093   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:32.352101   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:32.352163   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:32.385459   61989 cri.go:89] found id: ""
	I0924 01:08:32.385482   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.385490   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:32.385496   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:32.385544   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:32.421189   61989 cri.go:89] found id: ""
	I0924 01:08:32.421217   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.421227   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:32.421235   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:32.421307   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:32.464375   61989 cri.go:89] found id: ""
	I0924 01:08:32.464399   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.464406   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:32.464412   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:32.464457   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:32.512716   61989 cri.go:89] found id: ""
	I0924 01:08:32.512742   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.512753   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:32.512763   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:32.512788   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:32.598271   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:32.598293   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:32.598305   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:32.674197   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:32.674233   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:32.715065   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:32.715092   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:32.767522   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:32.767565   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:35.281678   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:35.296302   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:35.296390   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:35.336341   61989 cri.go:89] found id: ""
	I0924 01:08:35.336370   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.336381   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:35.336397   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:35.336454   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:35.373090   61989 cri.go:89] found id: ""
	I0924 01:08:35.373118   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.373127   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:35.373135   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:35.373201   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:35.413628   61989 cri.go:89] found id: ""
	I0924 01:08:35.413660   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.413668   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:35.413674   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:35.413720   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:35.446564   61989 cri.go:89] found id: ""
	I0924 01:08:35.446592   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.446603   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:35.446610   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:35.446669   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:35.478389   61989 cri.go:89] found id: ""
	I0924 01:08:35.478424   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.478435   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:35.478444   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:35.478515   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:35.513992   61989 cri.go:89] found id: ""
	I0924 01:08:35.514015   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.514023   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:35.514029   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:35.514085   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:35.556442   61989 cri.go:89] found id: ""
	I0924 01:08:35.556471   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.556481   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:35.556489   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:35.556571   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:35.594205   61989 cri.go:89] found id: ""
	I0924 01:08:35.594228   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.594236   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:35.594244   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:35.594254   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:35.637601   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:35.637640   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:35.691674   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:35.691711   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:35.705223   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:35.705261   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:35.784000   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:35.784021   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:35.784036   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:34.729064   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:37.227314   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:35.528382   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:38.028508   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:38.370232   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:38.383287   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:38.383358   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:38.417528   61989 cri.go:89] found id: ""
	I0924 01:08:38.417556   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.417564   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:38.417571   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:38.417619   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:38.459788   61989 cri.go:89] found id: ""
	I0924 01:08:38.459814   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.459821   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:38.459828   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:38.459883   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:38.494017   61989 cri.go:89] found id: ""
	I0924 01:08:38.494050   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.494059   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:38.494065   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:38.494135   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:38.526894   61989 cri.go:89] found id: ""
	I0924 01:08:38.526924   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.526935   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:38.526942   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:38.527000   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:38.563831   61989 cri.go:89] found id: ""
	I0924 01:08:38.563859   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.563876   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:38.563884   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:38.563950   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:38.596066   61989 cri.go:89] found id: ""
	I0924 01:08:38.596095   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.596106   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:38.596114   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:38.596172   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:38.630123   61989 cri.go:89] found id: ""
	I0924 01:08:38.630147   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.630157   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:38.630165   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:38.630223   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:38.664714   61989 cri.go:89] found id: ""
	I0924 01:08:38.664743   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.664754   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:38.664765   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:38.664782   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:38.718770   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:38.718802   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:38.732878   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:38.732906   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:38.806441   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:38.806469   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:38.806485   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:38.884416   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:38.884456   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:39.228048   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:41.228574   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:40.527354   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:42.528592   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:41.423782   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:41.436827   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:41.436899   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:41.468283   61989 cri.go:89] found id: ""
	I0924 01:08:41.468316   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.468342   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:41.468353   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:41.468412   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:41.504348   61989 cri.go:89] found id: ""
	I0924 01:08:41.504380   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.504402   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:41.504410   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:41.504470   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:41.544785   61989 cri.go:89] found id: ""
	I0924 01:08:41.544809   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.544818   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:41.544825   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:41.544883   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:41.582924   61989 cri.go:89] found id: ""
	I0924 01:08:41.582954   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.582965   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:41.582973   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:41.583037   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:41.618220   61989 cri.go:89] found id: ""
	I0924 01:08:41.618243   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.618253   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:41.618260   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:41.618329   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:41.653369   61989 cri.go:89] found id: ""
	I0924 01:08:41.653392   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.653400   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:41.653416   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:41.653477   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:41.687036   61989 cri.go:89] found id: ""
	I0924 01:08:41.687058   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.687069   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:41.687077   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:41.687133   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:41.720701   61989 cri.go:89] found id: ""
	I0924 01:08:41.720732   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.720744   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:41.720756   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:41.720776   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:41.798436   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:41.798486   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:41.842639   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:41.842674   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:41.893053   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:41.893086   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:41.907757   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:41.907784   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:41.973466   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:44.474071   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:44.487057   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:44.487119   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:44.521772   61989 cri.go:89] found id: ""
	I0924 01:08:44.521813   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.521835   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:44.521843   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:44.521905   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:44.554928   61989 cri.go:89] found id: ""
	I0924 01:08:44.554956   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.554966   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:44.554977   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:44.555042   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:44.594246   61989 cri.go:89] found id: ""
	I0924 01:08:44.594279   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.594292   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:44.594298   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:44.594344   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:44.629779   61989 cri.go:89] found id: ""
	I0924 01:08:44.629807   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.629819   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:44.629827   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:44.629884   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:44.671671   61989 cri.go:89] found id: ""
	I0924 01:08:44.671694   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.671701   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:44.671707   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:44.671772   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:44.710875   61989 cri.go:89] found id: ""
	I0924 01:08:44.710910   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.710922   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:44.710931   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:44.711000   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:44.744345   61989 cri.go:89] found id: ""
	I0924 01:08:44.744381   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.744389   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:44.744395   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:44.744442   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:44.780771   61989 cri.go:89] found id: ""
	I0924 01:08:44.780797   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.780804   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:44.780812   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:44.780824   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:44.834902   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:44.834958   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:44.848503   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:44.848540   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:44.923117   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:44.923138   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:44.923150   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:45.003806   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:45.003840   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
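The cycle that just ended (and the identical ones before it) shows minikube probing for each control-plane component with "sudo crictl ps -a --quiet --name=<component>" and, finding nothing, falling back to gathering kubelet, dmesg, describe-nodes and CRI-O logs. A minimal Go sketch of that probe loop — illustrative only, not minikube's actual cri.go/logs.go code, and assuming crictl and sudo are available on the node:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Components probed in the log; kindnet and kubernetes-dashboard are
	// checked even though this cluster never runs them.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// Mirrors: sudo crictl ps -a --quiet --name=<component>
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("crictl failed for %s: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		if len(ids) == 0 {
			// Corresponds to the "No container was found matching" warnings above.
			fmt.Printf("No container was found matching %q\n", name)
		}
	}
}

An empty ID list is what produces the paired "0 containers: []" and "No container was found matching" lines in the log.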
	I0924 01:08:46.184585   61323 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.282824063s)
	I0924 01:08:46.184659   61323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 01:08:46.201715   61323 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 01:08:46.215637   61323 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 01:08:46.228701   61323 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 01:08:46.228726   61323 kubeadm.go:157] found existing configuration files:
	
	I0924 01:08:46.228769   61323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 01:08:46.239005   61323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 01:08:46.239065   61323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 01:08:46.250336   61323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 01:08:46.259889   61323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 01:08:46.259961   61323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 01:08:46.271773   61323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 01:08:46.283106   61323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 01:08:46.283175   61323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 01:08:46.293325   61323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 01:08:46.306026   61323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 01:08:46.306111   61323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
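The stale-config pass that just completed follows a fixed pattern: for each kubeconfig under /etc/kubernetes, grep for the expected endpoint https://control-plane.minikube.internal:8443 and, when the check fails (here because the files do not exist at all), remove the file before kubeadm init rewrites it. A hypothetical Go sketch of that pattern — not minikube's actual kubeadm.go, and operating locally rather than over SSH:

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanupStaleConfigs removes any kubeconfig that does not mention the
// expected control-plane endpoint; a missing file counts as a failed check,
// matching the "may not be in ... - will remove" lines above.
func cleanupStaleConfigs(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			fmt.Printf("%q may not be in %s - removing\n", endpoint, f)
			os.Remove(f) // equivalent of: sudo rm -f <file>
		}
	}
}

func main() {
	cleanupStaleConfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}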
	I0924 01:08:46.318859   61323 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 01:08:46.373819   61323 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0924 01:08:46.373973   61323 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 01:08:46.487006   61323 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 01:08:46.487146   61323 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 01:08:46.487299   61323 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0924 01:08:46.495557   61323 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 01:08:46.497537   61323 out.go:235]   - Generating certificates and keys ...
	I0924 01:08:46.497645   61323 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 01:08:46.497732   61323 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 01:08:46.497853   61323 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 01:08:46.497946   61323 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 01:08:46.498041   61323 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 01:08:46.498116   61323 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 01:08:46.498197   61323 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 01:08:46.498280   61323 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 01:08:46.498389   61323 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 01:08:46.498490   61323 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 01:08:46.498547   61323 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 01:08:46.498623   61323 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 01:08:46.714556   61323 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 01:08:46.815030   61323 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0924 01:08:47.011082   61323 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 01:08:47.227052   61323 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 01:08:47.488776   61323 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 01:08:47.489403   61323 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 01:08:47.491864   61323 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 01:08:43.728646   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:46.234412   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:45.029064   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:45.029109   61699 pod_ready.go:82] duration metric: took 4m0.007887151s for pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace to be "Ready" ...
	E0924 01:08:45.029124   61699 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0924 01:08:45.029133   61699 pod_ready.go:39] duration metric: took 4m5.860472412s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
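The four-minute wait that just expired is a poll of the pod's Ready condition until a context deadline is hit, which is what produces the repeated has status "Ready":"False" lines and the final "context deadline exceeded". A sketch of that pattern with client-go — hypothetical code, not minikube's pod_ready.go; the kubeconfig path, the 2s poll interval and the pod name (copied from the log above) are assumptions:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// 4m0s budget, as in "took 4m0.007887151s ... context deadline exceeded".
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()

	err = wait.PollUntilContextCancel(ctx, 2*time.Second, true, func(ctx context.Context) (bool, error) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "metrics-server-6867b74b74-jtx6r", metav1.GetOptions{})
		if err != nil {
			return false, nil // keep polling on transient errors
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				// Produces the equivalent of: has status "Ready":"False"
				fmt.Printf("pod has status %q:%q\n", c.Type, c.Status)
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
	if err != nil {
		fmt.Println("waitPodCondition:", err) // e.g. context deadline exceeded
	}
}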
	I0924 01:08:45.029153   61699 api_server.go:52] waiting for apiserver process to appear ...
	I0924 01:08:45.029189   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:45.029267   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:45.084875   61699 cri.go:89] found id: "306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7"
	I0924 01:08:45.084899   61699 cri.go:89] found id: ""
	I0924 01:08:45.084907   61699 logs.go:276] 1 containers: [306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7]
	I0924 01:08:45.084955   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:45.089534   61699 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:45.089601   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:45.133457   61699 cri.go:89] found id: "2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2"
	I0924 01:08:45.133479   61699 cri.go:89] found id: ""
	I0924 01:08:45.133486   61699 logs.go:276] 1 containers: [2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2]
	I0924 01:08:45.133544   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:45.137523   61699 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:45.137586   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:45.173989   61699 cri.go:89] found id: "ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f"
	I0924 01:08:45.174014   61699 cri.go:89] found id: ""
	I0924 01:08:45.174023   61699 logs.go:276] 1 containers: [ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f]
	I0924 01:08:45.174083   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:45.178084   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:45.178168   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:45.215763   61699 cri.go:89] found id: "58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f"
	I0924 01:08:45.215790   61699 cri.go:89] found id: ""
	I0924 01:08:45.215799   61699 logs.go:276] 1 containers: [58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f]
	I0924 01:08:45.215851   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:45.220052   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:45.220137   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:45.258186   61699 cri.go:89] found id: "f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc"
	I0924 01:08:45.258206   61699 cri.go:89] found id: ""
	I0924 01:08:45.258213   61699 logs.go:276] 1 containers: [f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc]
	I0924 01:08:45.258272   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:45.262402   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:45.262481   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:45.299355   61699 cri.go:89] found id: "55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba"
	I0924 01:08:45.299385   61699 cri.go:89] found id: ""
	I0924 01:08:45.299397   61699 logs.go:276] 1 containers: [55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba]
	I0924 01:08:45.299452   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:45.303404   61699 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:45.303505   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:45.341412   61699 cri.go:89] found id: ""
	I0924 01:08:45.341438   61699 logs.go:276] 0 containers: []
	W0924 01:08:45.341446   61699 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:45.341452   61699 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0924 01:08:45.341508   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0924 01:08:45.377419   61699 cri.go:89] found id: "7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47"
	I0924 01:08:45.377450   61699 cri.go:89] found id: "e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559"
	I0924 01:08:45.377457   61699 cri.go:89] found id: ""
	I0924 01:08:45.377471   61699 logs.go:276] 2 containers: [7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47 e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559]
	I0924 01:08:45.377539   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:45.381497   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:45.385182   61699 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:45.385201   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:45.455618   61699 logs.go:123] Gathering logs for coredns [ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f] ...
	I0924 01:08:45.455661   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f"
	I0924 01:08:45.495007   61699 logs.go:123] Gathering logs for kube-proxy [f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc] ...
	I0924 01:08:45.495037   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc"
	I0924 01:08:45.530196   61699 logs.go:123] Gathering logs for kube-controller-manager [55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba] ...
	I0924 01:08:45.530230   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba"
	I0924 01:08:45.581671   61699 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:45.581709   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:46.122674   61699 logs.go:123] Gathering logs for container status ...
	I0924 01:08:46.122717   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:46.169928   61699 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:46.169965   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:46.184617   61699 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:46.184645   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 01:08:46.330482   61699 logs.go:123] Gathering logs for kube-apiserver [306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7] ...
	I0924 01:08:46.330512   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7"
	I0924 01:08:46.382927   61699 logs.go:123] Gathering logs for etcd [2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2] ...
	I0924 01:08:46.382965   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2"
	I0924 01:08:46.441408   61699 logs.go:123] Gathering logs for kube-scheduler [58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f] ...
	I0924 01:08:46.441442   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f"
	I0924 01:08:46.484956   61699 logs.go:123] Gathering logs for storage-provisioner [7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47] ...
	I0924 01:08:46.484985   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47"
	I0924 01:08:46.522559   61699 logs.go:123] Gathering logs for storage-provisioner [e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559] ...
	I0924 01:08:46.522595   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559"
	I0924 01:08:49.064954   61699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:49.086621   61699 api_server.go:72] duration metric: took 4m15.650065328s to wait for apiserver process to appear ...
	I0924 01:08:49.086648   61699 api_server.go:88] waiting for apiserver healthz status ...
	I0924 01:08:49.086695   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:49.086760   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:47.541843   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:47.555428   61989 kubeadm.go:597] duration metric: took 4m2.297219084s to restartPrimaryControlPlane
	W0924 01:08:47.555528   61989 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0924 01:08:47.555560   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0924 01:08:49.123410   61989 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.567825503s)
	I0924 01:08:49.123501   61989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 01:08:49.142686   61989 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 01:08:49.154484   61989 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 01:08:49.166734   61989 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 01:08:49.166759   61989 kubeadm.go:157] found existing configuration files:
	
	I0924 01:08:49.166813   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 01:08:49.178374   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 01:08:49.178517   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 01:08:49.188871   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 01:08:49.200190   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 01:08:49.200258   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 01:08:49.212895   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 01:08:49.225205   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 01:08:49.225276   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 01:08:49.237828   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 01:08:49.249686   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 01:08:49.249751   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 01:08:49.262505   61989 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 01:08:49.338624   61989 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0924 01:08:49.338712   61989 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 01:08:49.509271   61989 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 01:08:49.509489   61989 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 01:08:49.509636   61989 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0924 01:08:49.724434   61989 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 01:08:47.494323   61323 out.go:235]   - Booting up control plane ...
	I0924 01:08:47.494449   61323 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 01:08:47.494527   61323 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 01:08:47.494904   61323 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 01:08:47.511889   61323 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 01:08:47.518272   61323 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 01:08:47.518343   61323 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 01:08:47.654121   61323 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0924 01:08:47.654273   61323 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0924 01:08:48.156008   61323 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.075879ms
	I0924 01:08:48.156089   61323 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
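The kubelet-check and api-check phases reported above are simple healthz polls with an upper bound of 4m0s. A self-contained Go sketch of that polling pattern — illustrative only, not kubeadm's implementation; the URL and timeout come from the log, while the 500ms retry interval is an assumption:

package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url until it returns HTTP 200 or the context expires.
func waitHealthy(ctx context.Context, url string) error {
	for {
		resp, err := http.Get(url)
		if err == nil {
			ok := resp.StatusCode == http.StatusOK
			resp.Body.Close()
			if ok {
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("%s not healthy: %w", url, ctx.Err())
		case <-time.After(500 * time.Millisecond):
		}
	}
}

func main() {
	// "This can take up to 4m0s" in the kubeadm output above.
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	if err := waitHealthy(ctx, "http://127.0.0.1:10248/healthz"); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kubelet is healthy")
}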
	I0924 01:08:49.726458   61989 out.go:235]   - Generating certificates and keys ...
	I0924 01:08:49.726563   61989 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 01:08:49.726639   61989 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 01:08:49.726737   61989 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 01:08:49.726812   61989 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 01:08:49.727078   61989 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 01:08:49.727375   61989 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 01:08:49.728123   61989 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 01:08:49.729254   61989 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 01:08:49.730178   61989 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 01:08:49.732548   61989 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 01:08:49.732604   61989 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 01:08:49.732676   61989 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 01:08:49.938623   61989 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 01:08:50.774207   61989 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 01:08:51.022535   61989 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 01:08:51.148690   61989 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 01:08:51.168786   61989 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 01:08:51.170070   61989 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 01:08:51.170150   61989 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 01:08:51.342671   61989 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 01:08:48.729168   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:50.729197   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:52.729615   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:53.660805   61323 kubeadm.go:310] [api-check] The API server is healthy after 5.502700892s
	I0924 01:08:53.678006   61323 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0924 01:08:53.693676   61323 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0924 01:08:53.736910   61323 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0924 01:08:53.737186   61323 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-650507 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0924 01:08:53.750738   61323 kubeadm.go:310] [bootstrap-token] Using token: 62empn.zvptxpa69xtioeo1
	I0924 01:08:49.139835   61699 cri.go:89] found id: "306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7"
	I0924 01:08:49.139859   61699 cri.go:89] found id: ""
	I0924 01:08:49.139869   61699 logs.go:276] 1 containers: [306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7]
	I0924 01:08:49.139920   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:49.144770   61699 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:49.144896   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:49.193710   61699 cri.go:89] found id: "2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2"
	I0924 01:08:49.193733   61699 cri.go:89] found id: ""
	I0924 01:08:49.193743   61699 logs.go:276] 1 containers: [2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2]
	I0924 01:08:49.193798   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:49.198090   61699 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:49.198178   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:49.240236   61699 cri.go:89] found id: "ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f"
	I0924 01:08:49.240309   61699 cri.go:89] found id: ""
	I0924 01:08:49.240344   61699 logs.go:276] 1 containers: [ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f]
	I0924 01:08:49.240401   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:49.244573   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:49.244646   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:49.290954   61699 cri.go:89] found id: "58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f"
	I0924 01:08:49.290998   61699 cri.go:89] found id: ""
	I0924 01:08:49.291008   61699 logs.go:276] 1 containers: [58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f]
	I0924 01:08:49.291083   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:49.295602   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:49.295664   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:49.340871   61699 cri.go:89] found id: "f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc"
	I0924 01:08:49.340894   61699 cri.go:89] found id: ""
	I0924 01:08:49.340904   61699 logs.go:276] 1 containers: [f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc]
	I0924 01:08:49.340964   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:49.345362   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:49.345433   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:49.387382   61699 cri.go:89] found id: "55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba"
	I0924 01:08:49.387408   61699 cri.go:89] found id: ""
	I0924 01:08:49.387418   61699 logs.go:276] 1 containers: [55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba]
	I0924 01:08:49.387472   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:49.393388   61699 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:49.393468   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:49.436082   61699 cri.go:89] found id: ""
	I0924 01:08:49.436107   61699 logs.go:276] 0 containers: []
	W0924 01:08:49.436119   61699 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:49.436126   61699 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0924 01:08:49.436187   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0924 01:08:49.490172   61699 cri.go:89] found id: "7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47"
	I0924 01:08:49.490197   61699 cri.go:89] found id: "e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559"
	I0924 01:08:49.490203   61699 cri.go:89] found id: ""
	I0924 01:08:49.490213   61699 logs.go:276] 2 containers: [7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47 e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559]
	I0924 01:08:49.490273   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:49.495438   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:49.500506   61699 logs.go:123] Gathering logs for kube-apiserver [306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7] ...
	I0924 01:08:49.500537   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7"
	I0924 01:08:49.561240   61699 logs.go:123] Gathering logs for kube-proxy [f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc] ...
	I0924 01:08:49.561277   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc"
	I0924 01:08:49.611765   61699 logs.go:123] Gathering logs for kube-controller-manager [55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba] ...
	I0924 01:08:49.611807   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba"
	I0924 01:08:49.689366   61699 logs.go:123] Gathering logs for container status ...
	I0924 01:08:49.689413   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:49.747233   61699 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:49.747271   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:49.852723   61699 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:49.852771   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 01:08:50.006274   61699 logs.go:123] Gathering logs for etcd [2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2] ...
	I0924 01:08:50.006322   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2"
	I0924 01:08:50.064786   61699 logs.go:123] Gathering logs for coredns [ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f] ...
	I0924 01:08:50.064828   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f"
	I0924 01:08:50.104831   61699 logs.go:123] Gathering logs for kube-scheduler [58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f] ...
	I0924 01:08:50.104865   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f"
	I0924 01:08:50.144962   61699 logs.go:123] Gathering logs for storage-provisioner [7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47] ...
	I0924 01:08:50.144990   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47"
	I0924 01:08:50.183923   61699 logs.go:123] Gathering logs for storage-provisioner [e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559] ...
	I0924 01:08:50.183956   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559"
	I0924 01:08:50.222382   61699 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:50.222414   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:50.671849   61699 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:50.671890   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:53.187450   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:08:53.193094   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 200:
	ok
	I0924 01:08:53.194414   61699 api_server.go:141] control plane version: v1.31.1
	I0924 01:08:53.194439   61699 api_server.go:131] duration metric: took 4.107783011s to wait for apiserver health ...
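The healthz probe logged just above is a plain HTTPS GET against the apiserver that is expected to return 200 with the body "ok". A hypothetical one-shot version in Go — minikube trusts the cluster CA, so the InsecureSkipVerify transport here is purely an illustration shortcut:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// InsecureSkipVerify is an assumption for brevity; the real check uses
	// the cluster's CA certificate instead.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.61.186:8444/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect: 200: ok
}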
	I0924 01:08:53.194449   61699 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 01:08:53.194479   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:53.194546   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:53.232560   61699 cri.go:89] found id: "306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7"
	I0924 01:08:53.232584   61699 cri.go:89] found id: ""
	I0924 01:08:53.232594   61699 logs.go:276] 1 containers: [306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7]
	I0924 01:08:53.232649   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:53.236611   61699 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:53.236671   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:53.278194   61699 cri.go:89] found id: "2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2"
	I0924 01:08:53.278223   61699 cri.go:89] found id: ""
	I0924 01:08:53.278233   61699 logs.go:276] 1 containers: [2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2]
	I0924 01:08:53.278291   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:53.283330   61699 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:53.283391   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:53.322371   61699 cri.go:89] found id: "ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f"
	I0924 01:08:53.322399   61699 cri.go:89] found id: ""
	I0924 01:08:53.322408   61699 logs.go:276] 1 containers: [ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f]
	I0924 01:08:53.322459   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:53.326794   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:53.326869   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:53.364035   61699 cri.go:89] found id: "58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f"
	I0924 01:08:53.364064   61699 cri.go:89] found id: ""
	I0924 01:08:53.364075   61699 logs.go:276] 1 containers: [58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f]
	I0924 01:08:53.364140   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:53.368065   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:53.368127   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:53.405651   61699 cri.go:89] found id: "f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc"
	I0924 01:08:53.405679   61699 cri.go:89] found id: ""
	I0924 01:08:53.405687   61699 logs.go:276] 1 containers: [f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc]
	I0924 01:08:53.405754   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:53.410451   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:53.410537   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:53.451079   61699 cri.go:89] found id: "55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba"
	I0924 01:08:53.451111   61699 cri.go:89] found id: ""
	I0924 01:08:53.451121   61699 logs.go:276] 1 containers: [55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba]
	I0924 01:08:53.451183   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:53.456272   61699 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:53.456367   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:53.497323   61699 cri.go:89] found id: ""
	I0924 01:08:53.497360   61699 logs.go:276] 0 containers: []
	W0924 01:08:53.497373   61699 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:53.497387   61699 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0924 01:08:53.497461   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0924 01:08:53.536017   61699 cri.go:89] found id: "7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47"
	I0924 01:08:53.536040   61699 cri.go:89] found id: "e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559"
	I0924 01:08:53.536046   61699 cri.go:89] found id: ""
	I0924 01:08:53.536055   61699 logs.go:276] 2 containers: [7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47 e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559]
	I0924 01:08:53.536114   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:53.542413   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:53.546559   61699 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:53.546592   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:53.560292   61699 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:53.560323   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 01:08:53.684820   61699 logs.go:123] Gathering logs for etcd [2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2] ...
	I0924 01:08:53.684849   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2"
	I0924 01:08:53.734483   61699 logs.go:123] Gathering logs for coredns [ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f] ...
	I0924 01:08:53.734519   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f"
	I0924 01:08:53.780676   61699 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:53.780705   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:53.855917   61699 logs.go:123] Gathering logs for kube-scheduler [58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f] ...
	I0924 01:08:53.855960   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f"
	I0924 01:08:53.906926   61699 logs.go:123] Gathering logs for kube-proxy [f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc] ...
	I0924 01:08:53.906962   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc"
	I0924 01:08:53.953992   61699 logs.go:123] Gathering logs for kube-controller-manager [55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba] ...
	I0924 01:08:53.954019   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba"
	I0924 01:08:54.031302   61699 logs.go:123] Gathering logs for storage-provisioner [7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47] ...
	I0924 01:08:54.031350   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47"
	I0924 01:08:54.073918   61699 logs.go:123] Gathering logs for storage-provisioner [e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559] ...
	I0924 01:08:54.073958   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559"
	I0924 01:08:54.108724   61699 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:54.108765   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
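The log-collection pass above follows a fixed pattern: enumerate container IDs per control-plane component with crictl, then tail each container's log plus the kubelet and CRI-O journals. Run directly on the node, the same commands look roughly like this (the container ID shown is the kube-apiserver one found above):

    # list matching containers, running or exited, IDs only
    sudo crictl ps -a --quiet --name=kube-apiserver
    # tail the last 400 lines of one container's log
    sudo /usr/bin/crictl logs --tail 400 306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7
    # kubelet and CRI-O unit journals
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400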
	I0924 01:08:53.752460   61323 out.go:235]   - Configuring RBAC rules ...
	I0924 01:08:53.752626   61323 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0924 01:08:53.758889   61323 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0924 01:08:53.767101   61323 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0924 01:08:53.770943   61323 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0924 01:08:53.775335   61323 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0924 01:08:53.792963   61323 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0924 01:08:54.070193   61323 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0924 01:08:54.526226   61323 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0924 01:08:55.069668   61323 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0924 01:08:55.070678   61323 kubeadm.go:310] 
	I0924 01:08:55.070751   61323 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0924 01:08:55.070761   61323 kubeadm.go:310] 
	I0924 01:08:55.070844   61323 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0924 01:08:55.070860   61323 kubeadm.go:310] 
	I0924 01:08:55.070910   61323 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0924 01:08:55.070998   61323 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0924 01:08:55.071064   61323 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0924 01:08:55.071074   61323 kubeadm.go:310] 
	I0924 01:08:55.071138   61323 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0924 01:08:55.071159   61323 kubeadm.go:310] 
	I0924 01:08:55.071210   61323 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0924 01:08:55.071217   61323 kubeadm.go:310] 
	I0924 01:08:55.071298   61323 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0924 01:08:55.071428   61323 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0924 01:08:55.071525   61323 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0924 01:08:55.071535   61323 kubeadm.go:310] 
	I0924 01:08:55.071640   61323 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0924 01:08:55.071721   61323 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0924 01:08:55.071738   61323 kubeadm.go:310] 
	I0924 01:08:55.071815   61323 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 62empn.zvptxpa69xtioeo1 \
	I0924 01:08:55.071941   61323 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2d4dd9f37d73db6513005d0e16c5a35989d3b102eed8cac0bf67b360d3396aa8 \
	I0924 01:08:55.071971   61323 kubeadm.go:310] 	--control-plane 
	I0924 01:08:55.071984   61323 kubeadm.go:310] 
	I0924 01:08:55.072089   61323 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0924 01:08:55.072098   61323 kubeadm.go:310] 
	I0924 01:08:55.072198   61323 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 62empn.zvptxpa69xtioeo1 \
	I0924 01:08:55.072324   61323 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2d4dd9f37d73db6513005d0e16c5a35989d3b102eed8cac0bf67b360d3396aa8 
	I0924 01:08:55.073807   61323 kubeadm.go:310] W0924 01:08:46.350959    2551 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 01:08:55.074118   61323 kubeadm.go:310] W0924 01:08:46.352529    2551 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 01:08:55.074256   61323 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
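The three trailing warnings come from kubeadm itself: the generated config still uses the deprecated kubeadm.k8s.io/v1beta3 API, and the kubelet unit is not enabled. Acting on the hints quoted in the messages would look roughly like the following sketch (the --new-config path is illustrative, not something the test writes):

    sudo systemctl enable kubelet.service
    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config migrate \
      --old-config /var/tmp/minikube/kubeadm.yaml \
      --new-config /var/tmp/minikube/kubeadm-migrated.yaml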
	I0924 01:08:55.074295   61323 cni.go:84] Creating CNI manager for ""
	I0924 01:08:55.074312   61323 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:08:55.076241   61323 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 01:08:55.077630   61323 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 01:08:55.088658   61323 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0924 01:08:55.106396   61323 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 01:08:55.106491   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:55.106579   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-650507 minikube.k8s.io/updated_at=2024_09_24T01_08_55_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c minikube.k8s.io/name=embed-certs-650507 minikube.k8s.io/primary=true
	I0924 01:08:55.138376   61323 ops.go:34] apiserver oom_adj: -16
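Immediately after init, minikube checks the apiserver's OOM adjustment and binds the kube-system default service account to cluster-admin; both boil down to plain shell against the in-VM kubeconfig, roughly:

    # confirm the apiserver is shielded from the OOM killer (reported as -16 above)
    cat /proc/$(pgrep kube-apiserver)/oom_adj
    # the minikube-rbac binding from the log line above, shortened for readability
    sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default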
	I0924 01:08:51.344458   61989 out.go:235]   - Booting up control plane ...
	I0924 01:08:51.344607   61989 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 01:08:51.353468   61989 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 01:08:51.356949   61989 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 01:08:51.358082   61989 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 01:08:51.364468   61989 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0924 01:08:54.501805   61699 logs.go:123] Gathering logs for container status ...
	I0924 01:08:54.501847   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:54.548768   61699 logs.go:123] Gathering logs for kube-apiserver [306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7] ...
	I0924 01:08:54.548800   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7"
	I0924 01:08:57.105661   61699 system_pods.go:59] 8 kube-system pods found
	I0924 01:08:57.105688   61699 system_pods.go:61] "coredns-7c65d6cfc9-xxdh2" [297fe292-94bf-468d-9e34-089c4a87429b] Running
	I0924 01:08:57.105693   61699 system_pods.go:61] "etcd-default-k8s-diff-port-465341" [3bd68a1c-e928-40f0-927f-3cde2198cace] Running
	I0924 01:08:57.105697   61699 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-465341" [0a195b76-82ba-4d99-b5a3-ba918ab0b83d] Running
	I0924 01:08:57.105703   61699 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-465341" [9d445611-60f3-4113-bc92-ea8df37ca2f5] Running
	I0924 01:08:57.105706   61699 system_pods.go:61] "kube-proxy-nf8mp" [cdef3aea-b1a8-438b-994f-c3212def9aea] Running
	I0924 01:08:57.105709   61699 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-465341" [4ff703b1-44cd-421a-891c-9f1e5d799026] Running
	I0924 01:08:57.105715   61699 system_pods.go:61] "metrics-server-6867b74b74-jtx6r" [d83599a7-f77d-4fbb-b76f-67d33c60b4a6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:08:57.105722   61699 system_pods.go:61] "storage-provisioner" [b09ad6ef-7517-4de2-a70c-83876efd804e] Running
	I0924 01:08:57.105729   61699 system_pods.go:74] duration metric: took 3.911274774s to wait for pod list to return data ...
	I0924 01:08:57.105736   61699 default_sa.go:34] waiting for default service account to be created ...
	I0924 01:08:57.108031   61699 default_sa.go:45] found service account: "default"
	I0924 01:08:57.108051   61699 default_sa.go:55] duration metric: took 2.307712ms for default service account to be created ...
	I0924 01:08:57.108059   61699 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 01:08:57.112551   61699 system_pods.go:86] 8 kube-system pods found
	I0924 01:08:57.112578   61699 system_pods.go:89] "coredns-7c65d6cfc9-xxdh2" [297fe292-94bf-468d-9e34-089c4a87429b] Running
	I0924 01:08:57.112584   61699 system_pods.go:89] "etcd-default-k8s-diff-port-465341" [3bd68a1c-e928-40f0-927f-3cde2198cace] Running
	I0924 01:08:57.112589   61699 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-465341" [0a195b76-82ba-4d99-b5a3-ba918ab0b83d] Running
	I0924 01:08:57.112593   61699 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-465341" [9d445611-60f3-4113-bc92-ea8df37ca2f5] Running
	I0924 01:08:57.112597   61699 system_pods.go:89] "kube-proxy-nf8mp" [cdef3aea-b1a8-438b-994f-c3212def9aea] Running
	I0924 01:08:57.112600   61699 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-465341" [4ff703b1-44cd-421a-891c-9f1e5d799026] Running
	I0924 01:08:57.112608   61699 system_pods.go:89] "metrics-server-6867b74b74-jtx6r" [d83599a7-f77d-4fbb-b76f-67d33c60b4a6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:08:57.112613   61699 system_pods.go:89] "storage-provisioner" [b09ad6ef-7517-4de2-a70c-83876efd804e] Running
	I0924 01:08:57.112619   61699 system_pods.go:126] duration metric: took 4.555185ms to wait for k8s-apps to be running ...
	I0924 01:08:57.112625   61699 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 01:08:57.112665   61699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 01:08:57.127805   61699 system_svc.go:56] duration metric: took 15.170368ms WaitForService to wait for kubelet
	I0924 01:08:57.127839   61699 kubeadm.go:582] duration metric: took 4m23.691287248s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 01:08:57.127865   61699 node_conditions.go:102] verifying NodePressure condition ...
	I0924 01:08:57.130964   61699 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 01:08:57.130994   61699 node_conditions.go:123] node cpu capacity is 2
	I0924 01:08:57.131008   61699 node_conditions.go:105] duration metric: took 3.13793ms to run NodePressure ...
	I0924 01:08:57.131021   61699 start.go:241] waiting for startup goroutines ...
	I0924 01:08:57.131029   61699 start.go:246] waiting for cluster config update ...
	I0924 01:08:57.131043   61699 start.go:255] writing updated cluster config ...
	I0924 01:08:57.131388   61699 ssh_runner.go:195] Run: rm -f paused
	I0924 01:08:57.182238   61699 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0924 01:08:57.185023   61699 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-465341" cluster and "default" namespace by default
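With the profile reported as Done, the host-side kubeconfig gains a context named after the profile, so the cluster state behind the later assertions can be inspected directly, for example:

    kubectl --context default-k8s-diff-port-465341 -n kube-system get pods
    kubectl --context default-k8s-diff-port-465341 -n kube-system get deploy metrics-server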
	I0924 01:08:55.229370   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:57.729448   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:55.285390   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:55.785813   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:56.285570   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:56.785779   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:57.285599   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:57.786401   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:58.285583   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:58.786037   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:59.286404   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:59.447075   61323 kubeadm.go:1113] duration metric: took 4.340646509s to wait for elevateKubeSystemPrivileges
	I0924 01:08:59.447119   61323 kubeadm.go:394] duration metric: took 4m57.777127509s to StartCluster
	I0924 01:08:59.447141   61323 settings.go:142] acquiring lock: {Name:mk2498060bfcc1414033a486e76a223de88ed325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:08:59.447229   61323 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 01:08:59.449766   61323 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/kubeconfig: {Name:mk3b29105ccc2e600682c419fcfd45ce5d22b5fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:08:59.450091   61323 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 01:08:59.450191   61323 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0924 01:08:59.450308   61323 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-650507"
	I0924 01:08:59.450330   61323 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-650507"
	W0924 01:08:59.450343   61323 addons.go:243] addon storage-provisioner should already be in state true
	I0924 01:08:59.450346   61323 addons.go:69] Setting metrics-server=true in profile "embed-certs-650507"
	I0924 01:08:59.450349   61323 addons.go:69] Setting default-storageclass=true in profile "embed-certs-650507"
	I0924 01:08:59.450366   61323 addons.go:234] Setting addon metrics-server=true in "embed-certs-650507"
	W0924 01:08:59.450374   61323 addons.go:243] addon metrics-server should already be in state true
	I0924 01:08:59.450328   61323 config.go:182] Loaded profile config "embed-certs-650507": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 01:08:59.450381   61323 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-650507"
	I0924 01:08:59.450404   61323 host.go:66] Checking if "embed-certs-650507" exists ...
	I0924 01:08:59.450375   61323 host.go:66] Checking if "embed-certs-650507" exists ...
	I0924 01:08:59.450718   61323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:08:59.450770   61323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:08:59.450805   61323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:08:59.450808   61323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:08:59.450895   61323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:08:59.450842   61323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:08:59.451862   61323 out.go:177] * Verifying Kubernetes components...
	I0924 01:08:59.453214   61323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:08:59.471878   61323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34433
	I0924 01:08:59.472083   61323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46551
	I0924 01:08:59.472239   61323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38089
	I0924 01:08:59.472586   61323 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:08:59.472704   61323 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:08:59.472988   61323 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:08:59.473187   61323 main.go:141] libmachine: Using API Version  1
	I0924 01:08:59.473205   61323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:08:59.473226   61323 main.go:141] libmachine: Using API Version  1
	I0924 01:08:59.473242   61323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:08:59.473418   61323 main.go:141] libmachine: Using API Version  1
	I0924 01:08:59.473433   61323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:08:59.473784   61323 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:08:59.473784   61323 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:08:59.474003   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetState
	I0924 01:08:59.474116   61323 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:08:59.474383   61323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:08:59.474422   61323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:08:59.474591   61323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:08:59.474628   61323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:08:59.478726   61323 addons.go:234] Setting addon default-storageclass=true in "embed-certs-650507"
	W0924 01:08:59.478753   61323 addons.go:243] addon default-storageclass should already be in state true
	I0924 01:08:59.478784   61323 host.go:66] Checking if "embed-certs-650507" exists ...
	I0924 01:08:59.479137   61323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:08:59.479186   61323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:08:59.495021   61323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43089
	I0924 01:08:59.495527   61323 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:08:59.496068   61323 main.go:141] libmachine: Using API Version  1
	I0924 01:08:59.496090   61323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:08:59.496519   61323 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:08:59.496734   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetState
	I0924 01:08:59.498472   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:08:59.498528   61323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39135
	I0924 01:08:59.498971   61323 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:08:59.499485   61323 main.go:141] libmachine: Using API Version  1
	I0924 01:08:59.499498   61323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:08:59.499794   61323 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:08:59.499918   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetState
	I0924 01:08:59.500899   61323 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0924 01:08:59.501731   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:08:59.502154   61323 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0924 01:08:59.502172   61323 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0924 01:08:59.502186   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:08:59.503238   61323 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:08:59.504765   61323 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 01:08:59.504783   61323 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 01:08:59.504801   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:08:59.505483   61323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34577
	I0924 01:08:59.505882   61323 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:08:59.506386   61323 main.go:141] libmachine: Using API Version  1
	I0924 01:08:59.506408   61323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:08:59.506841   61323 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:08:59.507463   61323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:08:59.507505   61323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:08:59.511098   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:08:59.511611   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:08:59.511645   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:08:59.511944   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:08:59.512127   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:08:59.512296   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:08:59.512493   61323 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa Username:docker}
	I0924 01:08:59.514595   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:08:59.515144   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:08:59.515173   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:08:59.515481   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:08:59.515749   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:08:59.515963   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:08:59.516100   61323 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa Username:docker}
	I0924 01:08:59.529920   61323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36769
	I0924 01:08:59.530565   61323 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:08:59.531102   61323 main.go:141] libmachine: Using API Version  1
	I0924 01:08:59.531125   61323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:08:59.531629   61323 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:08:59.531918   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetState
	I0924 01:08:59.533741   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:08:59.533992   61323 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 01:08:59.534007   61323 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 01:08:59.534026   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:08:59.537032   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:08:59.537488   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:08:59.537515   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:08:59.537713   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:08:59.537919   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:08:59.538074   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:08:59.538198   61323 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa Username:docker}
	I0924 01:08:59.680683   61323 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 01:08:59.711414   61323 node_ready.go:35] waiting up to 6m0s for node "embed-certs-650507" to be "Ready" ...
	I0924 01:08:59.721234   61323 node_ready.go:49] node "embed-certs-650507" has status "Ready":"True"
	I0924 01:08:59.721264   61323 node_ready.go:38] duration metric: took 9.820004ms for node "embed-certs-650507" to be "Ready" ...
	I0924 01:08:59.721275   61323 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:08:59.736353   61323 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7295k" in "kube-system" namespace to be "Ready" ...
	I0924 01:08:59.831004   61323 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0924 01:08:59.831041   61323 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0924 01:08:59.871681   61323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 01:08:59.873844   61323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 01:08:59.902662   61323 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0924 01:08:59.902691   61323 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0924 01:08:59.956425   61323 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 01:08:59.956454   61323 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0924 01:08:59.997902   61323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 01:09:01.146340   61323 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.27245536s)
	I0924 01:09:01.146470   61323 main.go:141] libmachine: Making call to close driver server
	I0924 01:09:01.146505   61323 main.go:141] libmachine: (embed-certs-650507) Calling .Close
	I0924 01:09:01.146403   61323 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.274685832s)
	I0924 01:09:01.146582   61323 main.go:141] libmachine: Making call to close driver server
	I0924 01:09:01.146602   61323 main.go:141] libmachine: (embed-certs-650507) Calling .Close
	I0924 01:09:01.146819   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Closing plugin on server side
	I0924 01:09:01.146848   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Closing plugin on server side
	I0924 01:09:01.146868   61323 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:09:01.146875   61323 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:09:01.146882   61323 main.go:141] libmachine: Making call to close driver server
	I0924 01:09:01.146892   61323 main.go:141] libmachine: (embed-certs-650507) Calling .Close
	I0924 01:09:01.146967   61323 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:09:01.146990   61323 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:09:01.147007   61323 main.go:141] libmachine: Making call to close driver server
	I0924 01:09:01.147023   61323 main.go:141] libmachine: (embed-certs-650507) Calling .Close
	I0924 01:09:01.147084   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Closing plugin on server side
	I0924 01:09:01.147117   61323 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:09:01.147133   61323 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:09:01.147370   61323 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:09:01.147392   61323 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:09:01.147378   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Closing plugin on server side
	I0924 01:09:01.180574   61323 main.go:141] libmachine: Making call to close driver server
	I0924 01:09:01.180604   61323 main.go:141] libmachine: (embed-certs-650507) Calling .Close
	I0924 01:09:01.180929   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Closing plugin on server side
	I0924 01:09:01.180977   61323 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:09:01.180986   61323 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:09:01.207538   61323 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.209569759s)
	I0924 01:09:01.207600   61323 main.go:141] libmachine: Making call to close driver server
	I0924 01:09:01.207616   61323 main.go:141] libmachine: (embed-certs-650507) Calling .Close
	I0924 01:09:01.207959   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Closing plugin on server side
	I0924 01:09:01.208002   61323 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:09:01.208011   61323 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:09:01.208019   61323 main.go:141] libmachine: Making call to close driver server
	I0924 01:09:01.208028   61323 main.go:141] libmachine: (embed-certs-650507) Calling .Close
	I0924 01:09:01.208377   61323 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:09:01.208392   61323 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:09:01.208402   61323 addons.go:475] Verifying addon metrics-server=true in "embed-certs-650507"
	I0924 01:09:01.208411   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Closing plugin on server side
	I0924 01:09:01.210500   61323 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0924 01:08:59.731184   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:02.229737   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:01.211900   61323 addons.go:510] duration metric: took 1.761718139s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
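The metrics-server addon is applied above as four manifests (APIService, Deployment, RBAC, Service) in a single kubectl apply; whether that deployment ever becomes Ready is exactly what the pod_ready polling keeps checking. The same check by hand, assuming the addon's usual object names:

    kubectl --context embed-certs-650507 -n kube-system get deploy metrics-server
    kubectl --context embed-certs-650507 get apiservice v1beta1.metrics.k8s.io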
	I0924 01:09:01.751463   61323 pod_ready.go:103] pod "coredns-7c65d6cfc9-7295k" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:04.242260   61323 pod_ready.go:103] pod "coredns-7c65d6cfc9-7295k" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:04.728708   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:06.728878   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:06.243002   61323 pod_ready.go:93] pod "coredns-7c65d6cfc9-7295k" in "kube-system" namespace has status "Ready":"True"
	I0924 01:09:06.243030   61323 pod_ready.go:82] duration metric: took 6.506649267s for pod "coredns-7c65d6cfc9-7295k" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:06.243039   61323 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-r6tcj" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:08.249949   61323 pod_ready.go:103] pod "coredns-7c65d6cfc9-r6tcj" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:09.750009   61323 pod_ready.go:93] pod "coredns-7c65d6cfc9-r6tcj" in "kube-system" namespace has status "Ready":"True"
	I0924 01:09:09.750037   61323 pod_ready.go:82] duration metric: took 3.506990291s for pod "coredns-7c65d6cfc9-r6tcj" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:09.750049   61323 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:09.756600   61323 pod_ready.go:93] pod "etcd-embed-certs-650507" in "kube-system" namespace has status "Ready":"True"
	I0924 01:09:09.756626   61323 pod_ready.go:82] duration metric: took 6.570047ms for pod "etcd-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:09.756635   61323 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:09.762209   61323 pod_ready.go:93] pod "kube-apiserver-embed-certs-650507" in "kube-system" namespace has status "Ready":"True"
	I0924 01:09:09.762235   61323 pod_ready.go:82] duration metric: took 5.593257ms for pod "kube-apiserver-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:09.762248   61323 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:09.772052   61323 pod_ready.go:93] pod "kube-controller-manager-embed-certs-650507" in "kube-system" namespace has status "Ready":"True"
	I0924 01:09:09.772075   61323 pod_ready.go:82] duration metric: took 9.818627ms for pod "kube-controller-manager-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:09.772088   61323 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mwtkg" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:09.777733   61323 pod_ready.go:93] pod "kube-proxy-mwtkg" in "kube-system" namespace has status "Ready":"True"
	I0924 01:09:09.777765   61323 pod_ready.go:82] duration metric: took 5.669609ms for pod "kube-proxy-mwtkg" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:09.777778   61323 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:10.146804   61323 pod_ready.go:93] pod "kube-scheduler-embed-certs-650507" in "kube-system" namespace has status "Ready":"True"
	I0924 01:09:10.146833   61323 pod_ready.go:82] duration metric: took 369.046066ms for pod "kube-scheduler-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:10.146844   61323 pod_ready.go:39] duration metric: took 10.425557831s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:09:10.146861   61323 api_server.go:52] waiting for apiserver process to appear ...
	I0924 01:09:10.146918   61323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:09:10.162335   61323 api_server.go:72] duration metric: took 10.712204486s to wait for apiserver process to appear ...
	I0924 01:09:10.162360   61323 api_server.go:88] waiting for apiserver healthz status ...
	I0924 01:09:10.162381   61323 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0924 01:09:10.166693   61323 api_server.go:279] https://192.168.39.104:8443/healthz returned 200:
	ok
	I0924 01:09:10.167700   61323 api_server.go:141] control plane version: v1.31.1
	I0924 01:09:10.167723   61323 api_server.go:131] duration metric: took 5.355716ms to wait for apiserver health ...
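The health probe here is nothing more than an HTTPS GET against the apiserver's /healthz endpoint; reproduced by hand (certificate verification skipped purely for illustration):

    curl -k https://192.168.39.104:8443/healthz
    # a healthy apiserver answers 200 with the body: ok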
	I0924 01:09:10.167734   61323 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 01:09:10.351584   61323 system_pods.go:59] 9 kube-system pods found
	I0924 01:09:10.351621   61323 system_pods.go:61] "coredns-7c65d6cfc9-7295k" [3261d435-8cb5-4712-8459-26ba766e88e0] Running
	I0924 01:09:10.351629   61323 system_pods.go:61] "coredns-7c65d6cfc9-r6tcj" [df80e9b5-4b43-4b8f-992e-8813ceca39fe] Running
	I0924 01:09:10.351634   61323 system_pods.go:61] "etcd-embed-certs-650507" [1d21c395-ebec-4895-a1b6-11e35c799698] Running
	I0924 01:09:10.351640   61323 system_pods.go:61] "kube-apiserver-embed-certs-650507" [f7f13b75-3ed1-4e04-857f-27e71258ffd7] Running
	I0924 01:09:10.351645   61323 system_pods.go:61] "kube-controller-manager-embed-certs-650507" [4e68c892-06b6-49f1-adab-25c569f95a9a] Running
	I0924 01:09:10.351650   61323 system_pods.go:61] "kube-proxy-mwtkg" [6a893121-8161-4fbc-bb59-1e08483e82b8] Running
	I0924 01:09:10.351655   61323 system_pods.go:61] "kube-scheduler-embed-certs-650507" [bacd126d-7f4f-460b-85c5-17721247d5a5] Running
	I0924 01:09:10.351669   61323 system_pods.go:61] "metrics-server-6867b74b74-lbm9h" [fa504c09-2e16-4a5f-b4b3-a47f0733333d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:09:10.351678   61323 system_pods.go:61] "storage-provisioner" [364a4d4a-7316-48d0-a3e1-1dedff564dfb] Running
	I0924 01:09:10.351692   61323 system_pods.go:74] duration metric: took 183.950994ms to wait for pod list to return data ...
	I0924 01:09:10.351704   61323 default_sa.go:34] waiting for default service account to be created ...
	I0924 01:09:10.547564   61323 default_sa.go:45] found service account: "default"
	I0924 01:09:10.547595   61323 default_sa.go:55] duration metric: took 195.882659ms for default service account to be created ...
	I0924 01:09:10.547605   61323 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 01:09:10.750290   61323 system_pods.go:86] 9 kube-system pods found
	I0924 01:09:10.750327   61323 system_pods.go:89] "coredns-7c65d6cfc9-7295k" [3261d435-8cb5-4712-8459-26ba766e88e0] Running
	I0924 01:09:10.750336   61323 system_pods.go:89] "coredns-7c65d6cfc9-r6tcj" [df80e9b5-4b43-4b8f-992e-8813ceca39fe] Running
	I0924 01:09:10.750344   61323 system_pods.go:89] "etcd-embed-certs-650507" [1d21c395-ebec-4895-a1b6-11e35c799698] Running
	I0924 01:09:10.750352   61323 system_pods.go:89] "kube-apiserver-embed-certs-650507" [f7f13b75-3ed1-4e04-857f-27e71258ffd7] Running
	I0924 01:09:10.750359   61323 system_pods.go:89] "kube-controller-manager-embed-certs-650507" [4e68c892-06b6-49f1-adab-25c569f95a9a] Running
	I0924 01:09:10.750366   61323 system_pods.go:89] "kube-proxy-mwtkg" [6a893121-8161-4fbc-bb59-1e08483e82b8] Running
	I0924 01:09:10.750372   61323 system_pods.go:89] "kube-scheduler-embed-certs-650507" [bacd126d-7f4f-460b-85c5-17721247d5a5] Running
	I0924 01:09:10.750382   61323 system_pods.go:89] "metrics-server-6867b74b74-lbm9h" [fa504c09-2e16-4a5f-b4b3-a47f0733333d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:09:10.750391   61323 system_pods.go:89] "storage-provisioner" [364a4d4a-7316-48d0-a3e1-1dedff564dfb] Running
	I0924 01:09:10.750407   61323 system_pods.go:126] duration metric: took 202.795975ms to wait for k8s-apps to be running ...
	I0924 01:09:10.750416   61323 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 01:09:10.750476   61323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 01:09:10.765539   61323 system_svc.go:56] duration metric: took 15.112281ms WaitForService to wait for kubelet
	I0924 01:09:10.765569   61323 kubeadm.go:582] duration metric: took 11.31544538s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 01:09:10.765588   61323 node_conditions.go:102] verifying NodePressure condition ...
	I0924 01:09:10.947628   61323 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 01:09:10.947654   61323 node_conditions.go:123] node cpu capacity is 2
	I0924 01:09:10.947664   61323 node_conditions.go:105] duration metric: took 182.072269ms to run NodePressure ...
	I0924 01:09:10.947674   61323 start.go:241] waiting for startup goroutines ...
	I0924 01:09:10.947681   61323 start.go:246] waiting for cluster config update ...
	I0924 01:09:10.947691   61323 start.go:255] writing updated cluster config ...
	I0924 01:09:10.947955   61323 ssh_runner.go:195] Run: rm -f paused
	I0924 01:09:10.999208   61323 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0924 01:09:11.001392   61323 out.go:177] * Done! kubectl is now configured to use "embed-certs-650507" cluster and "default" namespace by default
	I0924 01:09:08.729391   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:11.229036   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:13.727852   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:16.229362   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:18.727643   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:20.729183   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:22.731323   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:25.228514   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:27.728747   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:29.729150   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:32.228197   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:31.365725   61989 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0924 01:09:31.366444   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:09:31.366704   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:09:34.729441   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:37.228766   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:36.367209   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:09:36.367654   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:09:39.728035   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:41.729148   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:43.729240   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:46.228006   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:48.228134   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:46.367945   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:09:46.368128   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:09:50.228455   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:52.228646   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:54.229158   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:56.727712   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:58.728522   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:10:00.728964   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:10:02.729909   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:10:05.227781   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:10:07.228729   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:10:06.368912   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:10:06.369182   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
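For this profile the kubeadm kubelet-check keeps failing: the kubelet's local healthz on port 10248 refuses connections, so the static-pod control plane cannot come up. The standard first look on the node is the unit state and its recent journal, plus the same probe kubeadm quotes:

    sudo systemctl status kubelet
    sudo journalctl -u kubelet -n 100 --no-pager
    curl -sSL http://localhost:10248/healthz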
	I0924 01:10:09.728977   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:10:10.222284   61070 pod_ready.go:82] duration metric: took 4m0.000274516s for pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace to be "Ready" ...
	E0924 01:10:10.222354   61070 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace to be "Ready" (will not retry!)
	I0924 01:10:10.222381   61070 pod_ready.go:39] duration metric: took 4m12.043944079s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:10:10.222412   61070 kubeadm.go:597] duration metric: took 4m56.454037737s to restartPrimaryControlPlane
	W0924 01:10:10.222488   61070 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0924 01:10:10.222536   61070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0924 01:10:36.533302   61070 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.310734731s)
	I0924 01:10:36.533377   61070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 01:10:36.556961   61070 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 01:10:36.568298   61070 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 01:10:36.584098   61070 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 01:10:36.584121   61070 kubeadm.go:157] found existing configuration files:
	
	I0924 01:10:36.584178   61070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 01:10:36.594153   61070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 01:10:36.594218   61070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 01:10:36.612646   61070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 01:10:36.626433   61070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 01:10:36.626506   61070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 01:10:36.636161   61070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 01:10:36.654017   61070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 01:10:36.654075   61070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 01:10:36.663760   61070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 01:10:36.673737   61070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 01:10:36.673799   61070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 01:10:36.684005   61070 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 01:10:36.731568   61070 kubeadm.go:310] W0924 01:10:36.713557    3094 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 01:10:36.733592   61070 kubeadm.go:310] W0924 01:10:36.715675    3094 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 01:10:36.850767   61070 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 01:10:45.349295   61070 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0924 01:10:45.349386   61070 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 01:10:45.349486   61070 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 01:10:45.349600   61070 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 01:10:45.349733   61070 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0924 01:10:45.349836   61070 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 01:10:45.351746   61070 out.go:235]   - Generating certificates and keys ...
	I0924 01:10:45.351843   61070 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 01:10:45.351939   61070 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 01:10:45.352055   61070 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 01:10:45.352160   61070 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 01:10:45.352228   61070 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 01:10:45.352297   61070 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 01:10:45.352392   61070 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 01:10:45.352477   61070 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 01:10:45.352551   61070 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 01:10:45.352664   61070 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 01:10:45.352734   61070 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 01:10:45.352904   61070 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 01:10:45.352956   61070 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 01:10:45.353038   61070 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0924 01:10:45.353127   61070 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 01:10:45.353209   61070 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 01:10:45.353300   61070 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 01:10:45.353372   61070 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 01:10:45.353446   61070 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 01:10:45.354948   61070 out.go:235]   - Booting up control plane ...
	I0924 01:10:45.355023   61070 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 01:10:45.355090   61070 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 01:10:45.355144   61070 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 01:10:45.355226   61070 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 01:10:45.355310   61070 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 01:10:45.355356   61070 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 01:10:45.355476   61070 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0924 01:10:45.355585   61070 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0924 01:10:45.355658   61070 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001537437s
	I0924 01:10:45.355728   61070 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0924 01:10:45.355807   61070 kubeadm.go:310] [api-check] The API server is healthy after 5.002387582s
	I0924 01:10:45.355955   61070 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0924 01:10:45.356129   61070 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0924 01:10:45.356230   61070 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0924 01:10:45.356516   61070 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-674057 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0924 01:10:45.356571   61070 kubeadm.go:310] [bootstrap-token] Using token: g2v97n.iz49hjb4wh5cfkiq
	I0924 01:10:45.358203   61070 out.go:235]   - Configuring RBAC rules ...
	I0924 01:10:45.358333   61070 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0924 01:10:45.358439   61070 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0924 01:10:45.358562   61070 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0924 01:10:45.358667   61070 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0924 01:10:45.358773   61070 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0924 01:10:45.358851   61070 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0924 01:10:45.358997   61070 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0924 01:10:45.359059   61070 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0924 01:10:45.359101   61070 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0924 01:10:45.359111   61070 kubeadm.go:310] 
	I0924 01:10:45.359164   61070 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0924 01:10:45.359171   61070 kubeadm.go:310] 
	I0924 01:10:45.359263   61070 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0924 01:10:45.359280   61070 kubeadm.go:310] 
	I0924 01:10:45.359309   61070 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0924 01:10:45.359387   61070 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0924 01:10:45.359458   61070 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0924 01:10:45.359471   61070 kubeadm.go:310] 
	I0924 01:10:45.359559   61070 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0924 01:10:45.359568   61070 kubeadm.go:310] 
	I0924 01:10:45.359613   61070 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0924 01:10:45.359619   61070 kubeadm.go:310] 
	I0924 01:10:45.359665   61070 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0924 01:10:45.359728   61070 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0924 01:10:45.359800   61070 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0924 01:10:45.359813   61070 kubeadm.go:310] 
	I0924 01:10:45.359879   61070 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0924 01:10:45.359978   61070 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0924 01:10:45.359996   61070 kubeadm.go:310] 
	I0924 01:10:45.360101   61070 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token g2v97n.iz49hjb4wh5cfkiq \
	I0924 01:10:45.360218   61070 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2d4dd9f37d73db6513005d0e16c5a35989d3b102eed8cac0bf67b360d3396aa8 \
	I0924 01:10:45.360251   61070 kubeadm.go:310] 	--control-plane 
	I0924 01:10:45.360258   61070 kubeadm.go:310] 
	I0924 01:10:45.360453   61070 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0924 01:10:45.360481   61070 kubeadm.go:310] 
	I0924 01:10:45.360595   61070 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token g2v97n.iz49hjb4wh5cfkiq \
	I0924 01:10:45.360693   61070 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2d4dd9f37d73db6513005d0e16c5a35989d3b102eed8cac0bf67b360d3396aa8 
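	The sha256 value in the join commands above is the digest of the cluster CA's public key. As a sketch only, if it ever needs to be recomputed on the control-plane node, the command commonly given in the upstream kubeadm documentation is the one below; the certificate path here is taken from the certificateDir shown in the [certs] lines of this run (/var/lib/minikube/certs), whereas a stock kubeadm install would use /etc/kubernetes/pki:
	
	  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'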
	I0924 01:10:45.360706   61070 cni.go:84] Creating CNI manager for ""
	I0924 01:10:45.360713   61070 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:10:45.362153   61070 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 01:10:46.371109   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:10:46.371309   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:10:46.371318   61989 kubeadm.go:310] 
	I0924 01:10:46.371352   61989 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0924 01:10:46.371455   61989 kubeadm.go:310] 		timed out waiting for the condition
	I0924 01:10:46.371490   61989 kubeadm.go:310] 
	I0924 01:10:46.371546   61989 kubeadm.go:310] 	This error is likely caused by:
	I0924 01:10:46.371592   61989 kubeadm.go:310] 		- The kubelet is not running
	I0924 01:10:46.371734   61989 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0924 01:10:46.371751   61989 kubeadm.go:310] 
	I0924 01:10:46.371888   61989 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0924 01:10:46.371936   61989 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0924 01:10:46.371978   61989 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0924 01:10:46.371988   61989 kubeadm.go:310] 
	I0924 01:10:46.372124   61989 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0924 01:10:46.372253   61989 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0924 01:10:46.372262   61989 kubeadm.go:310] 
	I0924 01:10:46.372442   61989 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0924 01:10:46.372578   61989 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0924 01:10:46.372680   61989 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0924 01:10:46.372756   61989 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0924 01:10:46.372765   61989 kubeadm.go:310] 
	I0924 01:10:46.373578   61989 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 01:10:46.373675   61989 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0924 01:10:46.373790   61989 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0924 01:10:46.373938   61989 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0924 01:10:46.373987   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0924 01:10:46.834432   61989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 01:10:46.851214   61989 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 01:10:46.862648   61989 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 01:10:46.862675   61989 kubeadm.go:157] found existing configuration files:
	
	I0924 01:10:46.862733   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 01:10:46.873005   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 01:10:46.873073   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 01:10:46.884007   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 01:10:46.893944   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 01:10:46.894016   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 01:10:46.905036   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 01:10:46.914953   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 01:10:46.915024   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 01:10:46.924881   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 01:10:46.935132   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 01:10:46.935192   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 01:10:46.945837   61989 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 01:10:47.018713   61989 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0924 01:10:47.018861   61989 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 01:10:47.159920   61989 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 01:10:47.160042   61989 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 01:10:47.160168   61989 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0924 01:10:47.349360   61989 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 01:10:47.351645   61989 out.go:235]   - Generating certificates and keys ...
	I0924 01:10:47.351763   61989 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 01:10:47.351918   61989 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 01:10:47.352040   61989 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 01:10:47.352118   61989 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 01:10:47.352205   61989 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 01:10:47.352298   61989 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 01:10:47.352401   61989 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 01:10:47.352481   61989 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 01:10:47.352574   61989 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 01:10:47.352662   61989 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 01:10:47.352705   61989 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 01:10:47.352767   61989 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 01:10:47.467301   61989 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 01:10:47.622085   61989 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 01:10:47.726807   61989 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 01:10:47.951249   61989 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 01:10:47.973392   61989 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 01:10:47.974396   61989 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 01:10:47.974440   61989 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 01:10:48.127629   61989 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 01:10:45.363348   61070 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 01:10:45.374505   61070 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
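	The 496-byte file written here is minikube's bridge CNI configuration for CRI-O; its exact contents are not captured in the log. As a rough sketch under that caveat, a minimal bridge conflist in /etc/cni/net.d generally has the shape below, using the standard CNI reference plugins; the pod subnet is illustrative and not necessarily what minikube actually wrote:
	
	  {
	    "cniVersion": "0.3.1",
	    "name": "bridge",
	    "plugins": [
	      {
	        "type": "bridge",
	        "bridge": "bridge",
	        "isDefaultGateway": true,
	        "ipMasq": true,
	        "hairpinMode": true,
	        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	      },
	      { "type": "portmap", "capabilities": { "portMappings": true } }
	    ]
	  }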
	I0924 01:10:45.391838   61070 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 01:10:45.391947   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:45.391999   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-674057 minikube.k8s.io/updated_at=2024_09_24T01_10_45_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c minikube.k8s.io/name=no-preload-674057 minikube.k8s.io/primary=true
	I0924 01:10:45.583482   61070 ops.go:34] apiserver oom_adj: -16
	I0924 01:10:45.583498   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:46.083831   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:46.583990   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:47.084184   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:47.583925   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:48.083775   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:48.583654   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:49.084305   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:49.584636   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:50.084620   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:50.226320   61070 kubeadm.go:1113] duration metric: took 4.834429832s to wait for elevateKubeSystemPrivileges
	I0924 01:10:50.226363   61070 kubeadm.go:394] duration metric: took 5m36.514145334s to StartCluster
	I0924 01:10:50.226386   61070 settings.go:142] acquiring lock: {Name:mk2498060bfcc1414033a486e76a223de88ed325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:10:50.226480   61070 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 01:10:50.229196   61070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/kubeconfig: {Name:mk3b29105ccc2e600682c419fcfd45ce5d22b5fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:10:50.229530   61070 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.161 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 01:10:50.229600   61070 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0924 01:10:50.229703   61070 addons.go:69] Setting storage-provisioner=true in profile "no-preload-674057"
	I0924 01:10:50.229725   61070 addons.go:234] Setting addon storage-provisioner=true in "no-preload-674057"
	W0924 01:10:50.229733   61070 addons.go:243] addon storage-provisioner should already be in state true
	I0924 01:10:50.229735   61070 addons.go:69] Setting default-storageclass=true in profile "no-preload-674057"
	I0924 01:10:50.229756   61070 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-674057"
	I0924 01:10:50.229764   61070 host.go:66] Checking if "no-preload-674057" exists ...
	I0924 01:10:50.229789   61070 config.go:182] Loaded profile config "no-preload-674057": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 01:10:50.229781   61070 addons.go:69] Setting metrics-server=true in profile "no-preload-674057"
	I0924 01:10:50.229847   61070 addons.go:234] Setting addon metrics-server=true in "no-preload-674057"
	W0924 01:10:50.229855   61070 addons.go:243] addon metrics-server should already be in state true
	I0924 01:10:50.229871   61070 host.go:66] Checking if "no-preload-674057" exists ...
	I0924 01:10:50.230228   61070 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:10:50.230268   61070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:10:50.230320   61070 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:10:50.230351   61070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:10:50.230355   61070 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:10:50.230389   61070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:10:50.231531   61070 out.go:177] * Verifying Kubernetes components...
	I0924 01:10:50.233506   61070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:10:50.252485   61070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36253
	I0924 01:10:50.252844   61070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34399
	I0924 01:10:50.253068   61070 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:10:50.253217   61070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45253
	I0924 01:10:50.253653   61070 main.go:141] libmachine: Using API Version  1
	I0924 01:10:50.253676   61070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:10:50.253705   61070 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:10:50.254050   61070 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:10:50.254203   61070 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:10:50.254236   61070 main.go:141] libmachine: Using API Version  1
	I0924 01:10:50.254250   61070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:10:50.254591   61070 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:10:50.254814   61070 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:10:50.254829   61070 main.go:141] libmachine: Using API Version  1
	I0924 01:10:50.254851   61070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:10:50.254864   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetState
	I0924 01:10:50.254984   61070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:10:50.255389   61070 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:10:50.255983   61070 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:10:50.256028   61070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:10:50.258757   61070 addons.go:234] Setting addon default-storageclass=true in "no-preload-674057"
	W0924 01:10:50.258781   61070 addons.go:243] addon default-storageclass should already be in state true
	I0924 01:10:50.258861   61070 host.go:66] Checking if "no-preload-674057" exists ...
	I0924 01:10:50.259186   61070 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:10:50.259237   61070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:10:50.276636   61070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44681
	I0924 01:10:50.276806   61070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45851
	I0924 01:10:50.277196   61070 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:10:50.277312   61070 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:10:50.277771   61070 main.go:141] libmachine: Using API Version  1
	I0924 01:10:50.277795   61070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:10:50.278022   61070 main.go:141] libmachine: Using API Version  1
	I0924 01:10:50.278044   61070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:10:50.278213   61070 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:10:50.278380   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetState
	I0924 01:10:50.278485   61070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39655
	I0924 01:10:50.278806   61070 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:10:50.278877   61070 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:10:50.279106   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetState
	I0924 01:10:50.279244   61070 main.go:141] libmachine: Using API Version  1
	I0924 01:10:50.279260   61070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:10:50.279668   61070 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:10:50.280215   61070 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:10:50.280263   61070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:10:50.280315   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:10:50.281796   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:10:50.282123   61070 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:10:50.283561   61070 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0924 01:10:48.129312   61989 out.go:235]   - Booting up control plane ...
	I0924 01:10:48.129446   61989 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 01:10:48.139821   61989 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 01:10:48.143120   61989 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 01:10:48.144038   61989 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 01:10:48.146275   61989 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0924 01:10:50.283656   61070 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 01:10:50.283674   61070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 01:10:50.283688   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:10:50.284778   61070 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0924 01:10:50.284793   61070 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0924 01:10:50.284811   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:10:50.288106   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:10:50.288477   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:10:50.288498   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:10:50.288524   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:10:50.288679   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:10:50.288867   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:10:50.289019   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:10:50.289185   61070 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/no-preload-674057/id_rsa Username:docker}
	I0924 01:10:50.289309   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:10:50.289338   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:10:50.289613   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:10:50.289773   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:10:50.289938   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:10:50.290073   61070 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/no-preload-674057/id_rsa Username:docker}
	I0924 01:10:50.323722   61070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38397
	I0924 01:10:50.324221   61070 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:10:50.324873   61070 main.go:141] libmachine: Using API Version  1
	I0924 01:10:50.324901   61070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:10:50.325334   61070 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:10:50.325572   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetState
	I0924 01:10:50.327779   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:10:50.328071   61070 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 01:10:50.328092   61070 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 01:10:50.328119   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:10:50.331721   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:10:50.331988   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:10:50.332022   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:10:50.332209   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:10:50.332455   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:10:50.332658   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:10:50.332837   61070 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/no-preload-674057/id_rsa Username:docker}
	I0924 01:10:50.471507   61070 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 01:10:50.502289   61070 node_ready.go:35] waiting up to 6m0s for node "no-preload-674057" to be "Ready" ...
	I0924 01:10:50.522752   61070 node_ready.go:49] node "no-preload-674057" has status "Ready":"True"
	I0924 01:10:50.522784   61070 node_ready.go:38] duration metric: took 20.46398ms for node "no-preload-674057" to be "Ready" ...
	I0924 01:10:50.522797   61070 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:10:50.537297   61070 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nqwzr" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:50.576703   61070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 01:10:50.638655   61070 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0924 01:10:50.638679   61070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0924 01:10:50.673535   61070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 01:10:50.691443   61070 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0924 01:10:50.691475   61070 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0924 01:10:50.791572   61070 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 01:10:50.791596   61070 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0924 01:10:50.887143   61070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
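	Once these manifests are applied, the rollout can be checked out of band with standard kubectl commands; v1beta1.metrics.k8s.io is the APIService that metrics-server registers by default, and kubectl top only returns data after it reports Available=True. Note that this test profile points the deployment at a placeholder image (fake.domain/registry.k8s.io/echoserver:1.4, see below), so the pod staying unready later in the log is expected:
	
	  kubectl -n kube-system rollout status deployment/metrics-server
	  kubectl get apiservice v1beta1.metrics.k8s.io
	  kubectl top nodes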
	I0924 01:10:51.535179   61070 main.go:141] libmachine: Making call to close driver server
	I0924 01:10:51.535211   61070 main.go:141] libmachine: (no-preload-674057) Calling .Close
	I0924 01:10:51.535247   61070 main.go:141] libmachine: Making call to close driver server
	I0924 01:10:51.535269   61070 main.go:141] libmachine: (no-preload-674057) Calling .Close
	I0924 01:10:51.535531   61070 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:10:51.535553   61070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:10:51.535563   61070 main.go:141] libmachine: Making call to close driver server
	I0924 01:10:51.535571   61070 main.go:141] libmachine: (no-preload-674057) Calling .Close
	I0924 01:10:51.535572   61070 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:10:51.535584   61070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:10:51.535591   61070 main.go:141] libmachine: Making call to close driver server
	I0924 01:10:51.535598   61070 main.go:141] libmachine: (no-preload-674057) Calling .Close
	I0924 01:10:51.535809   61070 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:10:51.535830   61070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:10:51.536069   61070 main.go:141] libmachine: (no-preload-674057) DBG | Closing plugin on server side
	I0924 01:10:51.536078   61070 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:10:51.536088   61070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:10:51.563511   61070 main.go:141] libmachine: Making call to close driver server
	I0924 01:10:51.563537   61070 main.go:141] libmachine: (no-preload-674057) Calling .Close
	I0924 01:10:51.563856   61070 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:10:51.563880   61070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:10:51.800860   61070 main.go:141] libmachine: Making call to close driver server
	I0924 01:10:51.800889   61070 main.go:141] libmachine: (no-preload-674057) Calling .Close
	I0924 01:10:51.801192   61070 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:10:51.801211   61070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:10:51.801224   61070 main.go:141] libmachine: Making call to close driver server
	I0924 01:10:51.801233   61070 main.go:141] libmachine: (no-preload-674057) Calling .Close
	I0924 01:10:51.801527   61070 main.go:141] libmachine: (no-preload-674057) DBG | Closing plugin on server side
	I0924 01:10:51.801559   61070 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:10:51.801567   61070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:10:51.801577   61070 addons.go:475] Verifying addon metrics-server=true in "no-preload-674057"
	I0924 01:10:51.803735   61070 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0924 01:10:51.805581   61070 addons.go:510] duration metric: took 1.575985263s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0924 01:10:52.544028   61070 pod_ready.go:103] pod "coredns-7c65d6cfc9-nqwzr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:10:53.564056   61070 pod_ready.go:93] pod "coredns-7c65d6cfc9-nqwzr" in "kube-system" namespace has status "Ready":"True"
	I0924 01:10:53.564089   61070 pod_ready.go:82] duration metric: took 3.026767371s for pod "coredns-7c65d6cfc9-nqwzr" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:53.564102   61070 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-x7cv6" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:53.573039   61070 pod_ready.go:93] pod "coredns-7c65d6cfc9-x7cv6" in "kube-system" namespace has status "Ready":"True"
	I0924 01:10:53.573076   61070 pod_ready.go:82] duration metric: took 8.965144ms for pod "coredns-7c65d6cfc9-x7cv6" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:53.573090   61070 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.081080   61070 pod_ready.go:93] pod "etcd-no-preload-674057" in "kube-system" namespace has status "Ready":"True"
	I0924 01:10:54.081105   61070 pod_ready.go:82] duration metric: took 508.007072ms for pod "etcd-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.081115   61070 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.087054   61070 pod_ready.go:93] pod "kube-apiserver-no-preload-674057" in "kube-system" namespace has status "Ready":"True"
	I0924 01:10:54.087079   61070 pod_ready.go:82] duration metric: took 5.957569ms for pod "kube-apiserver-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.087091   61070 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.094018   61070 pod_ready.go:93] pod "kube-controller-manager-no-preload-674057" in "kube-system" namespace has status "Ready":"True"
	I0924 01:10:54.094043   61070 pod_ready.go:82] duration metric: took 6.944048ms for pod "kube-controller-manager-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.094053   61070 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-k54d7" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.341307   61070 pod_ready.go:93] pod "kube-proxy-k54d7" in "kube-system" namespace has status "Ready":"True"
	I0924 01:10:54.341326   61070 pod_ready.go:82] duration metric: took 247.267987ms for pod "kube-proxy-k54d7" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.341335   61070 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.741702   61070 pod_ready.go:93] pod "kube-scheduler-no-preload-674057" in "kube-system" namespace has status "Ready":"True"
	I0924 01:10:54.741732   61070 pod_ready.go:82] duration metric: took 400.389532ms for pod "kube-scheduler-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.741742   61070 pod_ready.go:39] duration metric: took 4.218931841s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:10:54.741759   61070 api_server.go:52] waiting for apiserver process to appear ...
	I0924 01:10:54.741827   61070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:10:54.758692   61070 api_server.go:72] duration metric: took 4.529120201s to wait for apiserver process to appear ...
	I0924 01:10:54.758723   61070 api_server.go:88] waiting for apiserver healthz status ...
	I0924 01:10:54.758744   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:10:54.764587   61070 api_server.go:279] https://192.168.50.161:8443/healthz returned 200:
	ok
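	The healthz probe logged here can be reproduced by hand against the same endpoint. Anonymous access to /healthz is normally permitted by the default system:public-info-viewer binding; -k skips verification of the self-signed serving certificate, or the cluster CA from the certificateDir shown earlier in this log can be trusted instead:
	
	  curl -k https://192.168.50.161:8443/healthz
	  curl --cacert /var/lib/minikube/certs/ca.crt https://192.168.50.161:8443/healthz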
	I0924 01:10:54.765620   61070 api_server.go:141] control plane version: v1.31.1
	I0924 01:10:54.765639   61070 api_server.go:131] duration metric: took 6.909845ms to wait for apiserver health ...
	I0924 01:10:54.765646   61070 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 01:10:54.945080   61070 system_pods.go:59] 9 kube-system pods found
	I0924 01:10:54.945121   61070 system_pods.go:61] "coredns-7c65d6cfc9-nqwzr" [9773e4bf-9848-47d8-b87b-897fbdd22d42] Running
	I0924 01:10:54.945128   61070 system_pods.go:61] "coredns-7c65d6cfc9-x7cv6" [9e96941a-b045-48e2-be06-50cc29f8ec25] Running
	I0924 01:10:54.945134   61070 system_pods.go:61] "etcd-no-preload-674057" [3ed2a57d-06a2-4ee2-9bc0-9042c1a88d09] Running
	I0924 01:10:54.945140   61070 system_pods.go:61] "kube-apiserver-no-preload-674057" [e915c4f9-a44e-4d36-9bf4-033de2a512f2] Running
	I0924 01:10:54.945145   61070 system_pods.go:61] "kube-controller-manager-no-preload-674057" [71492ec7-1fd8-49a3-973d-b62141c7b768] Running
	I0924 01:10:54.945150   61070 system_pods.go:61] "kube-proxy-k54d7" [b67ac411-52b5-4d58-9db3-d2d92b63a21f] Running
	I0924 01:10:54.945161   61070 system_pods.go:61] "kube-scheduler-no-preload-674057" [927b2a09-4fb1-499c-a2e6-6185a88facdd] Running
	I0924 01:10:54.945172   61070 system_pods.go:61] "metrics-server-6867b74b74-w5j2x" [57fd868f-ab5c-495a-869a-45e8f81f4014] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:10:54.945180   61070 system_pods.go:61] "storage-provisioner" [341fd764-a3bd-4d28-bc6a-6ec9fa8a5347] Running
	I0924 01:10:54.945191   61070 system_pods.go:74] duration metric: took 179.539019ms to wait for pod list to return data ...
	I0924 01:10:54.945205   61070 default_sa.go:34] waiting for default service account to be created ...
	I0924 01:10:55.141944   61070 default_sa.go:45] found service account: "default"
	I0924 01:10:55.141973   61070 default_sa.go:55] duration metric: took 196.760922ms for default service account to be created ...
	I0924 01:10:55.141984   61070 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 01:10:55.344235   61070 system_pods.go:86] 9 kube-system pods found
	I0924 01:10:55.344273   61070 system_pods.go:89] "coredns-7c65d6cfc9-nqwzr" [9773e4bf-9848-47d8-b87b-897fbdd22d42] Running
	I0924 01:10:55.344282   61070 system_pods.go:89] "coredns-7c65d6cfc9-x7cv6" [9e96941a-b045-48e2-be06-50cc29f8ec25] Running
	I0924 01:10:55.344288   61070 system_pods.go:89] "etcd-no-preload-674057" [3ed2a57d-06a2-4ee2-9bc0-9042c1a88d09] Running
	I0924 01:10:55.344293   61070 system_pods.go:89] "kube-apiserver-no-preload-674057" [e915c4f9-a44e-4d36-9bf4-033de2a512f2] Running
	I0924 01:10:55.344297   61070 system_pods.go:89] "kube-controller-manager-no-preload-674057" [71492ec7-1fd8-49a3-973d-b62141c7b768] Running
	I0924 01:10:55.344301   61070 system_pods.go:89] "kube-proxy-k54d7" [b67ac411-52b5-4d58-9db3-d2d92b63a21f] Running
	I0924 01:10:55.344304   61070 system_pods.go:89] "kube-scheduler-no-preload-674057" [927b2a09-4fb1-499c-a2e6-6185a88facdd] Running
	I0924 01:10:55.344310   61070 system_pods.go:89] "metrics-server-6867b74b74-w5j2x" [57fd868f-ab5c-495a-869a-45e8f81f4014] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:10:55.344315   61070 system_pods.go:89] "storage-provisioner" [341fd764-a3bd-4d28-bc6a-6ec9fa8a5347] Running
	I0924 01:10:55.344324   61070 system_pods.go:126] duration metric: took 202.334823ms to wait for k8s-apps to be running ...
	I0924 01:10:55.344361   61070 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 01:10:55.344406   61070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 01:10:55.361050   61070 system_svc.go:56] duration metric: took 16.6812ms WaitForService to wait for kubelet
	I0924 01:10:55.361082   61070 kubeadm.go:582] duration metric: took 5.13151522s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 01:10:55.361104   61070 node_conditions.go:102] verifying NodePressure condition ...
	I0924 01:10:55.541766   61070 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 01:10:55.541799   61070 node_conditions.go:123] node cpu capacity is 2
	I0924 01:10:55.541812   61070 node_conditions.go:105] duration metric: took 180.702708ms to run NodePressure ...
	I0924 01:10:55.541826   61070 start.go:241] waiting for startup goroutines ...
	I0924 01:10:55.541837   61070 start.go:246] waiting for cluster config update ...
	I0924 01:10:55.541850   61070 start.go:255] writing updated cluster config ...
	I0924 01:10:55.542100   61070 ssh_runner.go:195] Run: rm -f paused
	I0924 01:10:55.590629   61070 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0924 01:10:55.592850   61070 out.go:177] * Done! kubectl is now configured to use "no-preload-674057" cluster and "default" namespace by default
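For reference, the readiness sequence logged above (system pods running, default service account present, kubelet active, apiserver healthz returning "ok") can be spot-checked by hand once the start completes. A minimal sketch, assuming only the profile name and endpoint already shown in this log (no-preload-674057, https://192.168.50.161:8443):

    # re-run the same checks minikube just performed
    kubectl --context no-preload-674057 get pods -n kube-system
    kubectl --context no-preload-674057 get --raw /healthz          # expect "ok", as at 01:10:54
    minikube -p no-preload-674057 ssh -- sudo systemctl is-active kubelet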
	I0924 01:11:28.148929   61989 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0924 01:11:28.149086   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:11:28.149360   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:11:33.150102   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:11:33.150283   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:11:43.151281   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:11:43.151540   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:12:03.152338   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:12:03.152562   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:12:43.151221   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:12:43.151503   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:12:43.151532   61989 kubeadm.go:310] 
	I0924 01:12:43.151585   61989 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0924 01:12:43.151645   61989 kubeadm.go:310] 		timed out waiting for the condition
	I0924 01:12:43.151655   61989 kubeadm.go:310] 
	I0924 01:12:43.151729   61989 kubeadm.go:310] 	This error is likely caused by:
	I0924 01:12:43.151779   61989 kubeadm.go:310] 		- The kubelet is not running
	I0924 01:12:43.151940   61989 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0924 01:12:43.151954   61989 kubeadm.go:310] 
	I0924 01:12:43.152095   61989 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0924 01:12:43.152154   61989 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0924 01:12:43.152201   61989 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0924 01:12:43.152207   61989 kubeadm.go:310] 
	I0924 01:12:43.152294   61989 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0924 01:12:43.152411   61989 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0924 01:12:43.152424   61989 kubeadm.go:310] 
	I0924 01:12:43.152565   61989 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0924 01:12:43.152653   61989 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0924 01:12:43.152718   61989 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0924 01:12:43.152794   61989 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0924 01:12:43.152802   61989 kubeadm.go:310] 
	I0924 01:12:43.153600   61989 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 01:12:43.153699   61989 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0924 01:12:43.153757   61989 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0924 01:12:43.153808   61989 kubeadm.go:394] duration metric: took 7m57.944266289s to StartCluster
	I0924 01:12:43.153845   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:12:43.153894   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:12:43.199866   61989 cri.go:89] found id: ""
	I0924 01:12:43.199896   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.199908   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:12:43.199916   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:12:43.199975   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:12:43.235387   61989 cri.go:89] found id: ""
	I0924 01:12:43.235420   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.235432   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:12:43.235441   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:12:43.235513   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:12:43.271255   61989 cri.go:89] found id: ""
	I0924 01:12:43.271290   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.271303   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:12:43.271312   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:12:43.271380   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:12:43.305842   61989 cri.go:89] found id: ""
	I0924 01:12:43.305870   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.305882   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:12:43.305891   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:12:43.305952   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:12:43.341956   61989 cri.go:89] found id: ""
	I0924 01:12:43.341983   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.342005   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:12:43.342013   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:12:43.342093   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:12:43.376362   61989 cri.go:89] found id: ""
	I0924 01:12:43.376399   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.376421   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:12:43.376431   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:12:43.376487   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:12:43.409351   61989 cri.go:89] found id: ""
	I0924 01:12:43.409378   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.409387   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:12:43.409392   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:12:43.409459   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:12:43.442446   61989 cri.go:89] found id: ""
	I0924 01:12:43.442479   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.442487   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
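The loop above probes each expected control-plane container with `sudo crictl ps -a --quiet --name=<component>` and finds none, which is why every lookup reports an empty id and "0 containers". A hedged sketch to reproduce the same sweep on the node (a hypothetical convenience loop around the exact command shown in the log):

    # count containers per component; all zeros matches the "0 containers" lines above
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
      echo -n "$c: "; sudo crictl ps -a --quiet --name=$c | wc -l
    done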
	I0924 01:12:43.442497   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:12:43.442507   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:12:43.498980   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:12:43.499020   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:12:43.520090   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:12:43.520120   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:12:43.612212   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:12:43.612242   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:12:43.612255   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:12:43.727355   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:12:43.727395   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0924 01:12:43.770163   61989 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0924 01:12:43.770217   61989 out.go:270] * 
	W0924 01:12:43.770282   61989 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0924 01:12:43.770297   61989 out.go:270] * 
	W0924 01:12:43.771298   61989 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 01:12:43.775708   61989 out.go:201] 
	W0924 01:12:43.777139   61989 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0924 01:12:43.777186   61989 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0924 01:12:43.777214   61989 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0924 01:12:43.779580   61989 out.go:201] 
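The failed v1.20.0 start ends with kubeadm's own troubleshooting hints plus minikube's cgroup-driver suggestion and the related issue link. A hedged follow-up sketch built only from commands quoted in this log (the original `minikube start` flags for this profile are not shown here, so the retry line is just the suggested flag appended to whatever invocation was used):

    # inspect the kubelet that never answered on :10248
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet
    # list any control-plane containers CRI-O managed to start
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    # retry with the suggested cgroup driver setting
    minikube start --extra-config=kubelet.cgroup-driver=systemd
    # collect logs for the GitHub issue mentioned above
    minikube logs --file=logs.txt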
	
	
	==> CRI-O <==
	Sep 24 01:17:59 default-k8s-diff-port-465341 crio[715]: time="2024-09-24 01:17:59.389304480Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140679389276787,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c42f0851-da99-467d-8d70-c00dfd1aa014 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:17:59 default-k8s-diff-port-465341 crio[715]: time="2024-09-24 01:17:59.392310373Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0dce769d-2743-4b3e-b501-e4dca30129b7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:17:59 default-k8s-diff-port-465341 crio[715]: time="2024-09-24 01:17:59.392606032Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0dce769d-2743-4b3e-b501-e4dca30129b7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:17:59 default-k8s-diff-port-465341 crio[715]: time="2024-09-24 01:17:59.392944663Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47,PodSandboxId:f77a2b5b8dc99ddd1fb733288c586382c480f97e54d58009878cfc54644d8c4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727139900031644597,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b09ad6ef-7517-4de2-a70c-83876efd804e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05d6e2df9cf9551e8317006279d4a7af98fddbd031fe31ac663ff5fd1f64e8ca,PodSandboxId:b12edb12d460bce1ab54a2f5f339453bb4643384734c6d41f9ad5e82d4e4a3c6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727139880067836834,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b2640213-e0c5-4e24-ab47-40ae93cf2dec,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f,PodSandboxId:86c947d9cd97c3f5ea879829e09c82152a248968de28cec304c8c11661c345bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727139876902937596,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xxdh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 297fe292-94bf-468d-9e34-089c4a87429b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc,PodSandboxId:1fb37f1fc655d87bc704f8dafaa719213ad4ed13467e59f0ce1ff33ec5f77993,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727139869186919309,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nf8mp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdef3aea-b
1a8-438b-994f-c3212def9aea,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559,PodSandboxId:f77a2b5b8dc99ddd1fb733288c586382c480f97e54d58009878cfc54644d8c4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727139869142722214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b09ad6ef-7517-4de2-a70c
-83876efd804e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7,PodSandboxId:6d64cdd87d3256594767df888a8365e0e40219a467933c6e3fdbc7beda771ffd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727139865490802476,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-465341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72d01994ec812b10b4b6f
0618a626fab,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f,PodSandboxId:22c25c49da19a5d516b484f6cbc6660c499c4fa70216bedc0db7d8a0038f2ef7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727139865507065014,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-465341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 146f0c671ce4286b89865c4
c32c180fa,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2,PodSandboxId:d7121dd08f0893752f0b17dcb0af76a06da336b3d662f56979dd37cb9288837d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727139865459470410,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-465341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62f128f51a989e62ff552186fa70bbf5,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba,PodSandboxId:c14e32efc528ad38562523f3dd3c921227b3245d78f555d61e74bf01f8569273,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727139865473282611,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-465341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b513a84f02bd83f80046c0ae57535d
3b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0dce769d-2743-4b3e-b501-e4dca30129b7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:17:59 default-k8s-diff-port-465341 crio[715]: time="2024-09-24 01:17:59.428639053Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a332197f-aa94-41aa-a617-6b87ab4eb9d4 name=/runtime.v1.RuntimeService/Version
	Sep 24 01:17:59 default-k8s-diff-port-465341 crio[715]: time="2024-09-24 01:17:59.428758958Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a332197f-aa94-41aa-a617-6b87ab4eb9d4 name=/runtime.v1.RuntimeService/Version
	Sep 24 01:17:59 default-k8s-diff-port-465341 crio[715]: time="2024-09-24 01:17:59.430023499Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=76156c0f-484e-4d7a-aecf-8e9e7d5e8bb5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:17:59 default-k8s-diff-port-465341 crio[715]: time="2024-09-24 01:17:59.430405821Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140679430383766,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=76156c0f-484e-4d7a-aecf-8e9e7d5e8bb5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:17:59 default-k8s-diff-port-465341 crio[715]: time="2024-09-24 01:17:59.431112959Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=82334278-02bd-4340-ba9f-1d87d9669450 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:17:59 default-k8s-diff-port-465341 crio[715]: time="2024-09-24 01:17:59.431167057Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=82334278-02bd-4340-ba9f-1d87d9669450 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:17:59 default-k8s-diff-port-465341 crio[715]: time="2024-09-24 01:17:59.431355260Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47,PodSandboxId:f77a2b5b8dc99ddd1fb733288c586382c480f97e54d58009878cfc54644d8c4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727139900031644597,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b09ad6ef-7517-4de2-a70c-83876efd804e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05d6e2df9cf9551e8317006279d4a7af98fddbd031fe31ac663ff5fd1f64e8ca,PodSandboxId:b12edb12d460bce1ab54a2f5f339453bb4643384734c6d41f9ad5e82d4e4a3c6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727139880067836834,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b2640213-e0c5-4e24-ab47-40ae93cf2dec,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f,PodSandboxId:86c947d9cd97c3f5ea879829e09c82152a248968de28cec304c8c11661c345bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727139876902937596,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xxdh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 297fe292-94bf-468d-9e34-089c4a87429b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc,PodSandboxId:1fb37f1fc655d87bc704f8dafaa719213ad4ed13467e59f0ce1ff33ec5f77993,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727139869186919309,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nf8mp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdef3aea-b
1a8-438b-994f-c3212def9aea,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559,PodSandboxId:f77a2b5b8dc99ddd1fb733288c586382c480f97e54d58009878cfc54644d8c4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727139869142722214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b09ad6ef-7517-4de2-a70c
-83876efd804e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7,PodSandboxId:6d64cdd87d3256594767df888a8365e0e40219a467933c6e3fdbc7beda771ffd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727139865490802476,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-465341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72d01994ec812b10b4b6f
0618a626fab,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f,PodSandboxId:22c25c49da19a5d516b484f6cbc6660c499c4fa70216bedc0db7d8a0038f2ef7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727139865507065014,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-465341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 146f0c671ce4286b89865c4
c32c180fa,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2,PodSandboxId:d7121dd08f0893752f0b17dcb0af76a06da336b3d662f56979dd37cb9288837d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727139865459470410,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-465341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62f128f51a989e62ff552186fa70bbf5,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba,PodSandboxId:c14e32efc528ad38562523f3dd3c921227b3245d78f555d61e74bf01f8569273,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727139865473282611,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-465341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b513a84f02bd83f80046c0ae57535d
3b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=82334278-02bd-4340-ba9f-1d87d9669450 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:17:59 default-k8s-diff-port-465341 crio[715]: time="2024-09-24 01:17:59.470606010Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=553ea4a9-7f11-438a-b3c9-14cffb6c9856 name=/runtime.v1.RuntimeService/Version
	Sep 24 01:17:59 default-k8s-diff-port-465341 crio[715]: time="2024-09-24 01:17:59.470678724Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=553ea4a9-7f11-438a-b3c9-14cffb6c9856 name=/runtime.v1.RuntimeService/Version
	Sep 24 01:17:59 default-k8s-diff-port-465341 crio[715]: time="2024-09-24 01:17:59.471735924Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=400ef639-46ef-4495-9b8d-998094d93474 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:17:59 default-k8s-diff-port-465341 crio[715]: time="2024-09-24 01:17:59.472256557Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140679472232927,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=400ef639-46ef-4495-9b8d-998094d93474 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:17:59 default-k8s-diff-port-465341 crio[715]: time="2024-09-24 01:17:59.472891991Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=706ce9d3-00bd-47f5-8900-fd090472382c name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:17:59 default-k8s-diff-port-465341 crio[715]: time="2024-09-24 01:17:59.472958023Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=706ce9d3-00bd-47f5-8900-fd090472382c name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:17:59 default-k8s-diff-port-465341 crio[715]: time="2024-09-24 01:17:59.473182614Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47,PodSandboxId:f77a2b5b8dc99ddd1fb733288c586382c480f97e54d58009878cfc54644d8c4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727139900031644597,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b09ad6ef-7517-4de2-a70c-83876efd804e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05d6e2df9cf9551e8317006279d4a7af98fddbd031fe31ac663ff5fd1f64e8ca,PodSandboxId:b12edb12d460bce1ab54a2f5f339453bb4643384734c6d41f9ad5e82d4e4a3c6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727139880067836834,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b2640213-e0c5-4e24-ab47-40ae93cf2dec,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f,PodSandboxId:86c947d9cd97c3f5ea879829e09c82152a248968de28cec304c8c11661c345bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727139876902937596,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xxdh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 297fe292-94bf-468d-9e34-089c4a87429b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc,PodSandboxId:1fb37f1fc655d87bc704f8dafaa719213ad4ed13467e59f0ce1ff33ec5f77993,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727139869186919309,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nf8mp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdef3aea-b
1a8-438b-994f-c3212def9aea,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559,PodSandboxId:f77a2b5b8dc99ddd1fb733288c586382c480f97e54d58009878cfc54644d8c4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727139869142722214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b09ad6ef-7517-4de2-a70c
-83876efd804e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7,PodSandboxId:6d64cdd87d3256594767df888a8365e0e40219a467933c6e3fdbc7beda771ffd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727139865490802476,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-465341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72d01994ec812b10b4b6f
0618a626fab,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f,PodSandboxId:22c25c49da19a5d516b484f6cbc6660c499c4fa70216bedc0db7d8a0038f2ef7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727139865507065014,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-465341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 146f0c671ce4286b89865c4
c32c180fa,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2,PodSandboxId:d7121dd08f0893752f0b17dcb0af76a06da336b3d662f56979dd37cb9288837d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727139865459470410,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-465341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62f128f51a989e62ff552186fa70bbf5,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba,PodSandboxId:c14e32efc528ad38562523f3dd3c921227b3245d78f555d61e74bf01f8569273,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727139865473282611,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-465341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b513a84f02bd83f80046c0ae57535d
3b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=706ce9d3-00bd-47f5-8900-fd090472382c name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:17:59 default-k8s-diff-port-465341 crio[715]: time="2024-09-24 01:17:59.506394653Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2739f004-4d9e-4006-9501-171e8fe4f6a2 name=/runtime.v1.RuntimeService/Version
	Sep 24 01:17:59 default-k8s-diff-port-465341 crio[715]: time="2024-09-24 01:17:59.506479791Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2739f004-4d9e-4006-9501-171e8fe4f6a2 name=/runtime.v1.RuntimeService/Version
	Sep 24 01:17:59 default-k8s-diff-port-465341 crio[715]: time="2024-09-24 01:17:59.507872698Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=be482274-edb6-4c66-8897-4effb428ecc1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:17:59 default-k8s-diff-port-465341 crio[715]: time="2024-09-24 01:17:59.508441856Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140679508411119,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=be482274-edb6-4c66-8897-4effb428ecc1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:17:59 default-k8s-diff-port-465341 crio[715]: time="2024-09-24 01:17:59.509049184Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5861edef-3207-4be3-aa8d-e07005c0a764 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:17:59 default-k8s-diff-port-465341 crio[715]: time="2024-09-24 01:17:59.509112115Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5861edef-3207-4be3-aa8d-e07005c0a764 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:17:59 default-k8s-diff-port-465341 crio[715]: time="2024-09-24 01:17:59.509307120Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47,PodSandboxId:f77a2b5b8dc99ddd1fb733288c586382c480f97e54d58009878cfc54644d8c4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727139900031644597,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b09ad6ef-7517-4de2-a70c-83876efd804e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05d6e2df9cf9551e8317006279d4a7af98fddbd031fe31ac663ff5fd1f64e8ca,PodSandboxId:b12edb12d460bce1ab54a2f5f339453bb4643384734c6d41f9ad5e82d4e4a3c6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727139880067836834,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b2640213-e0c5-4e24-ab47-40ae93cf2dec,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f,PodSandboxId:86c947d9cd97c3f5ea879829e09c82152a248968de28cec304c8c11661c345bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727139876902937596,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xxdh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 297fe292-94bf-468d-9e34-089c4a87429b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc,PodSandboxId:1fb37f1fc655d87bc704f8dafaa719213ad4ed13467e59f0ce1ff33ec5f77993,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727139869186919309,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nf8mp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdef3aea-b
1a8-438b-994f-c3212def9aea,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559,PodSandboxId:f77a2b5b8dc99ddd1fb733288c586382c480f97e54d58009878cfc54644d8c4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727139869142722214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b09ad6ef-7517-4de2-a70c
-83876efd804e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7,PodSandboxId:6d64cdd87d3256594767df888a8365e0e40219a467933c6e3fdbc7beda771ffd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727139865490802476,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-465341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72d01994ec812b10b4b6f
0618a626fab,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f,PodSandboxId:22c25c49da19a5d516b484f6cbc6660c499c4fa70216bedc0db7d8a0038f2ef7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727139865507065014,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-465341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 146f0c671ce4286b89865c4
c32c180fa,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2,PodSandboxId:d7121dd08f0893752f0b17dcb0af76a06da336b3d662f56979dd37cb9288837d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727139865459470410,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-465341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62f128f51a989e62ff552186fa70bbf5,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba,PodSandboxId:c14e32efc528ad38562523f3dd3c921227b3245d78f555d61e74bf01f8569273,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727139865473282611,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-465341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b513a84f02bd83f80046c0ae57535d
3b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5861edef-3207-4be3-aa8d-e07005c0a764 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7b621e1c0feb5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   f77a2b5b8dc99       storage-provisioner
	05d6e2df9cf95       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   b12edb12d460b       busybox
	ddbd1006bd609       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago      Running             coredns                   1                   86c947d9cd97c       coredns-7c65d6cfc9-xxdh2
	f31b7aed1cdf7       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      13 minutes ago      Running             kube-proxy                1                   1fb37f1fc655d       kube-proxy-nf8mp
	e76f05331da2e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   f77a2b5b8dc99       storage-provisioner
	58d05b91989bd       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      13 minutes ago      Running             kube-scheduler            1                   22c25c49da19a       kube-scheduler-default-k8s-diff-port-465341
	306da3fd311af       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      13 minutes ago      Running             kube-apiserver            1                   6d64cdd87d325       kube-apiserver-default-k8s-diff-port-465341
	55e01b5780ebe       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      13 minutes ago      Running             kube-controller-manager   1                   c14e32efc528a       kube-controller-manager-default-k8s-diff-port-465341
	2c9f89868c713       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   d7121dd08f089       etcd-default-k8s-diff-port-465341
	
	
	==> coredns [ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:54002 - 53945 "HINFO IN 8184409097673576607.808292174949133715. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.008897981s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-465341
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-465341
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c
	                    minikube.k8s.io/name=default-k8s-diff-port-465341
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_24T00_57_12_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 00:57:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-465341
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 01:17:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 01:15:09 +0000   Tue, 24 Sep 2024 00:57:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 01:15:09 +0000   Tue, 24 Sep 2024 00:57:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 01:15:09 +0000   Tue, 24 Sep 2024 00:57:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 01:15:09 +0000   Tue, 24 Sep 2024 01:04:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.186
	  Hostname:    default-k8s-diff-port-465341
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8c05df9f007d4c048ac491600582d36b
	  System UUID:                8c05df9f-007d-4c04-8ac4-91600582d36b
	  Boot ID:                    b433b690-8283-4013-993b-3f29777e81d0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-7c65d6cfc9-xxdh2                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                 etcd-default-k8s-diff-port-465341                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         20m
	  kube-system                 kube-apiserver-default-k8s-diff-port-465341             250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-465341    200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-nf8mp                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-default-k8s-diff-port-465341             100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 metrics-server-6867b74b74-jtx6r                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 20m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node default-k8s-diff-port-465341 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node default-k8s-diff-port-465341 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node default-k8s-diff-port-465341 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  20m                kubelet          Node default-k8s-diff-port-465341 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m                kubelet          Node default-k8s-diff-port-465341 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m                kubelet          Node default-k8s-diff-port-465341 status is now: NodeHasSufficientPID
	  Normal  NodeReady                20m                kubelet          Node default-k8s-diff-port-465341 status is now: NodeReady
	  Normal  RegisteredNode           20m                node-controller  Node default-k8s-diff-port-465341 event: Registered Node default-k8s-diff-port-465341 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-465341 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-465341 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-465341 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-465341 event: Registered Node default-k8s-diff-port-465341 in Controller
	
	
	==> dmesg <==
	[Sep24 01:03] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051499] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037024] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Sep24 01:04] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.905767] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.545343] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.580014] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.071407] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.077513] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.177433] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +0.151348] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +0.313267] systemd-fstab-generator[704]: Ignoring "noauto" option for root device
	[  +4.122634] systemd-fstab-generator[799]: Ignoring "noauto" option for root device
	[  +1.935499] systemd-fstab-generator[921]: Ignoring "noauto" option for root device
	[  +0.070061] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.517159] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.402509] systemd-fstab-generator[1561]: Ignoring "noauto" option for root device
	[  +1.351350] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.268876] kauditd_printk_skb: 44 callbacks suppressed
	
	
	==> etcd [2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2] <==
	{"level":"info","ts":"2024-09-24T01:04:27.056730Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-24T01:04:27.057048Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-24T01:04:27.057703Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.186:2379"}
	{"level":"warn","ts":"2024-09-24T01:04:43.607454Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.350842ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5844707089249008802 > lease_revoke:<id:511c92218831531e>","response":"size:29"}
	{"level":"info","ts":"2024-09-24T01:04:44.191514Z","caller":"traceutil/trace.go:171","msg":"trace[1308302914] transaction","detail":"{read_only:false; response_revision:590; number_of_response:1; }","duration":"310.418098ms","start":"2024-09-24T01:04:43.881066Z","end":"2024-09-24T01:04:44.191484Z","steps":["trace[1308302914] 'process raft request'  (duration: 310.280769ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-24T01:04:44.192997Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-24T01:04:43.881047Z","time spent":"310.639561ms","remote":"127.0.0.1:52722","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4563,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-465341\" mod_revision:472 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-465341\" value_size:4485 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-465341\" > >"}
	{"level":"warn","ts":"2024-09-24T01:04:45.003503Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"768.905999ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-24T01:04:45.003583Z","caller":"traceutil/trace.go:171","msg":"trace[43365748] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:590; }","duration":"769.007868ms","start":"2024-09-24T01:04:44.234560Z","end":"2024-09-24T01:04:45.003568Z","steps":["trace[43365748] 'range keys from in-memory index tree'  (duration: 768.847234ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-24T01:04:45.003761Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"382.081309ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5844707089249008808 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-465341\" mod_revision:590 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-465341\" value_size:4293 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-465341\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-09-24T01:04:45.003919Z","caller":"traceutil/trace.go:171","msg":"trace[1813272735] linearizableReadLoop","detail":"{readStateIndex:624; appliedIndex:623; }","duration":"739.421144ms","start":"2024-09-24T01:04:44.264474Z","end":"2024-09-24T01:04:45.003895Z","steps":["trace[1813272735] 'read index received'  (duration: 356.967443ms)","trace[1813272735] 'applied index is now lower than readState.Index'  (duration: 382.452439ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-24T01:04:45.004013Z","caller":"traceutil/trace.go:171","msg":"trace[437900493] transaction","detail":"{read_only:false; response_revision:591; number_of_response:1; }","duration":"801.902561ms","start":"2024-09-24T01:04:44.202103Z","end":"2024-09-24T01:04:45.004005Z","steps":["trace[437900493] 'process raft request'  (duration: 419.321249ms)","trace[437900493] 'compare'  (duration: 381.982221ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-24T01:04:45.004058Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"516.861791ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-24T01:04:45.004127Z","caller":"traceutil/trace.go:171","msg":"trace[146088811] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:591; }","duration":"516.924696ms","start":"2024-09-24T01:04:44.487181Z","end":"2024-09-24T01:04:45.004105Z","steps":["trace[146088811] 'agreement among raft nodes before linearized reading'  (duration: 516.838013ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-24T01:04:45.004161Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-24T01:04:44.487138Z","time spent":"517.012755ms","remote":"127.0.0.1:52516","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-09-24T01:04:45.004167Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-24T01:04:44.202082Z","time spent":"801.974951ms","remote":"127.0.0.1:52722","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4371,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-465341\" mod_revision:590 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-465341\" value_size:4293 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-465341\" > >"}
	{"level":"warn","ts":"2024-09-24T01:04:45.004397Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"739.917393ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-465341\" ","response":"range_response_count:1 size:4386"}
	{"level":"info","ts":"2024-09-24T01:04:45.004447Z","caller":"traceutil/trace.go:171","msg":"trace[364055663] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-465341; range_end:; response_count:1; response_revision:591; }","duration":"739.965069ms","start":"2024-09-24T01:04:44.264469Z","end":"2024-09-24T01:04:45.004434Z","steps":["trace[364055663] 'agreement among raft nodes before linearized reading'  (duration: 739.849087ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-24T01:04:45.004490Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-24T01:04:44.264435Z","time spent":"740.045168ms","remote":"127.0.0.1:52722","response type":"/etcdserverpb.KV/Range","request count":0,"request size":72,"response count":1,"response size":4410,"request content":"key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-465341\" "}
	{"level":"info","ts":"2024-09-24T01:05:04.460321Z","caller":"traceutil/trace.go:171","msg":"trace[71207225] transaction","detail":"{read_only:false; response_revision:614; number_of_response:1; }","duration":"264.304475ms","start":"2024-09-24T01:05:04.195998Z","end":"2024-09-24T01:05:04.460302Z","steps":["trace[71207225] 'process raft request'  (duration: 264.17232ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-24T01:05:04.460914Z","caller":"traceutil/trace.go:171","msg":"trace[1745977174] linearizableReadLoop","detail":"{readStateIndex:651; appliedIndex:651; }","duration":"226.723459ms","start":"2024-09-24T01:05:04.234174Z","end":"2024-09-24T01:05:04.460898Z","steps":["trace[1745977174] 'read index received'  (duration: 226.717597ms)","trace[1745977174] 'applied index is now lower than readState.Index'  (duration: 4.78µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-24T01:05:04.461007Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"226.816224ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-24T01:05:04.461032Z","caller":"traceutil/trace.go:171","msg":"trace[679521091] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:614; }","duration":"226.8557ms","start":"2024-09-24T01:05:04.234169Z","end":"2024-09-24T01:05:04.461024Z","steps":["trace[679521091] 'agreement among raft nodes before linearized reading'  (duration: 226.800549ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-24T01:14:27.088119Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":843}
	{"level":"info","ts":"2024-09-24T01:14:27.098988Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":843,"took":"10.447754ms","hash":900529936,"current-db-size-bytes":2703360,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2703360,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-09-24T01:14:27.099074Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":900529936,"revision":843,"compact-revision":-1}
	
	
	==> kernel <==
	 01:17:59 up 14 min,  0 users,  load average: 0.12, 0.15, 0.11
	Linux default-k8s-diff-port-465341 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7] <==
	W0924 01:14:29.352496       1 handler_proxy.go:99] no RequestInfo found in the context
	E0924 01:14:29.352565       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0924 01:14:29.353703       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0924 01:14:29.353820       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0924 01:15:29.354205       1 handler_proxy.go:99] no RequestInfo found in the context
	E0924 01:15:29.354270       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0924 01:15:29.354311       1 handler_proxy.go:99] no RequestInfo found in the context
	E0924 01:15:29.354341       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0924 01:15:29.355740       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0924 01:15:29.355888       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0924 01:17:29.356675       1 handler_proxy.go:99] no RequestInfo found in the context
	W0924 01:17:29.356676       1 handler_proxy.go:99] no RequestInfo found in the context
	E0924 01:17:29.357128       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0924 01:17:29.357128       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0924 01:17:29.358342       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0924 01:17:29.358376       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba] <==
	E0924 01:12:33.968914       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:12:34.431018       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 01:13:03.975257       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:13:04.438571       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 01:13:33.981273       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:13:34.446075       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 01:14:03.987488       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:14:04.453830       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 01:14:33.994227       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:14:34.462001       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 01:15:04.001359       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:15:04.469020       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0924 01:15:09.825964       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-465341"
	E0924 01:15:34.007568       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:15:34.476130       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0924 01:15:41.827879       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="241.677µs"
	I0924 01:15:54.824173       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="111.02µs"
	E0924 01:16:04.013543       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:16:04.485344       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 01:16:34.019643       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:16:34.492694       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 01:17:04.025389       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:17:04.500342       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 01:17:34.031314       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:17:34.508123       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0924 01:04:29.420543       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0924 01:04:29.429430       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.186"]
	E0924 01:04:29.429635       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0924 01:04:29.473443       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0924 01:04:29.473488       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0924 01:04:29.473512       1 server_linux.go:169] "Using iptables Proxier"
	I0924 01:04:29.475745       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0924 01:04:29.476194       1 server.go:483] "Version info" version="v1.31.1"
	I0924 01:04:29.476219       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 01:04:29.477696       1 config.go:199] "Starting service config controller"
	I0924 01:04:29.477736       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0924 01:04:29.477759       1 config.go:105] "Starting endpoint slice config controller"
	I0924 01:04:29.477795       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0924 01:04:29.478336       1 config.go:328] "Starting node config controller"
	I0924 01:04:29.478358       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0924 01:04:29.578303       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0924 01:04:29.578415       1 shared_informer.go:320] Caches are synced for service config
	I0924 01:04:29.578703       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f] <==
	I0924 01:04:26.452076       1 serving.go:386] Generated self-signed cert in-memory
	W0924 01:04:28.303437       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0924 01:04:28.303649       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0924 01:04:28.303715       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0924 01:04:28.303744       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0924 01:04:28.401171       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0924 01:04:28.401280       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 01:04:28.407036       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0924 01:04:28.407176       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0924 01:04:28.408155       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0924 01:04:28.407232       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0924 01:04:28.509662       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 24 01:16:47 default-k8s-diff-port-465341 kubelet[928]: E0924 01:16:47.810488     928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-jtx6r" podUID="d83599a7-f77d-4fbb-b76f-67d33c60b4a6"
	Sep 24 01:16:53 default-k8s-diff-port-465341 kubelet[928]: E0924 01:16:53.965056     928 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140613964400525,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:16:53 default-k8s-diff-port-465341 kubelet[928]: E0924 01:16:53.965102     928 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140613964400525,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:17:01 default-k8s-diff-port-465341 kubelet[928]: E0924 01:17:01.810542     928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-jtx6r" podUID="d83599a7-f77d-4fbb-b76f-67d33c60b4a6"
	Sep 24 01:17:03 default-k8s-diff-port-465341 kubelet[928]: E0924 01:17:03.967332     928 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140623966988133,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:17:03 default-k8s-diff-port-465341 kubelet[928]: E0924 01:17:03.967797     928 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140623966988133,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:17:13 default-k8s-diff-port-465341 kubelet[928]: E0924 01:17:13.969511     928 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140633969196362,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:17:13 default-k8s-diff-port-465341 kubelet[928]: E0924 01:17:13.969539     928 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140633969196362,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:17:16 default-k8s-diff-port-465341 kubelet[928]: E0924 01:17:16.810082     928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-jtx6r" podUID="d83599a7-f77d-4fbb-b76f-67d33c60b4a6"
	Sep 24 01:17:23 default-k8s-diff-port-465341 kubelet[928]: E0924 01:17:23.824106     928 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 24 01:17:23 default-k8s-diff-port-465341 kubelet[928]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 24 01:17:23 default-k8s-diff-port-465341 kubelet[928]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 24 01:17:23 default-k8s-diff-port-465341 kubelet[928]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 24 01:17:23 default-k8s-diff-port-465341 kubelet[928]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 24 01:17:23 default-k8s-diff-port-465341 kubelet[928]: E0924 01:17:23.971742     928 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140643971460001,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:17:23 default-k8s-diff-port-465341 kubelet[928]: E0924 01:17:23.971821     928 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140643971460001,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:17:29 default-k8s-diff-port-465341 kubelet[928]: E0924 01:17:29.810816     928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-jtx6r" podUID="d83599a7-f77d-4fbb-b76f-67d33c60b4a6"
	Sep 24 01:17:33 default-k8s-diff-port-465341 kubelet[928]: E0924 01:17:33.973312     928 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140653972911459,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:17:33 default-k8s-diff-port-465341 kubelet[928]: E0924 01:17:33.973691     928 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140653972911459,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:17:40 default-k8s-diff-port-465341 kubelet[928]: E0924 01:17:40.810344     928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-jtx6r" podUID="d83599a7-f77d-4fbb-b76f-67d33c60b4a6"
	Sep 24 01:17:43 default-k8s-diff-port-465341 kubelet[928]: E0924 01:17:43.979204     928 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140663975981701,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:17:43 default-k8s-diff-port-465341 kubelet[928]: E0924 01:17:43.979262     928 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140663975981701,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:17:51 default-k8s-diff-port-465341 kubelet[928]: E0924 01:17:51.810049     928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-jtx6r" podUID="d83599a7-f77d-4fbb-b76f-67d33c60b4a6"
	Sep 24 01:17:53 default-k8s-diff-port-465341 kubelet[928]: E0924 01:17:53.981082     928 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140673980607785,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:17:53 default-k8s-diff-port-465341 kubelet[928]: E0924 01:17:53.981132     928 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140673980607785,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47] <==
	I0924 01:05:00.141302       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0924 01:05:00.151479       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0924 01:05:00.152326       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0924 01:05:00.165372       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0924 01:05:00.165630       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-465341_53a896ee-5b4c-4683-8f2e-a9fa6b1638d4!
	I0924 01:05:00.166965       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"58543f7e-6980-4184-8e2e-1690eb4b49fa", APIVersion:"v1", ResourceVersion:"606", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-465341_53a896ee-5b4c-4683-8f2e-a9fa6b1638d4 became leader
	I0924 01:05:00.266450       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-465341_53a896ee-5b4c-4683-8f2e-a9fa6b1638d4!
	
	
	==> storage-provisioner [e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559] <==
	I0924 01:04:29.231639       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0924 01:04:59.234591       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-465341 -n default-k8s-diff-port-465341
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-465341 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-jtx6r
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-465341 describe pod metrics-server-6867b74b74-jtx6r
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-465341 describe pod metrics-server-6867b74b74-jtx6r: exit status 1 (65.70415ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-jtx6r" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-465341 describe pod metrics-server-6867b74b74-jtx6r: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.38s)
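For local triage, the two post-mortem probes above can be repeated by hand. A minimal sketch (a hedged example, not part of the harness; the context, namespace, and pod name are taken from this run's log and will differ elsewhere):

	# List pods that are not Running in any namespace, mirroring the helpers_test.go field selector.
	kubectl --context default-k8s-diff-port-465341 get po -A --field-selector=status.phase!=Running
	# Describe the flagged pod, adding -n kube-system since the kubelet log places metrics-server there;
	# the NotFound above likely stems from describing it without a namespace.
	kubectl --context default-k8s-diff-port-465341 -n kube-system describe pod metrics-server-6867b74b74-jtx6r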

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.39s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0924 01:10:01.435428   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/functional-666615/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:10:43.332659   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-650507 -n embed-certs-650507
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-09-24 01:18:11.533394463 +0000 UTC m=+6032.964476902
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
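Per the wait at start_stop_delete_test.go:274, the test polls namespace kubernetes-dashboard for pods labelled k8s-app=kubernetes-dashboard for up to 9m0s. A roughly equivalent manual check against this profile (a hedged sketch, not the harness's own code):

	# Snapshot the dashboard pods the wait is looking for.
	kubectl --context embed-certs-650507 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# Or block until they report Ready, with the same 9m budget the test uses.
	kubectl --context embed-certs-650507 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m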
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-650507 -n embed-certs-650507
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-650507 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-650507 logs -n 25: (2.211617959s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p old-k8s-version-171598                              | old-k8s-version-171598       | jenkins | v1.34.0 | 24 Sep 24 00:54 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-075175                              | stopped-upgrade-075175       | jenkins | v1.34.0 | 24 Sep 24 00:54 UTC | 24 Sep 24 00:55 UTC |
	| start   | -p no-preload-674057                                   | no-preload-674057            | jenkins | v1.34.0 | 24 Sep 24 00:55 UTC | 24 Sep 24 00:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-619300                           | kubernetes-upgrade-619300    | jenkins | v1.34.0 | 24 Sep 24 00:55 UTC | 24 Sep 24 00:55 UTC |
	| start   | -p embed-certs-650507                                  | embed-certs-650507           | jenkins | v1.34.0 | 24 Sep 24 00:55 UTC | 24 Sep 24 00:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-811247                              | cert-expiration-811247       | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC | 24 Sep 24 00:56 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-674057             | no-preload-674057            | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC | 24 Sep 24 00:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-674057                                   | no-preload-674057            | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-811247                              | cert-expiration-811247       | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC | 24 Sep 24 00:56 UTC |
	| delete  | -p                                                     | disable-driver-mounts-319683 | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC | 24 Sep 24 00:56 UTC |
	|         | disable-driver-mounts-319683                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-465341 | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC | 24 Sep 24 00:57 UTC |
	|         | default-k8s-diff-port-465341                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-650507            | embed-certs-650507           | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC | 24 Sep 24 00:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-650507                                  | embed-certs-650507           | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-465341  | default-k8s-diff-port-465341 | jenkins | v1.34.0 | 24 Sep 24 00:57 UTC | 24 Sep 24 00:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-465341 | jenkins | v1.34.0 | 24 Sep 24 00:57 UTC |                     |
	|         | default-k8s-diff-port-465341                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-674057                  | no-preload-674057            | jenkins | v1.34.0 | 24 Sep 24 00:58 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-674057                                   | no-preload-674057            | jenkins | v1.34.0 | 24 Sep 24 00:58 UTC | 24 Sep 24 01:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-650507                 | embed-certs-650507           | jenkins | v1.34.0 | 24 Sep 24 00:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-171598        | old-k8s-version-171598       | jenkins | v1.34.0 | 24 Sep 24 00:59 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-650507                                  | embed-certs-650507           | jenkins | v1.34.0 | 24 Sep 24 00:59 UTC | 24 Sep 24 01:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-465341       | default-k8s-diff-port-465341 | jenkins | v1.34.0 | 24 Sep 24 00:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-465341 | jenkins | v1.34.0 | 24 Sep 24 01:00 UTC | 24 Sep 24 01:08 UTC |
	|         | default-k8s-diff-port-465341                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-171598                              | old-k8s-version-171598       | jenkins | v1.34.0 | 24 Sep 24 01:00 UTC | 24 Sep 24 01:00 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-171598             | old-k8s-version-171598       | jenkins | v1.34.0 | 24 Sep 24 01:00 UTC | 24 Sep 24 01:00 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-171598                              | old-k8s-version-171598       | jenkins | v1.34.0 | 24 Sep 24 01:00 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 01:00:40
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 01:00:40.983605   61989 out.go:345] Setting OutFile to fd 1 ...
	I0924 01:00:40.983716   61989 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 01:00:40.983722   61989 out.go:358] Setting ErrFile to fd 2...
	I0924 01:00:40.983728   61989 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 01:00:40.983918   61989 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7623/.minikube/bin
	I0924 01:00:40.984500   61989 out.go:352] Setting JSON to false
	I0924 01:00:40.985412   61989 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6185,"bootTime":1727133456,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0924 01:00:40.985513   61989 start.go:139] virtualization: kvm guest
	I0924 01:00:40.987848   61989 out.go:177] * [old-k8s-version-171598] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0924 01:00:40.989366   61989 out.go:177]   - MINIKUBE_LOCATION=19696
	I0924 01:00:40.989467   61989 notify.go:220] Checking for updates...
	I0924 01:00:40.992462   61989 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 01:00:40.994144   61989 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 01:00:40.995782   61989 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7623/.minikube
	I0924 01:00:40.997503   61989 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0924 01:00:40.999038   61989 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 01:00:41.000959   61989 config.go:182] Loaded profile config "old-k8s-version-171598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0924 01:00:41.001315   61989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:00:41.001388   61989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:00:41.017304   61989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41055
	I0924 01:00:41.017751   61989 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:00:41.018320   61989 main.go:141] libmachine: Using API Version  1
	I0924 01:00:41.018355   61989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:00:41.018708   61989 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:00:41.018964   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:00:41.021075   61989 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0924 01:00:41.022764   61989 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 01:00:41.023156   61989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:00:41.023204   61989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:00:41.038764   61989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40545
	I0924 01:00:41.039238   61989 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:00:41.039828   61989 main.go:141] libmachine: Using API Version  1
	I0924 01:00:41.039856   61989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:00:41.040272   61989 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:00:41.040569   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:00:41.078622   61989 out.go:177] * Using the kvm2 driver based on existing profile
	I0924 01:00:41.079930   61989 start.go:297] selected driver: kvm2
	I0924 01:00:41.079945   61989 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-171598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-171598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 01:00:41.080076   61989 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 01:00:41.080841   61989 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 01:00:41.080927   61989 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19696-7623/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0924 01:00:41.096851   61989 install.go:137] /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0924 01:00:41.097306   61989 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 01:00:41.097345   61989 cni.go:84] Creating CNI manager for ""
	I0924 01:00:41.097410   61989 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:00:41.097465   61989 start.go:340] cluster config:
	{Name:old-k8s-version-171598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-171598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 01:00:41.097610   61989 iso.go:125] acquiring lock: {Name:mk8a983e49e920eac32e0f79c34143c3d478115e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 01:00:41.099797   61989 out.go:177] * Starting "old-k8s-version-171598" primary control-plane node in "old-k8s-version-171598" cluster
	I0924 01:00:39.376584   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:00:41.101644   61989 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0924 01:00:41.101691   61989 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0924 01:00:41.101704   61989 cache.go:56] Caching tarball of preloaded images
	I0924 01:00:41.101801   61989 preload.go:172] Found /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0924 01:00:41.101816   61989 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0924 01:00:41.101922   61989 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/config.json ...
	I0924 01:00:41.102126   61989 start.go:360] acquireMachinesLock for old-k8s-version-171598: {Name:mkdd0eb053efc5a47fd01c8410d7f603dccb8d0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 01:00:45.456606   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:00:48.528618   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:00:54.608639   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:00:57.680645   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:03.760641   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:06.832676   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:12.912635   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:15.984629   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:22.064669   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:25.136609   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:31.216643   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:34.288667   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:40.368636   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:43.440700   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:49.520634   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:52.592658   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:58.672637   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:01.744679   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:07.824597   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:10.896693   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:16.976656   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:20.048675   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:26.128638   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:29.200595   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:35.280645   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:38.352665   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:44.432606   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:47.504721   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:53.584645   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:56.656617   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:03:02.736686   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:03:05.808671   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:03:11.888586   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:03:14.960688   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:03:21.040639   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:03:24.112705   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:03:30.192631   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:03:33.264655   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:03:36.269218   61323 start.go:364] duration metric: took 4m25.932369998s to acquireMachinesLock for "embed-certs-650507"
	I0924 01:03:36.269290   61323 start.go:96] Skipping create...Using existing machine configuration
	I0924 01:03:36.269298   61323 fix.go:54] fixHost starting: 
	I0924 01:03:36.269661   61323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:03:36.269714   61323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:03:36.285429   61323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45085
	I0924 01:03:36.285943   61323 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:03:36.286516   61323 main.go:141] libmachine: Using API Version  1
	I0924 01:03:36.286557   61323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:03:36.286885   61323 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:03:36.287078   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:03:36.287213   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetState
	I0924 01:03:36.288895   61323 fix.go:112] recreateIfNeeded on embed-certs-650507: state=Stopped err=<nil>
	I0924 01:03:36.288917   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	W0924 01:03:36.289113   61323 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 01:03:36.291435   61323 out.go:177] * Restarting existing kvm2 VM for "embed-certs-650507" ...
	I0924 01:03:36.266390   61070 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 01:03:36.266435   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetMachineName
	I0924 01:03:36.266788   61070 buildroot.go:166] provisioning hostname "no-preload-674057"
	I0924 01:03:36.266816   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetMachineName
	I0924 01:03:36.267022   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:03:36.269105   61070 machine.go:96] duration metric: took 4m37.426687547s to provisionDockerMachine
	I0924 01:03:36.269142   61070 fix.go:56] duration metric: took 4m37.448766856s for fixHost
	I0924 01:03:36.269148   61070 start.go:83] releasing machines lock for "no-preload-674057", held for 4m37.448847609s
	W0924 01:03:36.269167   61070 start.go:714] error starting host: provision: host is not running
	W0924 01:03:36.269264   61070 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0924 01:03:36.269274   61070 start.go:729] Will try again in 5 seconds ...
	I0924 01:03:36.293006   61323 main.go:141] libmachine: (embed-certs-650507) Calling .Start
	I0924 01:03:36.293199   61323 main.go:141] libmachine: (embed-certs-650507) Ensuring networks are active...
	I0924 01:03:36.294032   61323 main.go:141] libmachine: (embed-certs-650507) Ensuring network default is active
	I0924 01:03:36.294359   61323 main.go:141] libmachine: (embed-certs-650507) Ensuring network mk-embed-certs-650507 is active
	I0924 01:03:36.294718   61323 main.go:141] libmachine: (embed-certs-650507) Getting domain xml...
	I0924 01:03:36.295407   61323 main.go:141] libmachine: (embed-certs-650507) Creating domain...
	I0924 01:03:37.516049   61323 main.go:141] libmachine: (embed-certs-650507) Waiting to get IP...
	I0924 01:03:37.516959   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:37.517374   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:37.517443   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:37.517352   62594 retry.go:31] will retry after 278.072635ms: waiting for machine to come up
	I0924 01:03:37.796796   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:37.797276   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:37.797301   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:37.797242   62594 retry.go:31] will retry after 387.413297ms: waiting for machine to come up
	I0924 01:03:38.185869   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:38.186239   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:38.186258   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:38.186193   62594 retry.go:31] will retry after 363.798568ms: waiting for machine to come up
	I0924 01:03:38.551772   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:38.552181   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:38.552221   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:38.552122   62594 retry.go:31] will retry after 392.798012ms: waiting for machine to come up
	I0924 01:03:38.946523   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:38.947069   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:38.947097   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:38.947018   62594 retry.go:31] will retry after 541.413772ms: waiting for machine to come up
	I0924 01:03:39.489873   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:39.490278   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:39.490307   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:39.490226   62594 retry.go:31] will retry after 804.62107ms: waiting for machine to come up
	I0924 01:03:41.271024   61070 start.go:360] acquireMachinesLock for no-preload-674057: {Name:mkdd0eb053efc5a47fd01c8410d7f603dccb8d0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 01:03:40.296290   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:40.296775   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:40.296806   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:40.296726   62594 retry.go:31] will retry after 882.018637ms: waiting for machine to come up
	I0924 01:03:41.180799   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:41.181242   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:41.181263   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:41.181197   62594 retry.go:31] will retry after 961.194045ms: waiting for machine to come up
	I0924 01:03:42.143878   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:42.144354   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:42.144379   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:42.144270   62594 retry.go:31] will retry after 1.647837023s: waiting for machine to come up
	I0924 01:03:43.793458   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:43.793892   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:43.793933   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:43.793873   62594 retry.go:31] will retry after 1.751902059s: waiting for machine to come up
	I0924 01:03:45.547905   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:45.548356   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:45.548388   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:45.548313   62594 retry.go:31] will retry after 2.380106471s: waiting for machine to come up
	I0924 01:03:47.931021   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:47.931513   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:47.931537   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:47.931456   62594 retry.go:31] will retry after 2.395516641s: waiting for machine to come up
	I0924 01:03:50.328214   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:50.328766   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:50.328791   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:50.328729   62594 retry.go:31] will retry after 4.41219579s: waiting for machine to come up
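The growing "will retry after ..." delays above are the machine-provisioning code polling libvirt for the VM's DHCP lease before giving up. A minimal Go sketch of that poll-with-increasing-backoff pattern follows; waitForIP and its fake lookup are illustrative stand-ins for this log's behavior, not minikube's actual retry.go API.

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // waitForIP polls lookup until it returns an address or the timeout expires,
    // growing the pause between attempts, similar in spirit to the
    // "will retry after ...: waiting for machine to come up" lines in the log.
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 500 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookup(); err == nil {
    			return ip, nil
    		}
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
    		time.Sleep(delay)
    		delay = delay * 3 / 2 // back off a little more on each attempt
    	}
    	return "", errors.New("timed out waiting for machine to come up")
    }

    func main() {
    	attempts := 0
    	lookup := func() (string, error) { // fake lookup: "finds" a lease on the 4th try
    		attempts++
    		if attempts < 4 {
    			return "", errors.New("no DHCP lease yet")
    		}
    		return "192.168.39.104", nil
    	}
    	fmt.Println(waitForIP(lookup, 30*time.Second))
    }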
	I0924 01:03:54.745159   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.745572   61323 main.go:141] libmachine: (embed-certs-650507) Found IP for machine: 192.168.39.104
	I0924 01:03:54.745606   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has current primary IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.745615   61323 main.go:141] libmachine: (embed-certs-650507) Reserving static IP address...
	I0924 01:03:54.746020   61323 main.go:141] libmachine: (embed-certs-650507) Reserved static IP address: 192.168.39.104
	I0924 01:03:54.746042   61323 main.go:141] libmachine: (embed-certs-650507) Waiting for SSH to be available...
	I0924 01:03:54.746067   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "embed-certs-650507", mac: "52:54:00:46:07:2d", ip: "192.168.39.104"} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:54.746134   61323 main.go:141] libmachine: (embed-certs-650507) DBG | skip adding static IP to network mk-embed-certs-650507 - found existing host DHCP lease matching {name: "embed-certs-650507", mac: "52:54:00:46:07:2d", ip: "192.168.39.104"}
	I0924 01:03:54.746159   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Getting to WaitForSSH function...
	I0924 01:03:54.748464   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.748871   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:54.748906   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.749083   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Using SSH client type: external
	I0924 01:03:54.749118   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Using SSH private key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa (-rw-------)
	I0924 01:03:54.749153   61323 main.go:141] libmachine: (embed-certs-650507) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 01:03:54.749165   61323 main.go:141] libmachine: (embed-certs-650507) DBG | About to run SSH command:
	I0924 01:03:54.749177   61323 main.go:141] libmachine: (embed-certs-650507) DBG | exit 0
	I0924 01:03:54.872532   61323 main.go:141] libmachine: (embed-certs-650507) DBG | SSH cmd err, output: <nil>: 
	I0924 01:03:54.872869   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetConfigRaw
	I0924 01:03:54.873480   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetIP
	I0924 01:03:54.876545   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.876922   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:54.876953   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.877204   61323 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/embed-certs-650507/config.json ...
	I0924 01:03:54.877443   61323 machine.go:93] provisionDockerMachine start ...
	I0924 01:03:54.877467   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:03:54.877683   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:54.879873   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.880200   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:54.880221   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.880375   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:03:54.880546   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:54.880681   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:54.880866   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:03:54.881002   61323 main.go:141] libmachine: Using SSH client type: native
	I0924 01:03:54.881194   61323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0924 01:03:54.881207   61323 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 01:03:54.984605   61323 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 01:03:54.984636   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetMachineName
	I0924 01:03:54.984922   61323 buildroot.go:166] provisioning hostname "embed-certs-650507"
	I0924 01:03:54.984948   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetMachineName
	I0924 01:03:54.985185   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:54.988284   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.988699   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:54.988725   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.988857   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:03:54.989069   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:54.989344   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:54.989529   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:03:54.989731   61323 main.go:141] libmachine: Using SSH client type: native
	I0924 01:03:54.989899   61323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0924 01:03:54.989913   61323 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-650507 && echo "embed-certs-650507" | sudo tee /etc/hostname
	I0924 01:03:55.106214   61323 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-650507
	
	I0924 01:03:55.106273   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:55.109000   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.109310   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.109334   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.109498   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:03:55.109646   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.109839   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.109989   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:03:55.110123   61323 main.go:141] libmachine: Using SSH client type: native
	I0924 01:03:55.110303   61323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0924 01:03:55.110318   61323 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-650507' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-650507/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-650507' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 01:03:55.220699   61323 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 01:03:55.220738   61323 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19696-7623/.minikube CaCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19696-7623/.minikube}
	I0924 01:03:55.220755   61323 buildroot.go:174] setting up certificates
	I0924 01:03:55.220763   61323 provision.go:84] configureAuth start
	I0924 01:03:55.220771   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetMachineName
	I0924 01:03:55.221112   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetIP
	I0924 01:03:55.224166   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.224603   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.224634   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.224839   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:55.226847   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.227167   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.227194   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.227308   61323 provision.go:143] copyHostCerts
	I0924 01:03:55.227386   61323 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem, removing ...
	I0924 01:03:55.227409   61323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0924 01:03:55.227490   61323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem (1082 bytes)
	I0924 01:03:55.227641   61323 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem, removing ...
	I0924 01:03:55.227653   61323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0924 01:03:55.227695   61323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem (1123 bytes)
	I0924 01:03:55.227781   61323 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem, removing ...
	I0924 01:03:55.227791   61323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0924 01:03:55.227826   61323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem (1679 bytes)
	I0924 01:03:55.227909   61323 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem org=jenkins.embed-certs-650507 san=[127.0.0.1 192.168.39.104 embed-certs-650507 localhost minikube]
	I0924 01:03:55.917061   61699 start.go:364] duration metric: took 3m46.693519233s to acquireMachinesLock for "default-k8s-diff-port-465341"
	I0924 01:03:55.917135   61699 start.go:96] Skipping create...Using existing machine configuration
	I0924 01:03:55.917144   61699 fix.go:54] fixHost starting: 
	I0924 01:03:55.917553   61699 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:03:55.917606   61699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:03:55.937566   61699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37613
	I0924 01:03:55.937971   61699 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:03:55.938529   61699 main.go:141] libmachine: Using API Version  1
	I0924 01:03:55.938556   61699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:03:55.938923   61699 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:03:55.939182   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:03:55.939365   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetState
	I0924 01:03:55.941155   61699 fix.go:112] recreateIfNeeded on default-k8s-diff-port-465341: state=Stopped err=<nil>
	I0924 01:03:55.941197   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	W0924 01:03:55.941417   61699 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 01:03:55.943640   61699 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-465341" ...
	I0924 01:03:55.309866   61323 provision.go:177] copyRemoteCerts
	I0924 01:03:55.309928   61323 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 01:03:55.309955   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:55.312946   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.313365   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.313388   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.313638   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:03:55.313889   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.314062   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:03:55.314206   61323 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa Username:docker}
	I0924 01:03:55.394427   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0924 01:03:55.420595   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0924 01:03:55.444377   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 01:03:55.467261   61323 provision.go:87] duration metric: took 246.485242ms to configureAuth
	I0924 01:03:55.467302   61323 buildroot.go:189] setting minikube options for container-runtime
	I0924 01:03:55.467483   61323 config.go:182] Loaded profile config "embed-certs-650507": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 01:03:55.467552   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:55.470146   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.470539   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.470572   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.470719   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:03:55.470961   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.471101   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.471299   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:03:55.471450   61323 main.go:141] libmachine: Using SSH client type: native
	I0924 01:03:55.471653   61323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0924 01:03:55.471676   61323 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 01:03:55.688189   61323 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 01:03:55.688218   61323 machine.go:96] duration metric: took 810.761675ms to provisionDockerMachine
	I0924 01:03:55.688230   61323 start.go:293] postStartSetup for "embed-certs-650507" (driver="kvm2")
	I0924 01:03:55.688244   61323 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 01:03:55.688266   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:03:55.688659   61323 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 01:03:55.688690   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:55.691375   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.691761   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.691791   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.691881   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:03:55.692105   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.692309   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:03:55.692453   61323 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa Username:docker}
	I0924 01:03:55.775412   61323 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 01:03:55.779423   61323 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 01:03:55.779448   61323 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/addons for local assets ...
	I0924 01:03:55.779536   61323 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/files for local assets ...
	I0924 01:03:55.779629   61323 filesync.go:149] local asset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> 147932.pem in /etc/ssl/certs
	I0924 01:03:55.779742   61323 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 01:03:55.788717   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /etc/ssl/certs/147932.pem (1708 bytes)
	I0924 01:03:55.811673   61323 start.go:296] duration metric: took 123.428914ms for postStartSetup
	I0924 01:03:55.811717   61323 fix.go:56] duration metric: took 19.542419045s for fixHost
	I0924 01:03:55.811743   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:55.814745   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.815034   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.815062   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.815247   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:03:55.815449   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.815634   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.815851   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:03:55.816012   61323 main.go:141] libmachine: Using SSH client type: native
	I0924 01:03:55.816168   61323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0924 01:03:55.816178   61323 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 01:03:55.916845   61323 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727139835.894204557
	
	I0924 01:03:55.916883   61323 fix.go:216] guest clock: 1727139835.894204557
	I0924 01:03:55.916896   61323 fix.go:229] Guest: 2024-09-24 01:03:55.894204557 +0000 UTC Remote: 2024-09-24 01:03:55.811721448 +0000 UTC m=+285.612741728 (delta=82.483109ms)
	I0924 01:03:55.916935   61323 fix.go:200] guest clock delta is within tolerance: 82.483109ms
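The clock check above runs date +%s.%N on the guest, compares it with the host's timestamp, and skips any resync because the 82ms skew is within tolerance. A rough Go sketch of that comparison is below; clockDeltaOK and the 2-second tolerance are assumptions for illustration, not the exact values minikube uses.

    package main

    import (
    	"fmt"
    	"time"
    )

    // clockDeltaOK mirrors the idea of the check in the log: turn the guest's
    // date +%s.%N reading into a time.Time, compare it with the host's view of
    // the same moment, and report whether the absolute skew stays under a tolerance.
    func clockDeltaOK(guestSeconds float64, remote time.Time, tolerance time.Duration) (time.Duration, bool) {
    	guest := time.Unix(0, int64(guestSeconds*float64(time.Second)))
    	delta := guest.Sub(remote)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta, delta <= tolerance
    }

    func main() {
    	// Timestamps taken from the log above; the 2s tolerance is only an assumption.
    	remote := time.Unix(0, int64(1727139835.811721448*float64(time.Second)))
    	delta, ok := clockDeltaOK(1727139835.894204557, remote, 2*time.Second)
    	fmt.Printf("delta=%v, within tolerance=%v\n", delta, ok)
    }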
	I0924 01:03:55.916945   61323 start.go:83] releasing machines lock for "embed-certs-650507", held for 19.6476761s
	I0924 01:03:55.916990   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:03:55.917314   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetIP
	I0924 01:03:55.920105   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.920550   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.920583   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.920832   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:03:55.921327   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:03:55.921510   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:03:55.921578   61323 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 01:03:55.921634   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:55.921747   61323 ssh_runner.go:195] Run: cat /version.json
	I0924 01:03:55.921771   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:55.924238   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.924430   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.924717   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.924741   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.924775   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.924792   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.924953   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:03:55.925061   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:03:55.925153   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.925277   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.925360   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:03:55.925439   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:03:55.925582   61323 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa Username:docker}
	I0924 01:03:55.925626   61323 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa Username:docker}
	I0924 01:03:56.005229   61323 ssh_runner.go:195] Run: systemctl --version
	I0924 01:03:56.046189   61323 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 01:03:56.187701   61323 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 01:03:56.193313   61323 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 01:03:56.193379   61323 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 01:03:56.209278   61323 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 01:03:56.209298   61323 start.go:495] detecting cgroup driver to use...
	I0924 01:03:56.209363   61323 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 01:03:56.226995   61323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 01:03:56.241102   61323 docker.go:217] disabling cri-docker service (if available) ...
	I0924 01:03:56.241160   61323 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 01:03:56.255002   61323 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 01:03:56.269805   61323 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 01:03:56.387382   61323 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 01:03:56.545138   61323 docker.go:233] disabling docker service ...
	I0924 01:03:56.545220   61323 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 01:03:56.559017   61323 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 01:03:56.571939   61323 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 01:03:56.694139   61323 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 01:03:56.811253   61323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 01:03:56.825480   61323 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 01:03:56.842777   61323 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 01:03:56.842830   61323 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:03:56.852387   61323 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 01:03:56.852447   61323 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:03:56.862702   61323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:03:56.872790   61323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:03:56.882864   61323 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 01:03:56.893029   61323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:03:56.903314   61323 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:03:56.923491   61323 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:03:56.933424   61323 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 01:03:56.944496   61323 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 01:03:56.944561   61323 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 01:03:56.957077   61323 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
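Above, the sysctl probe for net.bridge.bridge-nf-call-iptables fails because br_netfilter is not loaded yet, so the module is loaded and IPv4 forwarding is enabled before CRI-O is restarted. A hedged Go sketch of that check-then-fallback sequence; ensureNetfilter is a hypothetical wrapper around the commands visible in the log, not minikube code.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // ensureNetfilter probes the bridge netfilter sysctl, loads br_netfilter if
    // the probe fails, then enables IPv4 forwarding. It assumes passwordless
    // sudo on the guest, as in the log above.
    func ensureNetfilter() error {
    	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
    		fmt.Println("bridge netfilter not visible yet, loading br_netfilter:", err)
    		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
    			return err
    		}
    	}
    	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
    }

    func main() {
    	if err := ensureNetfilter(); err != nil {
    		fmt.Println("netfilter setup failed:", err)
    	}
    }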
	I0924 01:03:56.968602   61323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:03:57.080955   61323 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 01:03:57.179826   61323 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 01:03:57.179900   61323 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 01:03:57.184652   61323 start.go:563] Will wait 60s for crictl version
	I0924 01:03:57.184716   61323 ssh_runner.go:195] Run: which crictl
	I0924 01:03:57.190300   61323 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 01:03:57.239310   61323 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 01:03:57.239371   61323 ssh_runner.go:195] Run: crio --version
	I0924 01:03:57.266833   61323 ssh_runner.go:195] Run: crio --version
	I0924 01:03:57.301876   61323 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 01:03:55.945290   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .Start
	I0924 01:03:55.945498   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Ensuring networks are active...
	I0924 01:03:55.946346   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Ensuring network default is active
	I0924 01:03:55.946726   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Ensuring network mk-default-k8s-diff-port-465341 is active
	I0924 01:03:55.947152   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Getting domain xml...
	I0924 01:03:55.947872   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Creating domain...
	I0924 01:03:57.236194   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting to get IP...
	I0924 01:03:57.237037   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:03:57.237445   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:03:57.237497   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:03:57.237413   62713 retry.go:31] will retry after 286.244795ms: waiting for machine to come up
	I0924 01:03:57.525009   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:03:57.525595   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:03:57.525621   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:03:57.525548   62713 retry.go:31] will retry after 273.807213ms: waiting for machine to come up
	I0924 01:03:57.801217   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:03:57.801734   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:03:57.801756   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:03:57.801701   62713 retry.go:31] will retry after 371.291567ms: waiting for machine to come up
	I0924 01:03:58.174283   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:03:58.174746   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:03:58.174781   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:03:58.174692   62713 retry.go:31] will retry after 595.157579ms: waiting for machine to come up
	I0924 01:03:58.771428   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:03:58.771900   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:03:58.771925   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:03:58.771862   62713 retry.go:31] will retry after 734.305784ms: waiting for machine to come up
	I0924 01:03:57.303135   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetIP
	I0924 01:03:57.306110   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:57.306598   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:57.306624   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:57.306783   61323 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0924 01:03:57.310829   61323 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:03:57.322605   61323 kubeadm.go:883] updating cluster {Name:embed-certs-650507 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-650507 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 01:03:57.322715   61323 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 01:03:57.322761   61323 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 01:03:57.358040   61323 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0924 01:03:57.358104   61323 ssh_runner.go:195] Run: which lz4
	I0924 01:03:57.361948   61323 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 01:03:57.365911   61323 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 01:03:57.365950   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0924 01:03:58.651636   61323 crio.go:462] duration metric: took 1.289721413s to copy over tarball
	I0924 01:03:58.651708   61323 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0924 01:03:59.507803   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:03:59.508308   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:03:59.508356   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:03:59.508237   62713 retry.go:31] will retry after 875.394603ms: waiting for machine to come up
	I0924 01:04:00.385279   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:00.385713   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:04:00.385748   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:04:00.385655   62713 retry.go:31] will retry after 885.980109ms: waiting for machine to come up
	I0924 01:04:01.273114   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:01.273545   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:04:01.273590   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:04:01.273535   62713 retry.go:31] will retry after 935.451975ms: waiting for machine to come up
	I0924 01:04:02.210920   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:02.211399   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:04:02.211423   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:04:02.211331   62713 retry.go:31] will retry after 1.254573538s: waiting for machine to come up
	I0924 01:04:03.467027   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:03.467593   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:04:03.467626   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:04:03.467488   62713 retry.go:31] will retry after 2.044247818s: waiting for machine to come up
	I0924 01:04:00.805580   61323 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.153837858s)
	I0924 01:04:00.805608   61323 crio.go:469] duration metric: took 2.153947595s to extract the tarball
	I0924 01:04:00.805617   61323 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0924 01:04:00.846074   61323 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 01:04:00.895803   61323 crio.go:514] all images are preloaded for cri-o runtime.
	I0924 01:04:00.895833   61323 cache_images.go:84] Images are preloaded, skipping loading
	I0924 01:04:00.895842   61323 kubeadm.go:934] updating node { 192.168.39.104 8443 v1.31.1 crio true true} ...
	I0924 01:04:00.895966   61323 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-650507 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-650507 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 01:04:00.896041   61323 ssh_runner.go:195] Run: crio config
	I0924 01:04:00.941958   61323 cni.go:84] Creating CNI manager for ""
	I0924 01:04:00.941985   61323 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:04:00.941998   61323 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 01:04:00.942029   61323 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.104 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-650507 NodeName:embed-certs-650507 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 01:04:00.942202   61323 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.104
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-650507"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.104
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.104"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 01:04:00.942292   61323 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 01:04:00.952748   61323 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 01:04:00.952853   61323 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 01:04:00.962984   61323 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0924 01:04:00.980030   61323 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 01:04:01.001571   61323 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0924 01:04:01.018760   61323 ssh_runner.go:195] Run: grep 192.168.39.104	control-plane.minikube.internal$ /etc/hosts
	I0924 01:04:01.022770   61323 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.104	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:04:01.034816   61323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:04:01.157888   61323 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 01:04:01.175883   61323 certs.go:68] Setting up /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/embed-certs-650507 for IP: 192.168.39.104
	I0924 01:04:01.175911   61323 certs.go:194] generating shared ca certs ...
	I0924 01:04:01.175937   61323 certs.go:226] acquiring lock for ca certs: {Name:mk91de42c259e96c17f08dd58c70c00821bd0735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:04:01.176134   61323 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key
	I0924 01:04:01.176198   61323 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key
	I0924 01:04:01.176211   61323 certs.go:256] generating profile certs ...
	I0924 01:04:01.176324   61323 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/embed-certs-650507/client.key
	I0924 01:04:01.176441   61323 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/embed-certs-650507/apiserver.key.86682f38
	I0924 01:04:01.176515   61323 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/embed-certs-650507/proxy-client.key
	I0924 01:04:01.176640   61323 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem (1338 bytes)
	W0924 01:04:01.176669   61323 certs.go:480] ignoring /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793_empty.pem, impossibly tiny 0 bytes
	I0924 01:04:01.176678   61323 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem (1675 bytes)
	I0924 01:04:01.176713   61323 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem (1082 bytes)
	I0924 01:04:01.176749   61323 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem (1123 bytes)
	I0924 01:04:01.176778   61323 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem (1679 bytes)
	I0924 01:04:01.176987   61323 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem (1708 bytes)
	I0924 01:04:01.177918   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 01:04:01.221682   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 01:04:01.266005   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 01:04:01.299467   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0924 01:04:01.324598   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/embed-certs-650507/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0924 01:04:01.349526   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/embed-certs-650507/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 01:04:01.385589   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/embed-certs-650507/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 01:04:01.409713   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/embed-certs-650507/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0924 01:04:01.433745   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem --> /usr/share/ca-certificates/14793.pem (1338 bytes)
	I0924 01:04:01.457493   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /usr/share/ca-certificates/147932.pem (1708 bytes)
	I0924 01:04:01.482197   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 01:04:01.505740   61323 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 01:04:01.524029   61323 ssh_runner.go:195] Run: openssl version
	I0924 01:04:01.530147   61323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 01:04:01.541117   61323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:01.545823   61323 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:01.545894   61323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:01.551638   61323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 01:04:01.562373   61323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14793.pem && ln -fs /usr/share/ca-certificates/14793.pem /etc/ssl/certs/14793.pem"
	I0924 01:04:01.573502   61323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14793.pem
	I0924 01:04:01.578561   61323 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 23:55 /usr/share/ca-certificates/14793.pem
	I0924 01:04:01.578634   61323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14793.pem
	I0924 01:04:01.584415   61323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14793.pem /etc/ssl/certs/51391683.0"
	I0924 01:04:01.595312   61323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147932.pem && ln -fs /usr/share/ca-certificates/147932.pem /etc/ssl/certs/147932.pem"
	I0924 01:04:01.606503   61323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147932.pem
	I0924 01:04:01.611530   61323 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 23:55 /usr/share/ca-certificates/147932.pem
	I0924 01:04:01.611602   61323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147932.pem
	I0924 01:04:01.618484   61323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147932.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 01:04:01.629332   61323 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 01:04:01.634238   61323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 01:04:01.640266   61323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 01:04:01.646306   61323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 01:04:01.652510   61323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 01:04:01.658237   61323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 01:04:01.663962   61323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0924 01:04:01.669998   61323 kubeadm.go:392] StartCluster: {Name:embed-certs-650507 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-650507 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 01:04:01.670105   61323 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 01:04:01.670162   61323 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 01:04:01.706478   61323 cri.go:89] found id: ""
	I0924 01:04:01.706555   61323 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 01:04:01.717106   61323 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 01:04:01.717127   61323 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 01:04:01.717188   61323 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 01:04:01.729966   61323 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 01:04:01.730947   61323 kubeconfig.go:125] found "embed-certs-650507" server: "https://192.168.39.104:8443"
	I0924 01:04:01.732933   61323 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 01:04:01.745538   61323 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.104
	I0924 01:04:01.745581   61323 kubeadm.go:1160] stopping kube-system containers ...
	I0924 01:04:01.745594   61323 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0924 01:04:01.745649   61323 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 01:04:01.783313   61323 cri.go:89] found id: ""
	I0924 01:04:01.783423   61323 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 01:04:01.801432   61323 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 01:04:01.811282   61323 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 01:04:01.811308   61323 kubeadm.go:157] found existing configuration files:
	
	I0924 01:04:01.811371   61323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 01:04:01.820717   61323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 01:04:01.820780   61323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 01:04:01.830289   61323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 01:04:01.839383   61323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 01:04:01.839449   61323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 01:04:01.848920   61323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 01:04:01.857986   61323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 01:04:01.858045   61323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 01:04:01.867465   61323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 01:04:01.876598   61323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 01:04:01.876680   61323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 01:04:01.886122   61323 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 01:04:01.896245   61323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:02.004839   61323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:03.077983   61323 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.073104284s)
	I0924 01:04:03.078020   61323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:03.295254   61323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:03.369968   61323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:03.458283   61323 api_server.go:52] waiting for apiserver process to appear ...
	I0924 01:04:03.458383   61323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:03.958648   61323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:04.459039   61323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:04.958614   61323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:04.994450   61323 api_server.go:72] duration metric: took 1.536167442s to wait for apiserver process to appear ...
	I0924 01:04:04.994485   61323 api_server.go:88] waiting for apiserver healthz status ...
	I0924 01:04:04.994530   61323 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0924 01:04:04.995139   61323 api_server.go:269] stopped: https://192.168.39.104:8443/healthz: Get "https://192.168.39.104:8443/healthz": dial tcp 192.168.39.104:8443: connect: connection refused
	I0924 01:04:05.513732   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:05.514247   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:04:05.514275   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:04:05.514201   62713 retry.go:31] will retry after 2.814717647s: waiting for machine to come up
	I0924 01:04:08.331550   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:08.331964   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:04:08.331983   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:04:08.331932   62713 retry.go:31] will retry after 2.942261445s: waiting for machine to come up
	I0924 01:04:05.495090   61323 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0924 01:04:07.946057   61323 api_server.go:279] https://192.168.39.104:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 01:04:07.946116   61323 api_server.go:103] status: https://192.168.39.104:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 01:04:07.946135   61323 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0924 01:04:08.018665   61323 api_server.go:279] https://192.168.39.104:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:08.018711   61323 api_server.go:103] status: https://192.168.39.104:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:08.018729   61323 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0924 01:04:08.027105   61323 api_server.go:279] https://192.168.39.104:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:08.027144   61323 api_server.go:103] status: https://192.168.39.104:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:08.494630   61323 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0924 01:04:08.500471   61323 api_server.go:279] https://192.168.39.104:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:08.500494   61323 api_server.go:103] status: https://192.168.39.104:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:08.995055   61323 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0924 01:04:09.017236   61323 api_server.go:279] https://192.168.39.104:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:09.017272   61323 api_server.go:103] status: https://192.168.39.104:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:09.494769   61323 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0924 01:04:09.500285   61323 api_server.go:279] https://192.168.39.104:8443/healthz returned 200:
	ok
	I0924 01:04:09.507440   61323 api_server.go:141] control plane version: v1.31.1
	I0924 01:04:09.507470   61323 api_server.go:131] duration metric: took 4.512953508s to wait for apiserver health ...
	I0924 01:04:09.507478   61323 cni.go:84] Creating CNI manager for ""
	I0924 01:04:09.507485   61323 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:04:09.509661   61323 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 01:04:09.511104   61323 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 01:04:09.529080   61323 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0924 01:04:09.567695   61323 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 01:04:09.579425   61323 system_pods.go:59] 8 kube-system pods found
	I0924 01:04:09.579470   61323 system_pods.go:61] "coredns-7c65d6cfc9-xgs6g" [b975196f-e9e6-4e30-a49b-8d3031f73a21] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0924 01:04:09.579489   61323 system_pods.go:61] "etcd-embed-certs-650507" [c24d7e21-08a8-42bd-9def-1808d8a58e07] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0924 01:04:09.579501   61323 system_pods.go:61] "kube-apiserver-embed-certs-650507" [f1de6ed5-a87f-4d1d-8feb-d0f80851b5b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0924 01:04:09.579509   61323 system_pods.go:61] "kube-controller-manager-embed-certs-650507" [d0d454bf-b9d3-4dcb-957c-f1329e4e9e98] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0924 01:04:09.579516   61323 system_pods.go:61] "kube-proxy-qd4lg" [f06c009f-3c62-4e54-82fd-ca468fb05bbc] Running
	I0924 01:04:09.579523   61323 system_pods.go:61] "kube-scheduler-embed-certs-650507" [e4931370-821e-4289-9b2b-9b46d9f8394e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0924 01:04:09.579532   61323 system_pods.go:61] "metrics-server-6867b74b74-pc28v" [688d7bbe-9fee-450f-aecf-bbb3413a3633] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:04:09.579536   61323 system_pods.go:61] "storage-provisioner" [9e354a3c-e4f1-46e1-b5fb-de8243f41c29] Running
	I0924 01:04:09.579542   61323 system_pods.go:74] duration metric: took 11.824796ms to wait for pod list to return data ...
	I0924 01:04:09.579550   61323 node_conditions.go:102] verifying NodePressure condition ...
	I0924 01:04:09.584175   61323 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 01:04:09.584203   61323 node_conditions.go:123] node cpu capacity is 2
	I0924 01:04:09.584214   61323 node_conditions.go:105] duration metric: took 4.659859ms to run NodePressure ...
	I0924 01:04:09.584230   61323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:09.847130   61323 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0924 01:04:09.851985   61323 kubeadm.go:739] kubelet initialised
	I0924 01:04:09.852008   61323 kubeadm.go:740] duration metric: took 4.853319ms waiting for restarted kubelet to initialise ...
	I0924 01:04:09.852015   61323 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:04:09.857149   61323 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-xgs6g" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:11.275680   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:11.276135   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:04:11.276166   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:04:11.276102   62713 retry.go:31] will retry after 3.599939746s: waiting for machine to come up
	I0924 01:04:11.865712   61323 pod_ready.go:103] pod "coredns-7c65d6cfc9-xgs6g" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:13.864779   61323 pod_ready.go:93] pod "coredns-7c65d6cfc9-xgs6g" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:13.864801   61323 pod_ready.go:82] duration metric: took 4.007625744s for pod "coredns-7c65d6cfc9-xgs6g" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:13.864809   61323 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:16.233175   61989 start.go:364] duration metric: took 3m35.131018203s to acquireMachinesLock for "old-k8s-version-171598"
	I0924 01:04:16.233254   61989 start.go:96] Skipping create...Using existing machine configuration
	I0924 01:04:16.233262   61989 fix.go:54] fixHost starting: 
	I0924 01:04:16.233733   61989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:04:16.233787   61989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:04:16.255690   61989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42181
	I0924 01:04:16.256135   61989 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:04:16.256729   61989 main.go:141] libmachine: Using API Version  1
	I0924 01:04:16.256763   61989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:04:16.257122   61989 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:04:16.257365   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:04:16.257560   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetState
	I0924 01:04:16.259055   61989 fix.go:112] recreateIfNeeded on old-k8s-version-171598: state=Stopped err=<nil>
	I0924 01:04:16.259091   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	W0924 01:04:16.259266   61989 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 01:04:16.261327   61989 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-171598" ...
	I0924 01:04:14.879977   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:14.880533   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has current primary IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:14.880563   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Found IP for machine: 192.168.61.186
	I0924 01:04:14.880596   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Reserving static IP address...
	I0924 01:04:14.881148   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-465341", mac: "52:54:00:e4:1f:79", ip: "192.168.61.186"} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:14.881171   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | skip adding static IP to network mk-default-k8s-diff-port-465341 - found existing host DHCP lease matching {name: "default-k8s-diff-port-465341", mac: "52:54:00:e4:1f:79", ip: "192.168.61.186"}
	I0924 01:04:14.881188   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Reserved static IP address: 192.168.61.186
	I0924 01:04:14.881216   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for SSH to be available...
	I0924 01:04:14.881229   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | Getting to WaitForSSH function...
	I0924 01:04:14.883679   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:14.884060   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:14.884083   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:14.884214   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | Using SSH client type: external
	I0924 01:04:14.884248   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | Using SSH private key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa (-rw-------)
	I0924 01:04:14.884276   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.186 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 01:04:14.884287   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | About to run SSH command:
	I0924 01:04:14.884298   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | exit 0
	I0924 01:04:15.012764   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | SSH cmd err, output: <nil>: 
	I0924 01:04:15.013163   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetConfigRaw
	I0924 01:04:15.013983   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetIP
	I0924 01:04:15.016664   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.017173   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:15.017207   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.017440   61699 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/config.json ...
	I0924 01:04:15.017668   61699 machine.go:93] provisionDockerMachine start ...
	I0924 01:04:15.017687   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:04:15.017915   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:15.020388   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.020816   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:15.020839   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.021074   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:15.021249   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.021513   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.021681   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:15.021850   61699 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:15.022031   61699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.186 22 <nil> <nil>}
	I0924 01:04:15.022041   61699 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 01:04:15.132672   61699 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 01:04:15.132706   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetMachineName
	I0924 01:04:15.132994   61699 buildroot.go:166] provisioning hostname "default-k8s-diff-port-465341"
	I0924 01:04:15.133025   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetMachineName
	I0924 01:04:15.133268   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:15.135929   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.136371   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:15.136399   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.136578   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:15.136850   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.137008   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.137193   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:15.137407   61699 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:15.137589   61699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.186 22 <nil> <nil>}
	I0924 01:04:15.137610   61699 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-465341 && echo "default-k8s-diff-port-465341" | sudo tee /etc/hostname
	I0924 01:04:15.262142   61699 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-465341
	
	I0924 01:04:15.262174   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:15.265359   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.265736   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:15.265761   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.265962   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:15.266176   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.266335   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.266510   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:15.266705   61699 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:15.266903   61699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.186 22 <nil> <nil>}
	I0924 01:04:15.266926   61699 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-465341' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-465341/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-465341' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 01:04:15.385085   61699 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 01:04:15.385122   61699 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19696-7623/.minikube CaCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19696-7623/.minikube}
	I0924 01:04:15.385158   61699 buildroot.go:174] setting up certificates
	I0924 01:04:15.385174   61699 provision.go:84] configureAuth start
	I0924 01:04:15.385186   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetMachineName
	I0924 01:04:15.385556   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetIP
	I0924 01:04:15.388350   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.388798   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:15.388828   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.388985   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:15.391478   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.391793   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:15.391823   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.391952   61699 provision.go:143] copyHostCerts
	I0924 01:04:15.392016   61699 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem, removing ...
	I0924 01:04:15.392045   61699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0924 01:04:15.392115   61699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem (1679 bytes)
	I0924 01:04:15.392259   61699 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem, removing ...
	I0924 01:04:15.392272   61699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0924 01:04:15.392306   61699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem (1082 bytes)
	I0924 01:04:15.392406   61699 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem, removing ...
	I0924 01:04:15.392415   61699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0924 01:04:15.392440   61699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem (1123 bytes)
	I0924 01:04:15.392503   61699 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-465341 san=[127.0.0.1 192.168.61.186 default-k8s-diff-port-465341 localhost minikube]
	I0924 01:04:15.572588   61699 provision.go:177] copyRemoteCerts
	I0924 01:04:15.572682   61699 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 01:04:15.572718   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:15.575884   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.576356   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:15.576401   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.576627   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:15.576868   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.577099   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:15.577248   61699 sshutil.go:53] new ssh client: &{IP:192.168.61.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa Username:docker}
	I0924 01:04:15.662231   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0924 01:04:15.686800   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0924 01:04:15.709860   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 01:04:15.738063   61699 provision.go:87] duration metric: took 352.876914ms to configureAuth
	I0924 01:04:15.738105   61699 buildroot.go:189] setting minikube options for container-runtime
	I0924 01:04:15.738302   61699 config.go:182] Loaded profile config "default-k8s-diff-port-465341": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 01:04:15.738420   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:15.741231   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.741644   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:15.741693   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.741835   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:15.742036   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.742218   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.742359   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:15.742526   61699 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:15.742727   61699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.186 22 <nil> <nil>}
	I0924 01:04:15.742754   61699 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 01:04:15.986096   61699 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 01:04:15.986128   61699 machine.go:96] duration metric: took 968.446778ms to provisionDockerMachine
	I0924 01:04:15.986143   61699 start.go:293] postStartSetup for "default-k8s-diff-port-465341" (driver="kvm2")
	I0924 01:04:15.986156   61699 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 01:04:15.986183   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:04:15.986639   61699 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 01:04:15.986674   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:15.989692   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.990094   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:15.990124   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.990407   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:15.990643   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.990826   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:15.990958   61699 sshutil.go:53] new ssh client: &{IP:192.168.61.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa Username:docker}
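
The sshutil client above boils down to key-based SSH followed by one-off commands such as the /etc/os-release check that comes next. A minimal standalone equivalent, assuming the key path and address printed in the log, might look like this:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        client, err := ssh.Dial("tcp", "192.168.61.186:22", &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
        })
        if err != nil {
            panic(err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()

        out, err := session.CombinedOutput("cat /etc/os-release")
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out))
    }
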
	I0924 01:04:16.079174   61699 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 01:04:16.083139   61699 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 01:04:16.083168   61699 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/addons for local assets ...
	I0924 01:04:16.083251   61699 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/files for local assets ...
	I0924 01:04:16.083363   61699 filesync.go:149] local asset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> 147932.pem in /etc/ssl/certs
	I0924 01:04:16.083486   61699 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 01:04:16.094571   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /etc/ssl/certs/147932.pem (1708 bytes)
	I0924 01:04:16.117327   61699 start.go:296] duration metric: took 131.16913ms for postStartSetup
	I0924 01:04:16.117364   61699 fix.go:56] duration metric: took 20.200222398s for fixHost
	I0924 01:04:16.117384   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:16.120507   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:16.120857   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:16.120899   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:16.121059   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:16.121325   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:16.121511   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:16.121687   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:16.121901   61699 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:16.122100   61699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.186 22 <nil> <nil>}
	I0924 01:04:16.122113   61699 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 01:04:16.232986   61699 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727139856.205476339
	
	I0924 01:04:16.233013   61699 fix.go:216] guest clock: 1727139856.205476339
	I0924 01:04:16.233024   61699 fix.go:229] Guest: 2024-09-24 01:04:16.205476339 +0000 UTC Remote: 2024-09-24 01:04:16.117368802 +0000 UTC m=+247.038042336 (delta=88.107537ms)
	I0924 01:04:16.233086   61699 fix.go:200] guest clock delta is within tolerance: 88.107537ms
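
The clock check above is plain arithmetic: subtract the host-side timestamp from the guest clock and compare the difference against a tolerance. This snippet reproduces the 88.107537ms delta from the two timestamps in the log; the 2s tolerance is an assumed illustrative threshold, not necessarily minikube's constant.

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps taken from the fix.go lines above.
        guest := time.Unix(1727139856, 205476339)
        remote := time.Date(2024, 9, 24, 1, 4, 16, 117368802, time.UTC)

        delta := guest.Sub(remote)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = 2 * time.Second // assumed threshold for illustration
        fmt.Printf("delta=%v within tolerance: %v\n", delta, delta <= tolerance)
    }
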
	I0924 01:04:16.233094   61699 start.go:83] releasing machines lock for "default-k8s-diff-port-465341", held for 20.315992151s
	I0924 01:04:16.233133   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:04:16.233491   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetIP
	I0924 01:04:16.236719   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:16.237104   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:16.237134   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:16.237290   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:04:16.237850   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:04:16.238019   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:04:16.238116   61699 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 01:04:16.238167   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:16.238227   61699 ssh_runner.go:195] Run: cat /version.json
	I0924 01:04:16.238260   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:16.241123   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:16.241448   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:16.241598   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:16.241627   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:16.241732   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:16.241757   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:16.241916   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:16.241982   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:16.242152   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:16.242225   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:16.242351   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:16.242479   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:16.242543   61699 sshutil.go:53] new ssh client: &{IP:192.168.61.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa Username:docker}
	I0924 01:04:16.242880   61699 sshutil.go:53] new ssh client: &{IP:192.168.61.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa Username:docker}
	I0924 01:04:16.368841   61699 ssh_runner.go:195] Run: systemctl --version
	I0924 01:04:16.374990   61699 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 01:04:16.521604   61699 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 01:04:16.527198   61699 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 01:04:16.527290   61699 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 01:04:16.543251   61699 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 01:04:16.543278   61699 start.go:495] detecting cgroup driver to use...
	I0924 01:04:16.543357   61699 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 01:04:16.561775   61699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 01:04:16.576028   61699 docker.go:217] disabling cri-docker service (if available) ...
	I0924 01:04:16.576097   61699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 01:04:16.591757   61699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 01:04:16.607927   61699 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 01:04:16.753944   61699 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 01:04:16.917338   61699 docker.go:233] disabling docker service ...
	I0924 01:04:16.917401   61699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 01:04:16.935104   61699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 01:04:16.949717   61699 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 01:04:17.088275   61699 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 01:04:17.222093   61699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 01:04:17.236370   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 01:04:17.256277   61699 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 01:04:17.256360   61699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:17.266516   61699 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 01:04:17.266600   61699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:17.276647   61699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:17.288283   61699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:17.299232   61699 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 01:04:17.311336   61699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:17.329416   61699 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:17.351465   61699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
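
Taken together, the sed edits above steer /etc/crio/crio.conf.d/02-crio.conf towards a fixed pause image, the cgroupfs cgroup manager, a "pod" conmon cgroup and an unprivileged-port sysctl. The sketch below writes an illustrative reconstruction of that drop-in; the TOML section placement is an assumption, and it deliberately targets a scratch path rather than the real file.

    package main

    import "os"

    // Illustrative reconstruction (not captured from the VM) of the drop-in
    // the sed commands above are aiming for.
    const crioDropIn = `[crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
    `

    func main() {
        // Write to a scratch path; adjust if you really want to touch CRI-O config.
        if err := os.WriteFile("/tmp/02-crio.conf", []byte(crioDropIn), 0o644); err != nil {
            panic(err)
        }
    }
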
	I0924 01:04:17.362248   61699 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 01:04:17.372102   61699 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 01:04:17.372154   61699 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 01:04:17.392055   61699 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 01:04:17.413641   61699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:04:17.541224   61699 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 01:04:17.655205   61699 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 01:04:17.655281   61699 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
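
"Will wait 60s for socket path" is a plain poll-until-exists loop over the CRI-O socket. A minimal sketch, assuming a 500ms poll interval:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // Poll until the socket path appears or the deadline passes.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Println(err)
        }
    }
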
	I0924 01:04:17.660096   61699 start.go:563] Will wait 60s for crictl version
	I0924 01:04:17.660163   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:04:17.663880   61699 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 01:04:17.706878   61699 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 01:04:17.706959   61699 ssh_runner.go:195] Run: crio --version
	I0924 01:04:17.735377   61699 ssh_runner.go:195] Run: crio --version
	I0924 01:04:17.766744   61699 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 01:04:17.768253   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetIP
	I0924 01:04:17.771534   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:17.771952   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:17.771983   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:17.772230   61699 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0924 01:04:17.776486   61699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:04:17.792599   61699 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-465341 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-465341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.186 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 01:04:17.792744   61699 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 01:04:17.792813   61699 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 01:04:17.831837   61699 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0924 01:04:17.831929   61699 ssh_runner.go:195] Run: which lz4
	I0924 01:04:17.836193   61699 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 01:04:17.840562   61699 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 01:04:17.840596   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0924 01:04:15.871512   61323 pod_ready.go:93] pod "etcd-embed-certs-650507" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:15.871540   61323 pod_ready.go:82] duration metric: took 2.006723245s for pod "etcd-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:15.871552   61323 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:17.879872   61323 pod_ready.go:93] pod "kube-apiserver-embed-certs-650507" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:17.879899   61323 pod_ready.go:82] duration metric: took 2.008337801s for pod "kube-apiserver-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:17.879918   61323 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:19.888007   61323 pod_ready.go:93] pod "kube-controller-manager-embed-certs-650507" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:19.888041   61323 pod_ready.go:82] duration metric: took 2.008114424s for pod "kube-controller-manager-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:19.888056   61323 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-qd4lg" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:19.894805   61323 pod_ready.go:93] pod "kube-proxy-qd4lg" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:19.894844   61323 pod_ready.go:82] duration metric: took 6.779022ms for pod "kube-proxy-qd4lg" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:19.894862   61323 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:19.900353   61323 pod_ready.go:93] pod "kube-scheduler-embed-certs-650507" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:19.900387   61323 pod_ready.go:82] duration metric: took 5.513733ms for pod "kube-scheduler-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:19.900401   61323 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace to be "Ready" ...
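
The pod_ready.go lines above poll each control-plane pod until its Ready condition turns True, with a 4m0s budget per pod. A rough client-go equivalent for one of those pods, using the local kubeconfig and the pod name from the log (the 2s poll interval is an assumption):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-embed-certs-650507", metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("pod is Ready")
                        return
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for Ready")
    }
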
	I0924 01:04:16.262929   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .Start
	I0924 01:04:16.263123   61989 main.go:141] libmachine: (old-k8s-version-171598) Ensuring networks are active...
	I0924 01:04:16.264062   61989 main.go:141] libmachine: (old-k8s-version-171598) Ensuring network default is active
	I0924 01:04:16.264543   61989 main.go:141] libmachine: (old-k8s-version-171598) Ensuring network mk-old-k8s-version-171598 is active
	I0924 01:04:16.264954   61989 main.go:141] libmachine: (old-k8s-version-171598) Getting domain xml...
	I0924 01:04:16.265899   61989 main.go:141] libmachine: (old-k8s-version-171598) Creating domain...
	I0924 01:04:17.566157   61989 main.go:141] libmachine: (old-k8s-version-171598) Waiting to get IP...
	I0924 01:04:17.567223   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:17.567644   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:17.567724   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:17.567625   62886 retry.go:31] will retry after 301.652575ms: waiting for machine to come up
	I0924 01:04:17.871163   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:17.871700   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:17.871729   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:17.871645   62886 retry.go:31] will retry after 337.632324ms: waiting for machine to come up
	I0924 01:04:18.211081   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:18.211954   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:18.212013   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:18.211892   62886 retry.go:31] will retry after 431.70455ms: waiting for machine to come up
	I0924 01:04:18.645408   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:18.646017   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:18.646044   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:18.645958   62886 retry.go:31] will retry after 582.966569ms: waiting for machine to come up
	I0924 01:04:19.230457   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:19.230954   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:19.230980   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:19.230897   62886 retry.go:31] will retry after 720.62326ms: waiting for machine to come up
	I0924 01:04:19.953023   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:19.953570   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:19.953603   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:19.953512   62886 retry.go:31] will retry after 688.597177ms: waiting for machine to come up
	I0924 01:04:20.644150   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:20.644636   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:20.644672   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:20.644578   62886 retry.go:31] will retry after 1.084671138s: waiting for machine to come up
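
The retry.go lines above are the driver waiting for the freshly started old-k8s-version VM to pick up a DHCP lease, backing off with jittered delays between lookups. The sketch below approximates that loop with `virsh domifaddr`; minikube itself talks to libvirt through its API, and the attempt count and jitter range here are assumptions.

    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "strings"
        "time"
    )

    // Ask libvirt for the domain's current IPv4 lease, if any.
    func lookupIP(domain string) (string, error) {
        out, err := exec.Command("virsh", "-c", "qemu:///system", "domifaddr", domain).Output()
        if err != nil {
            return "", err
        }
        for _, line := range strings.Split(string(out), "\n") {
            if strings.Contains(line, "ipv4") {
                fields := strings.Fields(line)
                return strings.Split(fields[len(fields)-1], "/")[0], nil
            }
        }
        return "", fmt.Errorf("no lease yet")
    }

    func main() {
        for attempt := 1; attempt <= 10; attempt++ {
            if ip, err := lookupIP("old-k8s-version-171598"); err == nil {
                fmt.Println("got IP:", ip)
                return
            }
            wait := time.Duration(300+rand.Intn(700)) * time.Millisecond
            fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
            time.Sleep(wait)
        }
    }
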
	I0924 01:04:19.165501   61699 crio.go:462] duration metric: took 1.329329949s to copy over tarball
	I0924 01:04:19.165575   61699 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0924 01:04:21.323478   61699 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.157877766s)
	I0924 01:04:21.323509   61699 crio.go:469] duration metric: took 2.157979404s to extract the tarball
	I0924 01:04:21.323516   61699 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0924 01:04:21.360397   61699 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 01:04:21.401282   61699 crio.go:514] all images are preloaded for cri-o runtime.
	I0924 01:04:21.401309   61699 cache_images.go:84] Images are preloaded, skipping loading
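
The preload check boils down to listing images through crictl and looking for the expected tag. A small sketch, assuming crictl's JSON field names ("images" / "repoTags"):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
        "strings"
    )

    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            fmt.Println("crictl failed:", err)
            return
        }
        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            fmt.Println("bad JSON:", err)
            return
        }
        want := "registry.k8s.io/kube-apiserver:v1.31.1"
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                if strings.EqualFold(tag, want) {
                    fmt.Println("preloaded:", want)
                    return
                }
            }
        }
        fmt.Println("not preloaded:", want)
    }
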
	I0924 01:04:21.401319   61699 kubeadm.go:934] updating node { 192.168.61.186 8444 v1.31.1 crio true true} ...
	I0924 01:04:21.401441   61699 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-465341 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.186
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-465341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
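
The kubelet drop-in printed above is essentially a template filled in with the node name, IP and Kubernetes version. A tiny text/template sketch that renders the same unit; the struct and template text here are illustrative, mirroring the log output rather than minikube's actual template:

    package main

    import (
        "os"
        "text/template"
    )

    const dropIn = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(dropIn))
        _ = t.Execute(os.Stdout, struct {
            KubernetesVersion, NodeName, NodeIP string
        }{"v1.31.1", "default-k8s-diff-port-465341", "192.168.61.186"})
    }
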
	I0924 01:04:21.401524   61699 ssh_runner.go:195] Run: crio config
	I0924 01:04:21.447706   61699 cni.go:84] Creating CNI manager for ""
	I0924 01:04:21.447730   61699 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:04:21.447741   61699 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 01:04:21.447766   61699 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.186 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-465341 NodeName:default-k8s-diff-port-465341 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.186"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.186 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 01:04:21.447939   61699 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.186
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-465341"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.186
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.186"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 01:04:21.448022   61699 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 01:04:21.457882   61699 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 01:04:21.457967   61699 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 01:04:21.467329   61699 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0924 01:04:21.483464   61699 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 01:04:21.500880   61699 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0924 01:04:21.517179   61699 ssh_runner.go:195] Run: grep 192.168.61.186	control-plane.minikube.internal$ /etc/hosts
	I0924 01:04:21.521032   61699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.186	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:04:21.532339   61699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:04:21.655583   61699 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 01:04:21.671964   61699 certs.go:68] Setting up /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341 for IP: 192.168.61.186
	I0924 01:04:21.672019   61699 certs.go:194] generating shared ca certs ...
	I0924 01:04:21.672044   61699 certs.go:226] acquiring lock for ca certs: {Name:mk91de42c259e96c17f08dd58c70c00821bd0735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:04:21.672273   61699 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key
	I0924 01:04:21.672390   61699 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key
	I0924 01:04:21.672409   61699 certs.go:256] generating profile certs ...
	I0924 01:04:21.672536   61699 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/client.key
	I0924 01:04:21.672629   61699 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/apiserver.key.b6f5ff18
	I0924 01:04:21.672696   61699 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/proxy-client.key
	I0924 01:04:21.672940   61699 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem (1338 bytes)
	W0924 01:04:21.672987   61699 certs.go:480] ignoring /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793_empty.pem, impossibly tiny 0 bytes
	I0924 01:04:21.672999   61699 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem (1675 bytes)
	I0924 01:04:21.673029   61699 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem (1082 bytes)
	I0924 01:04:21.673060   61699 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem (1123 bytes)
	I0924 01:04:21.673091   61699 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem (1679 bytes)
	I0924 01:04:21.673133   61699 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem (1708 bytes)
	I0924 01:04:21.673884   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 01:04:21.706165   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 01:04:21.735352   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 01:04:21.763358   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0924 01:04:21.786284   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0924 01:04:21.814844   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 01:04:21.839773   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 01:04:21.866549   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0924 01:04:21.889901   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 01:04:21.914875   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem --> /usr/share/ca-certificates/14793.pem (1338 bytes)
	I0924 01:04:21.939116   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /usr/share/ca-certificates/147932.pem (1708 bytes)
	I0924 01:04:21.963264   61699 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 01:04:21.980912   61699 ssh_runner.go:195] Run: openssl version
	I0924 01:04:21.986725   61699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 01:04:21.998128   61699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:22.002832   61699 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:22.002903   61699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:22.008847   61699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 01:04:22.019274   61699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14793.pem && ln -fs /usr/share/ca-certificates/14793.pem /etc/ssl/certs/14793.pem"
	I0924 01:04:22.030110   61699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14793.pem
	I0924 01:04:22.035920   61699 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 23:55 /usr/share/ca-certificates/14793.pem
	I0924 01:04:22.035996   61699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14793.pem
	I0924 01:04:22.043505   61699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14793.pem /etc/ssl/certs/51391683.0"
	I0924 01:04:22.057224   61699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147932.pem && ln -fs /usr/share/ca-certificates/147932.pem /etc/ssl/certs/147932.pem"
	I0924 01:04:22.067596   61699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147932.pem
	I0924 01:04:22.071957   61699 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 23:55 /usr/share/ca-certificates/147932.pem
	I0924 01:04:22.072029   61699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147932.pem
	I0924 01:04:22.077495   61699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147932.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 01:04:22.087627   61699 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 01:04:22.092049   61699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 01:04:22.097908   61699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 01:04:22.103716   61699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 01:04:22.109871   61699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 01:04:22.116088   61699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 01:04:22.121760   61699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
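
Each of the `openssl x509 -checkend 86400` runs above asks whether a certificate expires within the next 24 hours. The same check written in Go, for the first of those paths:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            fmt.Println("read:", err)
            return
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Println("no PEM block found")
            return
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Println("parse:", err)
            return
        }
        if time.Until(cert.NotAfter) < 24*time.Hour {
            fmt.Println("certificate will expire within 86400s")
        } else {
            fmt.Println("certificate is good for at least another day")
        }
    }
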
	I0924 01:04:22.127473   61699 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-465341 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-465341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.186 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 01:04:22.127563   61699 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 01:04:22.127613   61699 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 01:04:22.167951   61699 cri.go:89] found id: ""
	I0924 01:04:22.168054   61699 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 01:04:22.177878   61699 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 01:04:22.177898   61699 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 01:04:22.177949   61699 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 01:04:22.187116   61699 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 01:04:22.188577   61699 kubeconfig.go:125] found "default-k8s-diff-port-465341" server: "https://192.168.61.186:8444"
	I0924 01:04:22.191744   61699 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 01:04:22.200936   61699 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.186
	I0924 01:04:22.200967   61699 kubeadm.go:1160] stopping kube-system containers ...
	I0924 01:04:22.200979   61699 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0924 01:04:22.201039   61699 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 01:04:22.247804   61699 cri.go:89] found id: ""
	I0924 01:04:22.247888   61699 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 01:04:22.263853   61699 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 01:04:22.273254   61699 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 01:04:22.273271   61699 kubeadm.go:157] found existing configuration files:
	
	I0924 01:04:22.273327   61699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0924 01:04:22.281724   61699 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 01:04:22.281790   61699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 01:04:22.290823   61699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0924 01:04:22.299422   61699 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 01:04:22.299482   61699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 01:04:22.308961   61699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0924 01:04:22.317922   61699 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 01:04:22.318010   61699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 01:04:22.326980   61699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0924 01:04:22.335995   61699 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 01:04:22.336084   61699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 01:04:22.345002   61699 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 01:04:22.354302   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:22.462157   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:23.380163   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:23.610795   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:23.679134   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:23.747119   61699 api_server.go:52] waiting for apiserver process to appear ...
	I0924 01:04:23.747191   61699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:21.909834   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:24.104163   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:21.730823   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:21.731385   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:21.731411   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:21.731351   62886 retry.go:31] will retry after 1.051424847s: waiting for machine to come up
	I0924 01:04:22.784644   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:22.785194   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:22.785223   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:22.785138   62886 retry.go:31] will retry after 1.750498954s: waiting for machine to come up
	I0924 01:04:24.537680   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:24.538085   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:24.538109   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:24.538039   62886 retry.go:31] will retry after 2.015183238s: waiting for machine to come up
	I0924 01:04:24.247859   61699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:24.748076   61699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:25.248220   61699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:25.747481   61699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:25.774137   61699 api_server.go:72] duration metric: took 2.027016323s to wait for apiserver process to appear ...
	I0924 01:04:25.774167   61699 api_server.go:88] waiting for apiserver healthz status ...
	I0924 01:04:25.774194   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:25.774901   61699 api_server.go:269] stopped: https://192.168.61.186:8444/healthz: Get "https://192.168.61.186:8444/healthz": dial tcp 192.168.61.186:8444: connect: connection refused
	I0924 01:04:26.275226   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:28.290581   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 01:04:28.290621   61699 api_server.go:103] status: https://192.168.61.186:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 01:04:28.290637   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:28.321353   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 01:04:28.321386   61699 api_server.go:103] status: https://192.168.61.186:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 01:04:28.775068   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:28.779873   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:28.779896   61699 api_server.go:103] status: https://192.168.61.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:26.408349   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:28.409816   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:26.555221   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:26.555674   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:26.555695   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:26.555634   62886 retry.go:31] will retry after 2.568414115s: waiting for machine to come up
	I0924 01:04:29.127625   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:29.128130   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:29.128149   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:29.128108   62886 retry.go:31] will retry after 2.207252231s: waiting for machine to come up
	I0924 01:04:29.275326   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:29.284304   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:29.284360   61699 api_server.go:103] status: https://192.168.61.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:29.774975   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:29.779470   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:29.779503   61699 api_server.go:103] status: https://192.168.61.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:30.275137   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:30.279256   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:30.279287   61699 api_server.go:103] status: https://192.168.61.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:30.774874   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:30.779081   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:30.779110   61699 api_server.go:103] status: https://192.168.61.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:31.275163   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:31.279417   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:31.279446   61699 api_server.go:103] status: https://192.168.61.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:31.775022   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:31.780092   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 200:
	ok
	I0924 01:04:31.787643   61699 api_server.go:141] control plane version: v1.31.1
	I0924 01:04:31.787672   61699 api_server.go:131] duration metric: took 6.013498176s to wait for apiserver health ...
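The block above is the restart path waiting on the apiserver: it polls https://192.168.61.186:8444/healthz about every 500ms, the initial 403 comes from anonymous access being rejected before RBAC bootstrap finishes, and the 500s clear once the rbac/bootstrap-roles and scheduling post-start hooks report ok. The following is only an illustrative Go sketch of that kind of poll loop, not minikube's actual api_server helper; the URL, timeout, and function name are assumptions for the example.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollHealthz issues GET requests against the apiserver /healthz endpoint
// until it returns HTTP 200 or the overall timeout expires. Certificate
// verification is skipped because the endpoint is reached by IP with a
// self-signed control-plane certificate. Illustrative sketch only.
func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // 200: control plane is serving
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the log above
	}
	return fmt.Errorf("apiserver at %s did not become healthy within %s", url, timeout)
}

func main() {
	if err := pollHealthz("https://192.168.61.186:8444/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}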
	I0924 01:04:31.787680   61699 cni.go:84] Creating CNI manager for ""
	I0924 01:04:31.787686   61699 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:04:31.789733   61699 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 01:04:31.791140   61699 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 01:04:31.801441   61699 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0924 01:04:31.819890   61699 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 01:04:31.828128   61699 system_pods.go:59] 8 kube-system pods found
	I0924 01:04:31.828160   61699 system_pods.go:61] "coredns-7c65d6cfc9-xxdh2" [297fe292-94bf-468d-9e34-089c4a87429b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0924 01:04:31.828168   61699 system_pods.go:61] "etcd-default-k8s-diff-port-465341" [3bd68a1c-e928-40f0-927f-3cde2198cace] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0924 01:04:31.828177   61699 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-465341" [0a195b76-82ba-4d99-b5a3-ba918ab0b83d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0924 01:04:31.828186   61699 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-465341" [9d445611-60f3-4113-bc92-ea8df37ca2f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0924 01:04:31.828191   61699 system_pods.go:61] "kube-proxy-nf8mp" [cdef3aea-b1a8-438b-994f-c3212def9aea] Running
	I0924 01:04:31.828196   61699 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-465341" [4ff703b1-44cd-421a-891c-9f1e5d799026] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0924 01:04:31.828200   61699 system_pods.go:61] "metrics-server-6867b74b74-jtx6r" [d83599a7-f77d-4fbb-b76f-67d33c60b4a6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:04:31.828203   61699 system_pods.go:61] "storage-provisioner" [b09ad6ef-7517-4de2-a70c-83876efd804e] Running
	I0924 01:04:31.828209   61699 system_pods.go:74] duration metric: took 8.300337ms to wait for pod list to return data ...
	I0924 01:04:31.828215   61699 node_conditions.go:102] verifying NodePressure condition ...
	I0924 01:04:31.831528   61699 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 01:04:31.831550   61699 node_conditions.go:123] node cpu capacity is 2
	I0924 01:04:31.831561   61699 node_conditions.go:105] duration metric: took 3.341719ms to run NodePressure ...
	I0924 01:04:31.831576   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:32.101590   61699 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0924 01:04:32.105656   61699 kubeadm.go:739] kubelet initialised
	I0924 01:04:32.105679   61699 kubeadm.go:740] duration metric: took 4.062709ms waiting for restarted kubelet to initialise ...
	I0924 01:04:32.105691   61699 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:04:32.110237   61699 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-xxdh2" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:32.115057   61699 pod_ready.go:98] node "default-k8s-diff-port-465341" hosting pod "coredns-7c65d6cfc9-xxdh2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.115090   61699 pod_ready.go:82] duration metric: took 4.825694ms for pod "coredns-7c65d6cfc9-xxdh2" in "kube-system" namespace to be "Ready" ...
	E0924 01:04:32.115102   61699 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-465341" hosting pod "coredns-7c65d6cfc9-xxdh2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.115110   61699 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:32.119506   61699 pod_ready.go:98] node "default-k8s-diff-port-465341" hosting pod "etcd-default-k8s-diff-port-465341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.119534   61699 pod_ready.go:82] duration metric: took 4.415876ms for pod "etcd-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	E0924 01:04:32.119546   61699 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-465341" hosting pod "etcd-default-k8s-diff-port-465341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.119558   61699 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:32.124199   61699 pod_ready.go:98] node "default-k8s-diff-port-465341" hosting pod "kube-apiserver-default-k8s-diff-port-465341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.124248   61699 pod_ready.go:82] duration metric: took 4.660764ms for pod "kube-apiserver-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	E0924 01:04:32.124266   61699 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-465341" hosting pod "kube-apiserver-default-k8s-diff-port-465341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.124285   61699 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:32.223553   61699 pod_ready.go:98] node "default-k8s-diff-port-465341" hosting pod "kube-controller-manager-default-k8s-diff-port-465341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.223596   61699 pod_ready.go:82] duration metric: took 99.284751ms for pod "kube-controller-manager-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	E0924 01:04:32.223606   61699 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-465341" hosting pod "kube-controller-manager-default-k8s-diff-port-465341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.223613   61699 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-nf8mp" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:32.622500   61699 pod_ready.go:98] node "default-k8s-diff-port-465341" hosting pod "kube-proxy-nf8mp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.622527   61699 pod_ready.go:82] duration metric: took 398.907418ms for pod "kube-proxy-nf8mp" in "kube-system" namespace to be "Ready" ...
	E0924 01:04:32.622538   61699 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-465341" hosting pod "kube-proxy-nf8mp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.622545   61699 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:33.023370   61699 pod_ready.go:98] node "default-k8s-diff-port-465341" hosting pod "kube-scheduler-default-k8s-diff-port-465341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:33.023430   61699 pod_ready.go:82] duration metric: took 400.874003ms for pod "kube-scheduler-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	E0924 01:04:33.023458   61699 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-465341" hosting pod "kube-scheduler-default-k8s-diff-port-465341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:33.023472   61699 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:33.422810   61699 pod_ready.go:98] node "default-k8s-diff-port-465341" hosting pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:33.422841   61699 pod_ready.go:82] duration metric: took 399.35051ms for pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace to be "Ready" ...
	E0924 01:04:33.422851   61699 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-465341" hosting pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:33.422859   61699 pod_ready.go:39] duration metric: took 1.317159668s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
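Each wait above returns early because the node hosting the pods still reports Ready=False after the kubelet restart, so every system pod check is skipped rather than failed. Below is a generic client-go sketch of reading a pod's Ready condition, offered only as an illustration of what such a check looks like; it is not the pod_ready helper used by the test, and the kubeconfig path and pod name are taken from the log purely as example values.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the named pod has a Ready condition with status True.
func podIsReady(clientset *kubernetes.Clientset, namespace, name string) (bool, error) {
	pod, err := clientset.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Example kubeconfig path as seen later in this log; adjust for the environment under test.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19696-7623/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ready, err := podIsReady(clientset, "kube-system", "coredns-7c65d6cfc9-xxdh2")
	fmt.Println(ready, err)
}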
	I0924 01:04:33.422874   61699 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 01:04:33.434449   61699 ops.go:34] apiserver oom_adj: -16
	I0924 01:04:33.434473   61699 kubeadm.go:597] duration metric: took 11.256568213s to restartPrimaryControlPlane
	I0924 01:04:33.434481   61699 kubeadm.go:394] duration metric: took 11.307014166s to StartCluster
	I0924 01:04:33.434501   61699 settings.go:142] acquiring lock: {Name:mk2498060bfcc1414033a486e76a223de88ed325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:04:33.434571   61699 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 01:04:33.436172   61699 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/kubeconfig: {Name:mk3b29105ccc2e600682c419fcfd45ce5d22b5fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:04:33.436515   61699 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.186 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 01:04:33.436732   61699 config.go:182] Loaded profile config "default-k8s-diff-port-465341": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 01:04:33.436686   61699 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0924 01:04:33.436809   61699 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-465341"
	I0924 01:04:33.436815   61699 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-465341"
	I0924 01:04:33.436830   61699 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-465341"
	I0924 01:04:33.436832   61699 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-465341"
	I0924 01:04:33.436864   61699 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-465341"
	W0924 01:04:33.436877   61699 addons.go:243] addon metrics-server should already be in state true
	I0924 01:04:33.436908   61699 host.go:66] Checking if "default-k8s-diff-port-465341" exists ...
	W0924 01:04:33.436842   61699 addons.go:243] addon storage-provisioner should already be in state true
	I0924 01:04:33.436935   61699 host.go:66] Checking if "default-k8s-diff-port-465341" exists ...
	I0924 01:04:33.436831   61699 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-465341"
	I0924 01:04:33.437322   61699 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:04:33.437370   61699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:04:33.437377   61699 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:04:33.437412   61699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:04:33.437458   61699 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:04:33.437483   61699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:04:33.438259   61699 out.go:177] * Verifying Kubernetes components...
	I0924 01:04:33.439923   61699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:04:33.453108   61699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37623
	I0924 01:04:33.453545   61699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38225
	I0924 01:04:33.453608   61699 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:04:33.453916   61699 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:04:33.454125   61699 main.go:141] libmachine: Using API Version  1
	I0924 01:04:33.454152   61699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:04:33.454461   61699 main.go:141] libmachine: Using API Version  1
	I0924 01:04:33.454486   61699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:04:33.454494   61699 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:04:33.454806   61699 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:04:33.455065   61699 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:04:33.455111   61699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:04:33.455360   61699 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:04:33.455404   61699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:04:33.456716   61699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41127
	I0924 01:04:33.457163   61699 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:04:33.457688   61699 main.go:141] libmachine: Using API Version  1
	I0924 01:04:33.457727   61699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:04:33.458031   61699 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:04:33.458242   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetState
	I0924 01:04:33.461814   61699 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-465341"
	W0924 01:04:33.461835   61699 addons.go:243] addon default-storageclass should already be in state true
	I0924 01:04:33.461864   61699 host.go:66] Checking if "default-k8s-diff-port-465341" exists ...
	I0924 01:04:33.462230   61699 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:04:33.462273   61699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:04:33.471783   61699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44977
	I0924 01:04:33.472043   61699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33459
	I0924 01:04:33.472300   61699 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:04:33.472550   61699 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:04:33.472858   61699 main.go:141] libmachine: Using API Version  1
	I0924 01:04:33.472875   61699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:04:33.472994   61699 main.go:141] libmachine: Using API Version  1
	I0924 01:04:33.473003   61699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:04:33.473234   61699 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:04:33.473366   61699 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:04:33.473413   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetState
	I0924 01:04:33.473503   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetState
	I0924 01:04:33.475140   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:04:33.475553   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:04:33.477287   61699 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0924 01:04:33.477293   61699 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:04:33.478708   61699 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0924 01:04:33.478720   61699 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0924 01:04:33.478737   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:33.478836   61699 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 01:04:33.478863   61699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 01:04:33.478889   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:33.478971   61699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45635
	I0924 01:04:33.479636   61699 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:04:33.480029   61699 main.go:141] libmachine: Using API Version  1
	I0924 01:04:33.480041   61699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:04:33.480396   61699 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:04:33.482306   61699 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:04:33.482343   61699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:04:33.483280   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:33.483373   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:33.483732   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:33.483769   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:33.483873   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:33.483892   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:33.483958   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:33.484111   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:33.484236   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:33.484255   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:33.484413   61699 sshutil.go:53] new ssh client: &{IP:192.168.61.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa Username:docker}
	I0924 01:04:33.484472   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:33.484738   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:33.484866   61699 sshutil.go:53] new ssh client: &{IP:192.168.61.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa Username:docker}
	I0924 01:04:33.519981   61699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37109
	I0924 01:04:33.520440   61699 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:04:33.520996   61699 main.go:141] libmachine: Using API Version  1
	I0924 01:04:33.521028   61699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:04:33.521497   61699 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:04:33.521701   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetState
	I0924 01:04:33.523331   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:04:33.523576   61699 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 01:04:33.523591   61699 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 01:04:33.523625   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:33.526668   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:33.527211   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:33.527244   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:33.527471   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:33.527702   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:33.527889   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:33.528059   61699 sshutil.go:53] new ssh client: &{IP:192.168.61.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa Username:docker}
	I0924 01:04:33.645903   61699 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 01:04:33.663805   61699 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-465341" to be "Ready" ...
	I0924 01:04:33.749720   61699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 01:04:33.751631   61699 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0924 01:04:33.751649   61699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0924 01:04:33.755330   61699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 01:04:33.812231   61699 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0924 01:04:33.812257   61699 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0924 01:04:33.847216   61699 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 01:04:33.847240   61699 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0924 01:04:33.932057   61699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 01:04:34.781871   61699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.026510893s)
	I0924 01:04:34.781939   61699 main.go:141] libmachine: Making call to close driver server
	I0924 01:04:34.781950   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .Close
	I0924 01:04:34.781887   61699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.032127769s)
	I0924 01:04:34.782009   61699 main.go:141] libmachine: Making call to close driver server
	I0924 01:04:34.782023   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .Close
	I0924 01:04:34.782293   61699 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:04:34.782309   61699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:04:34.782318   61699 main.go:141] libmachine: Making call to close driver server
	I0924 01:04:34.782326   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .Close
	I0924 01:04:34.782361   61699 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:04:34.782369   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | Closing plugin on server side
	I0924 01:04:34.782375   61699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:04:34.782389   61699 main.go:141] libmachine: Making call to close driver server
	I0924 01:04:34.782404   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .Close
	I0924 01:04:34.782629   61699 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:04:34.782643   61699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:04:34.782645   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | Closing plugin on server side
	I0924 01:04:34.782673   61699 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:04:34.782683   61699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:04:34.790740   61699 main.go:141] libmachine: Making call to close driver server
	I0924 01:04:34.790757   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .Close
	I0924 01:04:34.790990   61699 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:04:34.791010   61699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:04:34.791013   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | Closing plugin on server side
	I0924 01:04:34.871488   61699 main.go:141] libmachine: Making call to close driver server
	I0924 01:04:34.871516   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .Close
	I0924 01:04:34.871809   61699 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:04:34.871826   61699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:04:34.871834   61699 main.go:141] libmachine: Making call to close driver server
	I0924 01:04:34.871841   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .Close
	I0924 01:04:34.872103   61699 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:04:34.872125   61699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:04:34.872117   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | Closing plugin on server side
	I0924 01:04:34.872136   61699 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-465341"
	I0924 01:04:34.874133   61699 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0924 01:04:30.907606   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:33.406280   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:31.337368   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:31.338025   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:31.338128   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:31.338011   62886 retry.go:31] will retry after 4.137847727s: waiting for machine to come up
	I0924 01:04:35.478410   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.478991   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has current primary IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.479016   61989 main.go:141] libmachine: (old-k8s-version-171598) Found IP for machine: 192.168.83.3
	I0924 01:04:35.479029   61989 main.go:141] libmachine: (old-k8s-version-171598) Reserving static IP address...
	I0924 01:04:35.479586   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "old-k8s-version-171598", mac: "52:54:00:20:3c:a7", ip: "192.168.83.3"} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:35.479607   61989 main.go:141] libmachine: (old-k8s-version-171598) Reserved static IP address: 192.168.83.3
	I0924 01:04:35.479626   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | skip adding static IP to network mk-old-k8s-version-171598 - found existing host DHCP lease matching {name: "old-k8s-version-171598", mac: "52:54:00:20:3c:a7", ip: "192.168.83.3"}
	I0924 01:04:35.479643   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | Getting to WaitForSSH function...
	I0924 01:04:35.479659   61989 main.go:141] libmachine: (old-k8s-version-171598) Waiting for SSH to be available...
	I0924 01:04:35.482028   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.482377   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:35.482419   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.482499   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | Using SSH client type: external
	I0924 01:04:35.482550   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | Using SSH private key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa (-rw-------)
	I0924 01:04:35.482585   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 01:04:35.482600   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | About to run SSH command:
	I0924 01:04:35.482614   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | exit 0
	I0924 01:04:35.613364   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | SSH cmd err, output: <nil>: 
	I0924 01:04:35.613847   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetConfigRaw
	I0924 01:04:35.614543   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetIP
	I0924 01:04:35.617366   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.617742   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:35.617774   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.618068   61989 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/config.json ...
	I0924 01:04:35.618260   61989 machine.go:93] provisionDockerMachine start ...
	I0924 01:04:35.618279   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:04:35.618489   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:35.621130   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.621472   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:35.621497   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.621722   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:35.621914   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:35.622091   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:35.622354   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:35.622558   61989 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:35.622749   61989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.83.3 22 <nil> <nil>}
	I0924 01:04:35.622760   61989 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 01:04:35.736637   61989 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 01:04:35.736661   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetMachineName
	I0924 01:04:35.736943   61989 buildroot.go:166] provisioning hostname "old-k8s-version-171598"
	I0924 01:04:35.736973   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetMachineName
	I0924 01:04:35.737151   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:35.739921   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.740304   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:35.740362   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.740502   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:35.740678   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:35.740851   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:35.740994   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:35.741218   61989 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:35.741409   61989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.83.3 22 <nil> <nil>}
	I0924 01:04:35.741423   61989 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-171598 && echo "old-k8s-version-171598" | sudo tee /etc/hostname
	I0924 01:04:35.866963   61989 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-171598
	
	I0924 01:04:35.866994   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:35.870342   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.870860   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:35.870893   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.871145   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:35.871406   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:35.871638   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:35.871850   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:35.872050   61989 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:35.872253   61989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.83.3 22 <nil> <nil>}
	I0924 01:04:35.872276   61989 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-171598' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-171598/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-171598' | sudo tee -a /etc/hosts; 
				fi
			fi
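
The SSH command above is an idempotent /etc/hosts rewrite: it maps 127.0.1.1 to the new hostname only when no matching entry already exists. A minimal Go sketch of composing that same snippet (the hostsUpdateCmd helper and the printed hostname are illustrative, not minikube's actual provisioner code):

// Sketch: build the idempotent /etc/hosts update that the provisioner runs
// over SSH above. hostsUpdateCmd is a hypothetical helper for illustration.
package main

import "fmt"

// hostsUpdateCmd returns a shell snippet that maps 127.0.1.1 to hostname
// only if no entry for it exists yet, mirroring the logged command.
func hostsUpdateCmd(hostname string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
}

func main() {
	fmt.Println(hostsUpdateCmd("old-k8s-version-171598"))
}
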
	I0924 01:04:36.717274   61070 start.go:364] duration metric: took 55.446152288s to acquireMachinesLock for "no-preload-674057"
	I0924 01:04:36.717335   61070 start.go:96] Skipping create...Using existing machine configuration
	I0924 01:04:36.717344   61070 fix.go:54] fixHost starting: 
	I0924 01:04:36.717781   61070 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:04:36.717821   61070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:04:36.739062   61070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46693
	I0924 01:04:36.739602   61070 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:04:36.740307   61070 main.go:141] libmachine: Using API Version  1
	I0924 01:04:36.740366   61070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:04:36.740767   61070 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:04:36.741058   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:04:36.741223   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetState
	I0924 01:04:36.743313   61070 fix.go:112] recreateIfNeeded on no-preload-674057: state=Stopped err=<nil>
	I0924 01:04:36.743339   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	W0924 01:04:36.743512   61070 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 01:04:36.745694   61070 out.go:177] * Restarting existing kvm2 VM for "no-preload-674057" ...
	I0924 01:04:35.998933   61989 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 01:04:35.998962   61989 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19696-7623/.minikube CaCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19696-7623/.minikube}
	I0924 01:04:35.998983   61989 buildroot.go:174] setting up certificates
	I0924 01:04:35.998994   61989 provision.go:84] configureAuth start
	I0924 01:04:35.999005   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetMachineName
	I0924 01:04:35.999359   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetIP
	I0924 01:04:36.002499   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.003027   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.003052   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.003167   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:36.005508   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.005773   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.005796   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.005909   61989 provision.go:143] copyHostCerts
	I0924 01:04:36.005967   61989 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem, removing ...
	I0924 01:04:36.005986   61989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0924 01:04:36.006037   61989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem (1082 bytes)
	I0924 01:04:36.006129   61989 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem, removing ...
	I0924 01:04:36.006137   61989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0924 01:04:36.006156   61989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem (1123 bytes)
	I0924 01:04:36.006209   61989 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem, removing ...
	I0924 01:04:36.006216   61989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0924 01:04:36.006237   61989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem (1679 bytes)
	I0924 01:04:36.006310   61989 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-171598 san=[127.0.0.1 192.168.83.3 localhost minikube old-k8s-version-171598]
	I0924 01:04:36.084609   61989 provision.go:177] copyRemoteCerts
	I0924 01:04:36.084671   61989 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 01:04:36.084698   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:36.087740   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.088046   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.088075   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.088278   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:36.088523   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.088716   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:36.088854   61989 sshutil.go:53] new ssh client: &{IP:192.168.83.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa Username:docker}
	I0924 01:04:36.178597   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0924 01:04:36.202768   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0924 01:04:36.225933   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0924 01:04:36.250014   61989 provision.go:87] duration metric: took 251.005829ms to configureAuth
	I0924 01:04:36.250046   61989 buildroot.go:189] setting minikube options for container-runtime
	I0924 01:04:36.250369   61989 config.go:182] Loaded profile config "old-k8s-version-171598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0924 01:04:36.250453   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:36.253290   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.253912   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.253943   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.254242   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:36.254474   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.254650   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.254764   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:36.254958   61989 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:36.255124   61989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.83.3 22 <nil> <nil>}
	I0924 01:04:36.255138   61989 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 01:04:36.472324   61989 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 01:04:36.472381   61989 machine.go:96] duration metric: took 854.106776ms to provisionDockerMachine
	I0924 01:04:36.472401   61989 start.go:293] postStartSetup for "old-k8s-version-171598" (driver="kvm2")
	I0924 01:04:36.472419   61989 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 01:04:36.472451   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:04:36.472814   61989 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 01:04:36.472849   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:36.475567   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.475941   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.475969   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.476125   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:36.476403   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.476614   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:36.476831   61989 sshutil.go:53] new ssh client: &{IP:192.168.83.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa Username:docker}
	I0924 01:04:36.562688   61989 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 01:04:36.566476   61989 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 01:04:36.566501   61989 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/addons for local assets ...
	I0924 01:04:36.566561   61989 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/files for local assets ...
	I0924 01:04:36.566635   61989 filesync.go:149] local asset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> 147932.pem in /etc/ssl/certs
	I0924 01:04:36.566724   61989 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 01:04:36.576132   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /etc/ssl/certs/147932.pem (1708 bytes)
	I0924 01:04:36.599696   61989 start.go:296] duration metric: took 127.276787ms for postStartSetup
	I0924 01:04:36.599738   61989 fix.go:56] duration metric: took 20.366477202s for fixHost
	I0924 01:04:36.599763   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:36.603462   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.603836   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.603867   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.604057   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:36.604500   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.604721   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.604878   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:36.605041   61989 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:36.605285   61989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.83.3 22 <nil> <nil>}
	I0924 01:04:36.605303   61989 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 01:04:36.717061   61989 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727139876.688490589
	
	I0924 01:04:36.717091   61989 fix.go:216] guest clock: 1727139876.688490589
	I0924 01:04:36.717102   61989 fix.go:229] Guest: 2024-09-24 01:04:36.688490589 +0000 UTC Remote: 2024-09-24 01:04:36.599742488 +0000 UTC m=+235.652611441 (delta=88.748101ms)
	I0924 01:04:36.717157   61989 fix.go:200] guest clock delta is within tolerance: 88.748101ms
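
The fix.go lines above compare the guest's `date +%s.%N` output against the host-side timestamp and accept the machine when the skew is small (here 88.748101ms). A small Go sketch of that comparison, assuming an illustrative 1s tolerance (the actual tolerance bound is not stated in the log):

// Sketch of the guest-vs-host clock check; tolerance value is an assumption.
package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// guestClockDelta parses the guest's "seconds.nanoseconds" clock reading and
// returns its offset from the host-side timestamp.
func guestClockDelta(guestUnix string, hostTime time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestUnix, 64)
	if err != nil {
		return 0, fmt.Errorf("parsing guest clock %q: %w", guestUnix, err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(hostTime), nil
}

func main() {
	// Values taken from the log lines above.
	host := time.Date(2024, 9, 24, 1, 4, 36, 599742488, time.UTC)
	delta, err := guestClockDelta("1727139876.688490589", host)
	if err != nil {
		panic(err)
	}
	const tolerance = time.Second // assumed illustrative bound
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n",
		delta, math.Abs(float64(delta)) <= float64(tolerance))
}
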
	I0924 01:04:36.717165   61989 start.go:83] releasing machines lock for "old-k8s-version-171598", held for 20.483937438s
	I0924 01:04:36.717199   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:04:36.717499   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetIP
	I0924 01:04:36.720466   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.720959   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.720986   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.721189   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:04:36.721763   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:04:36.721965   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:04:36.722073   61989 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 01:04:36.722118   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:36.722187   61989 ssh_runner.go:195] Run: cat /version.json
	I0924 01:04:36.722215   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:36.725171   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.725384   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.725669   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.725694   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.725858   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:36.725970   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.726016   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.726065   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.726249   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:36.726254   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:36.726494   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.726513   61989 sshutil.go:53] new ssh client: &{IP:192.168.83.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa Username:docker}
	I0924 01:04:36.726657   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:36.727049   61989 sshutil.go:53] new ssh client: &{IP:192.168.83.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa Username:docker}
	I0924 01:04:36.845385   61989 ssh_runner.go:195] Run: systemctl --version
	I0924 01:04:36.853307   61989 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 01:04:37.001850   61989 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 01:04:37.009873   61989 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 01:04:37.009948   61989 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 01:04:37.032269   61989 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 01:04:37.032299   61989 start.go:495] detecting cgroup driver to use...
	I0924 01:04:37.032403   61989 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 01:04:37.056250   61989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 01:04:37.072827   61989 docker.go:217] disabling cri-docker service (if available) ...
	I0924 01:04:37.072903   61989 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 01:04:37.090639   61989 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 01:04:37.107525   61989 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 01:04:37.235495   61989 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 01:04:37.410971   61989 docker.go:233] disabling docker service ...
	I0924 01:04:37.411034   61989 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 01:04:37.427815   61989 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 01:04:37.444121   61989 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 01:04:37.568933   61989 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 01:04:37.700008   61989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 01:04:37.715529   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 01:04:37.736908   61989 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0924 01:04:37.736980   61989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:37.748540   61989 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 01:04:37.748590   61989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:37.759301   61989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:37.771008   61989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:37.782080   61989 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 01:04:37.793756   61989 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 01:04:37.803444   61989 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 01:04:37.803525   61989 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 01:04:37.818012   61989 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 01:04:37.829019   61989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:04:37.978885   61989 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 01:04:38.086263   61989 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 01:04:38.086353   61989 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 01:04:38.093479   61989 start.go:563] Will wait 60s for crictl version
	I0924 01:04:38.093573   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:38.097486   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 01:04:38.138781   61989 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 01:04:38.138872   61989 ssh_runner.go:195] Run: crio --version
	I0924 01:04:38.166832   61989 ssh_runner.go:195] Run: crio --version
	I0924 01:04:38.199764   61989 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
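
The "Will wait 60s for socket path /var/run/crio/crio.sock" step above is a simple poll-until-exists loop before crictl is queried. A self-contained Go sketch of that pattern (the poll interval is an assumption; this is not minikube's start.go):

// Sketch: wait for a CRI socket to appear before using it.
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for path until it exists or timeout elapses.
func waitForSocket(path string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second, 500*time.Millisecond); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is ready")
}

Polling stat is used here (as in the log) because the runtime gives no direct readiness signal other than the socket file appearing.
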
	I0924 01:04:36.747491   61070 main.go:141] libmachine: (no-preload-674057) Calling .Start
	I0924 01:04:36.747705   61070 main.go:141] libmachine: (no-preload-674057) Ensuring networks are active...
	I0924 01:04:36.748694   61070 main.go:141] libmachine: (no-preload-674057) Ensuring network default is active
	I0924 01:04:36.749079   61070 main.go:141] libmachine: (no-preload-674057) Ensuring network mk-no-preload-674057 is active
	I0924 01:04:36.749656   61070 main.go:141] libmachine: (no-preload-674057) Getting domain xml...
	I0924 01:04:36.750535   61070 main.go:141] libmachine: (no-preload-674057) Creating domain...
	I0924 01:04:38.122450   61070 main.go:141] libmachine: (no-preload-674057) Waiting to get IP...
	I0924 01:04:38.123578   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:38.124107   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:38.124173   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:38.124079   63121 retry.go:31] will retry after 227.552582ms: waiting for machine to come up
	I0924 01:04:38.353724   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:38.354145   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:38.354169   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:38.354102   63121 retry.go:31] will retry after 322.483933ms: waiting for machine to come up
	I0924 01:04:38.678600   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:38.679091   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:38.679120   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:38.679041   63121 retry.go:31] will retry after 301.71366ms: waiting for machine to come up
	I0924 01:04:34.875511   61699 addons.go:510] duration metric: took 1.43884954s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0924 01:04:35.671396   61699 node_ready.go:53] node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:38.169131   61699 node_ready.go:53] node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:35.907681   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:38.408396   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:38.201359   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetIP
	I0924 01:04:38.204699   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:38.205122   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:38.205152   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:38.205408   61989 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0924 01:04:38.209456   61989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:04:38.222128   61989 kubeadm.go:883] updating cluster {Name:old-k8s-version-171598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-171598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 01:04:38.222254   61989 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0924 01:04:38.222300   61989 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 01:04:38.276802   61989 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0924 01:04:38.276864   61989 ssh_runner.go:195] Run: which lz4
	I0924 01:04:38.280989   61989 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 01:04:38.285108   61989 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 01:04:38.285138   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0924 01:04:39.903777   61989 crio.go:462] duration metric: took 1.62282331s to copy over tarball
	I0924 01:04:39.903900   61989 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
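
The preload flow above checks for /preloaded.tar.lz4 on the node, copies it over when missing, then unpacks it into /var with tar. A short Go sketch of the extraction step (the scp step is omitted; paths and the extractPreload helper are illustrative, shelling out the same way the logged command does):

// Sketch: unpack the lz4 preload tarball into the container-runtime root.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func extractPreload(tarball, dest string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("preload tarball missing (would be copied over first): %w", err)
	}
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dest, "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
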
	I0924 01:04:38.982586   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:38.983239   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:38.983283   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:38.983219   63121 retry.go:31] will retry after 402.217062ms: waiting for machine to come up
	I0924 01:04:39.386903   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:39.387550   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:39.387578   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:39.387483   63121 retry.go:31] will retry after 734.565994ms: waiting for machine to come up
	I0924 01:04:40.123444   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:40.123910   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:40.123940   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:40.123870   63121 retry.go:31] will retry after 704.281941ms: waiting for machine to come up
	I0924 01:04:40.829666   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:40.830217   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:40.830275   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:40.830209   63121 retry.go:31] will retry after 1.068502434s: waiting for machine to come up
	I0924 01:04:41.900192   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:41.900739   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:41.900765   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:41.900691   63121 retry.go:31] will retry after 1.087234201s: waiting for machine to come up
	I0924 01:04:42.989622   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:42.990089   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:42.990117   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:42.990036   63121 retry.go:31] will retry after 1.269273138s: waiting for machine to come up
	I0924 01:04:39.168613   61699 node_ready.go:49] node "default-k8s-diff-port-465341" has status "Ready":"True"
	I0924 01:04:39.168638   61699 node_ready.go:38] duration metric: took 5.504799687s for node "default-k8s-diff-port-465341" to be "Ready" ...
	I0924 01:04:39.168650   61699 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:04:39.175830   61699 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xxdh2" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:39.182016   61699 pod_ready.go:93] pod "coredns-7c65d6cfc9-xxdh2" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:39.182040   61699 pod_ready.go:82] duration metric: took 6.182193ms for pod "coredns-7c65d6cfc9-xxdh2" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:39.182052   61699 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:39.188162   61699 pod_ready.go:93] pod "etcd-default-k8s-diff-port-465341" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:39.188191   61699 pod_ready.go:82] duration metric: took 6.130794ms for pod "etcd-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:39.188201   61699 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:39.196197   61699 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-465341" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:39.196225   61699 pod_ready.go:82] duration metric: took 8.016123ms for pod "kube-apiserver-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:39.196238   61699 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:40.703747   61699 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-465341" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:40.703776   61699 pod_ready.go:82] duration metric: took 1.507528182s for pod "kube-controller-manager-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:40.703791   61699 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nf8mp" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:40.771262   61699 pod_ready.go:93] pod "kube-proxy-nf8mp" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:40.771293   61699 pod_ready.go:82] duration metric: took 67.494606ms for pod "kube-proxy-nf8mp" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:40.771307   61699 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:42.778933   61699 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-465341" in "kube-system" namespace has status "Ready":"False"
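
The pod_ready.go lines above poll each system pod until its Ready condition reports True (or a timeout expires). A hedged client-go sketch of that check; the kubeconfig path, namespace, pod name, and 2s interval are examples, not the test harness's actual code:

// Sketch: wait for a pod's Ready condition using client-go.
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's Ready condition is True.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitForPodReady polls the pod until it is Ready or the timeout expires.
func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil && podIsReady(pod) {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := waitForPodReady(context.Background(), cs, "kube-system",
		"kube-scheduler-default-k8s-diff-port-465341", 6*time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("pod is Ready")
}
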
	I0924 01:04:40.908876   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:43.409650   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:42.944929   61989 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.040984911s)
	I0924 01:04:42.944969   61989 crio.go:469] duration metric: took 3.041152253s to extract the tarball
	I0924 01:04:42.944981   61989 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0924 01:04:42.988315   61989 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 01:04:43.036011   61989 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0924 01:04:43.036045   61989 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0924 01:04:43.036151   61989 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:04:43.036194   61989 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0924 01:04:43.036211   61989 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0924 01:04:43.036281   61989 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 01:04:43.036301   61989 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0924 01:04:43.036344   61989 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0924 01:04:43.036310   61989 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0924 01:04:43.036577   61989 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0924 01:04:43.038440   61989 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0924 01:04:43.038458   61989 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0924 01:04:43.038482   61989 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0924 01:04:43.038502   61989 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0924 01:04:43.038554   61989 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0924 01:04:43.038588   61989 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 01:04:43.038600   61989 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0924 01:04:43.038816   61989 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:04:43.306768   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0924 01:04:43.309660   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0924 01:04:43.312684   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 01:04:43.314551   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0924 01:04:43.317719   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0924 01:04:43.326063   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0924 01:04:43.378736   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0924 01:04:43.405508   61989 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0924 01:04:43.405585   61989 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0924 01:04:43.405648   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:43.452908   61989 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0924 01:04:43.452954   61989 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0924 01:04:43.453006   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:43.471293   61989 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0924 01:04:43.471341   61989 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0924 01:04:43.471347   61989 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0924 01:04:43.471370   61989 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0924 01:04:43.471297   61989 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0924 01:04:43.471406   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:43.471421   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:43.471423   61989 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 01:04:43.471462   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:43.494687   61989 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0924 01:04:43.494735   61989 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0924 01:04:43.494782   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:43.508206   61989 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0924 01:04:43.508253   61989 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0924 01:04:43.508278   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0924 01:04:43.508298   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:43.508363   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0924 01:04:43.508419   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0924 01:04:43.508451   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0924 01:04:43.508487   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 01:04:43.508547   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0924 01:04:43.645995   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0924 01:04:43.646039   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0924 01:04:43.646098   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0924 01:04:43.646152   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0924 01:04:43.646261   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0924 01:04:43.646337   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0924 01:04:43.646413   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 01:04:43.817326   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0924 01:04:43.817416   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0924 01:04:43.817381   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0924 01:04:43.817508   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 01:04:43.817449   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0924 01:04:43.817597   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0924 01:04:43.817686   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0924 01:04:43.972782   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0924 01:04:43.972792   61989 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0924 01:04:43.972869   61989 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0924 01:04:43.972838   61989 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0924 01:04:43.972928   61989 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0924 01:04:43.972944   61989 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0924 01:04:43.973027   61989 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0924 01:04:44.008191   61989 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0924 01:04:44.220628   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:04:44.364297   61989 cache_images.go:92] duration metric: took 1.328227964s to LoadCachedImages
	W0924 01:04:44.364505   61989 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
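The stat failure above means the per-image tarballs under .minikube/cache/images/amd64 were never downloaded for this profile, so minikube logs the warning and lets the container runtime pull the images later instead. A minimal sketch of inspecting and repopulating that cache from the host, assuming the minikube CLI is on the PATH and using the profile name from this log (both commands are illustrative, not part of the test):

    # List what is actually cached for this architecture (path taken from the log line above):
    ls -lh /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/

    # Push one image into the running cluster directly, bypassing the cache
    # (`minikube cache add` is the older spelling of roughly the same operation):
    minikube -p old-k8s-version-171598 image load registry.k8s.io/kube-controller-manager:v1.20.0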
	I0924 01:04:44.364539   61989 kubeadm.go:934] updating node { 192.168.83.3 8443 v1.20.0 crio true true} ...
	I0924 01:04:44.364681   61989 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-171598 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-171598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
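The [Unit]/[Service] text above is the kubelet drop-in that gets written to the node a few lines below (10-kubeadm.conf plus kubelet.service). A quick way to confirm what systemd actually loaded, assuming SSH access to the node via the minikube CLI; the paths come from the scp steps later in this log:

    # minikube -p old-k8s-version-171598 ssh, then on the node:
    sudo systemctl cat kubelet                                        # unit file plus all drop-ins
    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf    # should show the ExecStart above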
	I0924 01:04:44.364824   61989 ssh_runner.go:195] Run: crio config
	I0924 01:04:44.423360   61989 cni.go:84] Creating CNI manager for ""
	I0924 01:04:44.423382   61989 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:04:44.423393   61989 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 01:04:44.423412   61989 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.3 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-171598 NodeName:old-k8s-version-171598 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0924 01:04:44.423593   61989 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-171598"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 01:04:44.423671   61989 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0924 01:04:44.434069   61989 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 01:04:44.434143   61989 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 01:04:44.443807   61989 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (428 bytes)
	I0924 01:04:44.463473   61989 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 01:04:44.480449   61989 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2117 bytes)
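The 2117-byte kubeadm.yaml.new copied above is the multi-document config (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) dumped a few lines earlier. For a side-by-side look at what kubeadm itself would use by default, one could print the stock v1beta2 defaults with the same binary path the log uses; this is only a comparison aid, not something the test runs:

    # On the node, print kubeadm's built-in defaults for this API version:
    sudo /var/lib/minikube/binaries/v1.20.0/kubeadm config print init-defaults
    # ...and compare against /var/tmp/minikube/kubeadm.yaml.new written above.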
	I0924 01:04:44.498520   61989 ssh_runner.go:195] Run: grep 192.168.83.3	control-plane.minikube.internal$ /etc/hosts
	I0924 01:04:44.503034   61989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.3	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
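The one-liner above strips any stale control-plane.minikube.internal entry from /etc/hosts and re-appends it pinned to 192.168.83.3. Verifying the result on the node is straightforward (illustrative commands, not from the test):

    grep control-plane.minikube.internal /etc/hosts
    getent hosts control-plane.minikube.internal    # expected: 192.168.83.3  control-plane.minikube.internal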
	I0924 01:04:44.516699   61989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:04:44.643090   61989 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 01:04:44.660194   61989 certs.go:68] Setting up /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598 for IP: 192.168.83.3
	I0924 01:04:44.660216   61989 certs.go:194] generating shared ca certs ...
	I0924 01:04:44.660234   61989 certs.go:226] acquiring lock for ca certs: {Name:mk91de42c259e96c17f08dd58c70c00821bd0735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:04:44.660454   61989 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key
	I0924 01:04:44.660542   61989 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key
	I0924 01:04:44.660559   61989 certs.go:256] generating profile certs ...
	I0924 01:04:44.660682   61989 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/client.key
	I0924 01:04:44.660755   61989 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/apiserver.key.577554d3
	I0924 01:04:44.660816   61989 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/proxy-client.key
	I0924 01:04:44.660976   61989 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem (1338 bytes)
	W0924 01:04:44.661014   61989 certs.go:480] ignoring /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793_empty.pem, impossibly tiny 0 bytes
	I0924 01:04:44.661026   61989 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem (1675 bytes)
	I0924 01:04:44.661071   61989 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem (1082 bytes)
	I0924 01:04:44.661104   61989 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem (1123 bytes)
	I0924 01:04:44.661133   61989 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem (1679 bytes)
	I0924 01:04:44.661211   61989 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem (1708 bytes)
	I0924 01:04:44.662130   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 01:04:44.710279   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 01:04:44.736824   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 01:04:44.773120   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0924 01:04:44.801137   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0924 01:04:44.844946   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 01:04:44.880871   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 01:04:44.908630   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0924 01:04:44.947148   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 01:04:44.971925   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem --> /usr/share/ca-certificates/14793.pem (1338 bytes)
	I0924 01:04:45.000519   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /usr/share/ca-certificates/147932.pem (1708 bytes)
	I0924 01:04:45.034167   61989 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 01:04:45.054932   61989 ssh_runner.go:195] Run: openssl version
	I0924 01:04:45.062733   61989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14793.pem && ln -fs /usr/share/ca-certificates/14793.pem /etc/ssl/certs/14793.pem"
	I0924 01:04:45.076993   61989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14793.pem
	I0924 01:04:45.082104   61989 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 23:55 /usr/share/ca-certificates/14793.pem
	I0924 01:04:45.082175   61989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14793.pem
	I0924 01:04:45.088219   61989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14793.pem /etc/ssl/certs/51391683.0"
	I0924 01:04:45.099211   61989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147932.pem && ln -fs /usr/share/ca-certificates/147932.pem /etc/ssl/certs/147932.pem"
	I0924 01:04:45.111178   61989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147932.pem
	I0924 01:04:45.116551   61989 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 23:55 /usr/share/ca-certificates/147932.pem
	I0924 01:04:45.116624   61989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147932.pem
	I0924 01:04:45.122353   61989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147932.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 01:04:45.133490   61989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 01:04:45.144123   61989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:45.150437   61989 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:45.150498   61989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:45.157127   61989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
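The openssl -hash / ln -fs pairs above implement OpenSSL's subject-hash lookup convention: CA certificates in /etc/ssl/certs are found through a symlink named <subject-hash>.0, which is what c_rehash automates. Spelled out as a standalone sketch for one of the certs above:

    # Compute the subject hash and create the lookup link, mirroring the log's steps:
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # b5213941.0 in this run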
	I0924 01:04:45.168217   61989 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 01:04:45.172865   61989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 01:04:45.179177   61989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 01:04:45.184987   61989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 01:04:45.190927   61989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 01:04:45.197134   61989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 01:04:45.203170   61989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
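Each certificate is checked with `openssl x509 -checkend 86400`, i.e. "will this still be valid 24 hours from now?". The answer is carried entirely in the exit status, which is why these runs produce no output in the log; a small sketch of the same check with an explicit branch:

    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
        echo "still valid for at least another 24h"
    else
        echo "expires within 24h (or could not be read)"
    fi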
	I0924 01:04:45.209550   61989 kubeadm.go:392] StartCluster: {Name:old-k8s-version-171598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-171598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 01:04:45.209721   61989 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 01:04:45.209778   61989 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 01:04:45.247564   61989 cri.go:89] found id: ""
	I0924 01:04:45.247635   61989 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 01:04:45.258171   61989 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 01:04:45.258195   61989 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 01:04:45.258269   61989 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 01:04:45.268247   61989 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 01:04:45.269656   61989 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-171598" does not appear in /home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 01:04:45.270486   61989 kubeconfig.go:62] /home/jenkins/minikube-integration/19696-7623/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-171598" cluster setting kubeconfig missing "old-k8s-version-171598" context setting]
	I0924 01:04:45.271918   61989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/kubeconfig: {Name:mk3b29105ccc2e600682c419fcfd45ce5d22b5fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:04:45.277260   61989 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 01:04:45.287239   61989 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.83.3
	I0924 01:04:45.287271   61989 kubeadm.go:1160] stopping kube-system containers ...
	I0924 01:04:45.287281   61989 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0924 01:04:45.287325   61989 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 01:04:45.327991   61989 cri.go:89] found id: ""
	I0924 01:04:45.328071   61989 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 01:04:45.344693   61989 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 01:04:45.354414   61989 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 01:04:45.354439   61989 kubeadm.go:157] found existing configuration files:
	
	I0924 01:04:45.354499   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 01:04:45.363765   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 01:04:45.363838   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 01:04:45.373569   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 01:04:45.382401   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 01:04:45.382464   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 01:04:45.392710   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 01:04:45.402855   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 01:04:45.402919   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 01:04:45.413651   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 01:04:45.423818   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 01:04:45.423873   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 01:04:45.434138   61989 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 01:04:45.444119   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:45.582409   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:44.261681   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:44.262330   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:44.262360   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:44.262274   63121 retry.go:31] will retry after 1.755704993s: waiting for machine to come up
	I0924 01:04:46.019761   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:46.020213   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:46.020242   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:46.020155   63121 retry.go:31] will retry after 2.038509067s: waiting for machine to come up
	I0924 01:04:48.060649   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:48.061170   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:48.061201   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:48.061122   63121 retry.go:31] will retry after 2.834284151s: waiting for machine to come up
	I0924 01:04:45.021172   61699 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-465341" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:45.021200   61699 pod_ready.go:82] duration metric: took 4.249884358s for pod "kube-scheduler-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:45.021213   61699 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:47.028860   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:45.908530   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:48.407714   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:46.245754   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:46.511218   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:46.608877   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:46.722521   61989 api_server.go:52] waiting for apiserver process to appear ...
	I0924 01:04:46.722607   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:47.222945   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:47.723437   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:48.223704   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:48.723517   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:49.223744   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:49.722691   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:50.222927   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:50.723331   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
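The repeated pgrep runs above are the apiserver wait loop: it polls roughly twice a second until a kube-apiserver process whose full command line matches the pattern appears. The flags do the matching work and are easy to misread, so spelled out:

    # -f  match the pattern against the whole command line, not just the process name
    # -x  require the pattern to match that command line exactly (full-line match)
    # -n  print only the newest matching PID
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'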
	I0924 01:04:50.897541   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:50.898047   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:50.898093   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:50.898018   63121 retry.go:31] will retry after 4.166792416s: waiting for machine to come up
	I0924 01:04:49.530215   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:52.027812   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:50.907425   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:52.907568   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:54.908623   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:51.223525   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:51.722715   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:52.223281   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:52.723378   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:53.222798   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:53.722883   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:54.223279   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:54.723155   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:55.222994   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:55.723628   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:55.068642   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.069305   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has current primary IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.069330   61070 main.go:141] libmachine: (no-preload-674057) Found IP for machine: 192.168.50.161
	I0924 01:04:55.069339   61070 main.go:141] libmachine: (no-preload-674057) Reserving static IP address...
	I0924 01:04:55.070035   61070 main.go:141] libmachine: (no-preload-674057) Reserved static IP address: 192.168.50.161
	I0924 01:04:55.070065   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "no-preload-674057", mac: "52:54:00:01:7a:1a", ip: "192.168.50.161"} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.070073   61070 main.go:141] libmachine: (no-preload-674057) Waiting for SSH to be available...
	I0924 01:04:55.070090   61070 main.go:141] libmachine: (no-preload-674057) DBG | skip adding static IP to network mk-no-preload-674057 - found existing host DHCP lease matching {name: "no-preload-674057", mac: "52:54:00:01:7a:1a", ip: "192.168.50.161"}
	I0924 01:04:55.070095   61070 main.go:141] libmachine: (no-preload-674057) DBG | Getting to WaitForSSH function...
	I0924 01:04:55.072715   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.073106   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.073140   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.073351   61070 main.go:141] libmachine: (no-preload-674057) DBG | Using SSH client type: external
	I0924 01:04:55.073379   61070 main.go:141] libmachine: (no-preload-674057) DBG | Using SSH private key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/no-preload-674057/id_rsa (-rw-------)
	I0924 01:04:55.073405   61070 main.go:141] libmachine: (no-preload-674057) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.161 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19696-7623/.minikube/machines/no-preload-674057/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 01:04:55.073444   61070 main.go:141] libmachine: (no-preload-674057) DBG | About to run SSH command:
	I0924 01:04:55.073462   61070 main.go:141] libmachine: (no-preload-674057) DBG | exit 0
	I0924 01:04:55.200585   61070 main.go:141] libmachine: (no-preload-674057) DBG | SSH cmd err, output: <nil>: 
	I0924 01:04:55.200980   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetConfigRaw
	I0924 01:04:55.201650   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetIP
	I0924 01:04:55.204919   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.205340   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.205360   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.205638   61070 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/config.json ...
	I0924 01:04:55.205881   61070 machine.go:93] provisionDockerMachine start ...
	I0924 01:04:55.205903   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:04:55.206124   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:55.208572   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.209012   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.209037   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.209218   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:04:55.209499   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:55.209693   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:55.209832   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:04:55.210010   61070 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:55.210249   61070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0924 01:04:55.210263   61070 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 01:04:55.317027   61070 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 01:04:55.317067   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetMachineName
	I0924 01:04:55.317403   61070 buildroot.go:166] provisioning hostname "no-preload-674057"
	I0924 01:04:55.317441   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetMachineName
	I0924 01:04:55.317700   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:55.320886   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.321301   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.321330   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.321443   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:04:55.321643   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:55.321853   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:55.322010   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:04:55.322169   61070 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:55.322343   61070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0924 01:04:55.322360   61070 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-674057 && echo "no-preload-674057" | sudo tee /etc/hostname
	I0924 01:04:55.439098   61070 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-674057
	
	I0924 01:04:55.439134   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:55.441909   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.442212   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.442256   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.442430   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:04:55.442667   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:55.442890   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:55.443078   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:04:55.443301   61070 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:55.443460   61070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0924 01:04:55.443474   61070 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-674057' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-674057/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-674057' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 01:04:55.558172   61070 main.go:141] libmachine: SSH cmd err, output: <nil>: 
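The hosts snippet a few lines up follows the common Debian-style convention of mapping the machine's own hostname to 127.0.1.1 when no other entry exists. After provisioning, the effect can be checked on the node with (illustrative only):

    hostname                          # no-preload-674057
    getent hosts no-preload-674057    # expected to show the 127.0.1.1 mapping added above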
	I0924 01:04:55.558204   61070 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19696-7623/.minikube CaCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19696-7623/.minikube}
	I0924 01:04:55.558225   61070 buildroot.go:174] setting up certificates
	I0924 01:04:55.558236   61070 provision.go:84] configureAuth start
	I0924 01:04:55.558248   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetMachineName
	I0924 01:04:55.558574   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetIP
	I0924 01:04:55.561503   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.561891   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.561917   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.562089   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:55.564426   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.564800   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.564825   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.564958   61070 provision.go:143] copyHostCerts
	I0924 01:04:55.565009   61070 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem, removing ...
	I0924 01:04:55.565018   61070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0924 01:04:55.565074   61070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem (1082 bytes)
	I0924 01:04:55.565167   61070 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem, removing ...
	I0924 01:04:55.565175   61070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0924 01:04:55.565194   61070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem (1123 bytes)
	I0924 01:04:55.565253   61070 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem, removing ...
	I0924 01:04:55.565263   61070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0924 01:04:55.565285   61070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem (1679 bytes)
	I0924 01:04:55.565372   61070 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem org=jenkins.no-preload-674057 san=[127.0.0.1 192.168.50.161 localhost minikube no-preload-674057]
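provision.go generates a Docker-machine-style server certificate whose SANs are listed in the line above (loopback, the VM IP, and the hostnames). One way to double-check what actually ended up in the cert, using the ServerCertPath shown earlier in this log (illustrative, not run by the test):

    openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem \
        | grep -A1 'Subject Alternative Name'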
	I0924 01:04:55.649690   61070 provision.go:177] copyRemoteCerts
	I0924 01:04:55.649750   61070 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 01:04:55.649774   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:55.652790   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.653249   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.653278   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.653567   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:04:55.653772   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:55.653936   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:04:55.654059   61070 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/no-preload-674057/id_rsa Username:docker}
	I0924 01:04:55.738522   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0924 01:04:55.764045   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0924 01:04:55.788225   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0924 01:04:55.811207   61070 provision.go:87] duration metric: took 252.958643ms to configureAuth
	I0924 01:04:55.811233   61070 buildroot.go:189] setting minikube options for container-runtime
	I0924 01:04:55.811415   61070 config.go:182] Loaded profile config "no-preload-674057": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 01:04:55.811503   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:55.814921   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.815366   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.815400   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.815597   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:04:55.815826   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:55.816039   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:55.816212   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:04:55.816496   61070 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:55.816740   61070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0924 01:04:55.816756   61070 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 01:04:56.045600   61070 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 01:04:56.045632   61070 machine.go:96] duration metric: took 839.736907ms to provisionDockerMachine
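A few lines up, the SSH command writes /etc/sysconfig/crio.minikube with the extra --insecure-registry option and restarts crio; the echoed file contents confirm the write. To verify on the node that the runtime came back with the option file in place, something like the following would do (illustrative, not part of the test):

    cat /etc/sysconfig/crio.minikube      # should contain the CRIO_MINIKUBE_OPTIONS line shown above
    sudo systemctl is-active crio         # "active" once the restart completed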
	I0924 01:04:56.045646   61070 start.go:293] postStartSetup for "no-preload-674057" (driver="kvm2")
	I0924 01:04:56.045660   61070 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 01:04:56.045679   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:04:56.045997   61070 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 01:04:56.046027   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:56.049081   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.049522   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:56.049559   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.049743   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:04:56.049960   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:56.050105   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:04:56.050245   61070 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/no-preload-674057/id_rsa Username:docker}
	I0924 01:04:56.136652   61070 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 01:04:56.140894   61070 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 01:04:56.140920   61070 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/addons for local assets ...
	I0924 01:04:56.140987   61070 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/files for local assets ...
	I0924 01:04:56.141071   61070 filesync.go:149] local asset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> 147932.pem in /etc/ssl/certs
	I0924 01:04:56.141161   61070 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 01:04:56.151170   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /etc/ssl/certs/147932.pem (1708 bytes)
	I0924 01:04:56.179268   61070 start.go:296] duration metric: took 133.605527ms for postStartSetup
	I0924 01:04:56.179318   61070 fix.go:56] duration metric: took 19.461975001s for fixHost
	I0924 01:04:56.179344   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:56.182567   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.182902   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:56.182927   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.183091   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:04:56.183320   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:56.183562   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:56.183720   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:04:56.183865   61070 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:56.184036   61070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0924 01:04:56.184045   61070 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 01:04:56.289079   61070 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727139896.261476318
	
	I0924 01:04:56.289113   61070 fix.go:216] guest clock: 1727139896.261476318
	I0924 01:04:56.289121   61070 fix.go:229] Guest: 2024-09-24 01:04:56.261476318 +0000 UTC Remote: 2024-09-24 01:04:56.17932382 +0000 UTC m=+357.500342999 (delta=82.152498ms)
	I0924 01:04:56.289141   61070 fix.go:200] guest clock delta is within tolerance: 82.152498ms
	I0924 01:04:56.289156   61070 start.go:83] releasing machines lock for "no-preload-674057", held for 19.57184993s
	I0924 01:04:56.289175   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:04:56.289441   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetIP
	I0924 01:04:56.292799   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.293122   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:56.293148   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.293327   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:04:56.293832   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:04:56.293990   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:04:56.294073   61070 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 01:04:56.294108   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:56.294271   61070 ssh_runner.go:195] Run: cat /version.json
	I0924 01:04:56.294299   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:56.296962   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.297113   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.297300   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:56.297325   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.297473   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:56.297504   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.297526   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:04:56.297665   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:04:56.297737   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:56.297858   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:56.297926   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:04:56.297968   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:04:56.298044   61070 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/no-preload-674057/id_rsa Username:docker}
	I0924 01:04:56.298139   61070 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/no-preload-674057/id_rsa Username:docker}
	I0924 01:04:56.373014   61070 ssh_runner.go:195] Run: systemctl --version
	I0924 01:04:56.412487   61070 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 01:04:56.558755   61070 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 01:04:56.565187   61070 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 01:04:56.565245   61070 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 01:04:56.582073   61070 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 01:04:56.582102   61070 start.go:495] detecting cgroup driver to use...
	I0924 01:04:56.582167   61070 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 01:04:56.597553   61070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 01:04:56.612515   61070 docker.go:217] disabling cri-docker service (if available) ...
	I0924 01:04:56.612564   61070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 01:04:56.627596   61070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 01:04:56.641619   61070 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 01:04:56.762636   61070 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 01:04:56.917742   61070 docker.go:233] disabling docker service ...
	I0924 01:04:56.917821   61070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 01:04:56.934585   61070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 01:04:56.949194   61070 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 01:04:57.085465   61070 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 01:04:57.230529   61070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 01:04:57.245369   61070 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 01:04:57.265137   61070 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 01:04:57.265196   61070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:57.276878   61070 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 01:04:57.276936   61070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:57.288934   61070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:57.300690   61070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:57.312392   61070 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 01:04:57.324491   61070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:57.335619   61070 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:57.352868   61070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:57.363280   61070 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 01:04:57.372811   61070 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 01:04:57.372866   61070 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 01:04:57.385797   61070 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 01:04:57.395936   61070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:04:57.532086   61070 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 01:04:57.628275   61070 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 01:04:57.628370   61070 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 01:04:57.633679   61070 start.go:563] Will wait 60s for crictl version
	I0924 01:04:57.633761   61070 ssh_runner.go:195] Run: which crictl
	I0924 01:04:57.637574   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 01:04:57.679667   61070 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 01:04:57.679756   61070 ssh_runner.go:195] Run: crio --version
	I0924 01:04:57.707710   61070 ssh_runner.go:195] Run: crio --version
	I0924 01:04:57.738651   61070 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 01:04:57.740120   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetIP
	I0924 01:04:57.743379   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:57.743783   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:57.743814   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:57.744048   61070 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0924 01:04:57.748516   61070 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:04:57.762723   61070 kubeadm.go:883] updating cluster {Name:no-preload-674057 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:no-preload-674057 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.161 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 01:04:57.762864   61070 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 01:04:57.762906   61070 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 01:04:57.798232   61070 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0924 01:04:57.798260   61070 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0924 01:04:57.798334   61070 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:04:57.798357   61070 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0924 01:04:57.798377   61070 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0924 01:04:57.798340   61070 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0924 01:04:57.798397   61070 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 01:04:57.798381   61070 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0924 01:04:57.798491   61070 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0924 01:04:57.798491   61070 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0924 01:04:57.799811   61070 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:04:57.799819   61070 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0924 01:04:57.799826   61070 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0924 01:04:57.799811   61070 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 01:04:57.799840   61070 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0924 01:04:57.799893   61070 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0924 01:04:57.799902   61070 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0924 01:04:57.799903   61070 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0924 01:04:58.027261   61070 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0924 01:04:58.028437   61070 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0924 01:04:58.051940   61070 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0924 01:04:58.082860   61070 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0924 01:04:58.088073   61070 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0924 01:04:58.088121   61070 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0924 01:04:58.088190   61070 ssh_runner.go:195] Run: which crictl
	I0924 01:04:58.095081   61070 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 01:04:58.098388   61070 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0924 01:04:58.152389   61070 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0924 01:04:58.190893   61070 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0924 01:04:58.190920   61070 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0924 01:04:58.190934   61070 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0924 01:04:58.190944   61070 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0924 01:04:58.190984   61070 ssh_runner.go:195] Run: which crictl
	I0924 01:04:58.191029   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0924 01:04:58.190988   61070 ssh_runner.go:195] Run: which crictl
	I0924 01:04:58.191080   61070 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0924 01:04:58.191109   61070 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 01:04:58.191134   61070 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0924 01:04:58.191144   61070 ssh_runner.go:195] Run: which crictl
	I0924 01:04:58.191157   61070 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0924 01:04:58.191185   61070 ssh_runner.go:195] Run: which crictl
	I0924 01:04:58.219642   61070 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0924 01:04:58.219689   61070 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0924 01:04:58.219703   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 01:04:58.219729   61070 ssh_runner.go:195] Run: which crictl
	I0924 01:04:58.219741   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0924 01:04:58.219745   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0924 01:04:58.250341   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0924 01:04:58.250394   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0924 01:04:58.320188   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0924 01:04:58.320222   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 01:04:58.320308   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0924 01:04:58.320394   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0924 01:04:58.383126   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0924 01:04:58.383327   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0924 01:04:58.453833   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 01:04:58.453918   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0924 01:04:58.453878   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0924 01:04:58.453923   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0924 01:04:58.499994   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0924 01:04:58.500027   61070 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0924 01:04:58.500119   61070 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0924 01:04:58.583372   61070 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0924 01:04:58.583491   61070 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0924 01:04:58.586213   61070 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0924 01:04:58.586281   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0924 01:04:58.586325   61070 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0924 01:04:58.586328   61070 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0924 01:04:58.586405   61070 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0924 01:04:58.616022   61070 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0924 01:04:58.616061   61070 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0924 01:04:58.616082   61070 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0924 01:04:58.616118   61070 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0924 01:04:58.616131   61070 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0924 01:04:58.616180   61070 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0924 01:04:58.616128   61070 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0924 01:04:58.647507   61070 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0924 01:04:58.647576   61070 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0924 01:04:58.647620   61070 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0924 01:04:58.647659   61070 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0924 01:04:54.527399   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:57.028355   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:57.407381   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:59.908596   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:56.222908   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:56.722701   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:57.222762   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:57.722814   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:58.222671   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:58.722746   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:59.222961   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:59.723335   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:00.223393   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:00.722739   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:59.003431   61070 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:05:00.815541   61070 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.199297236s)
	I0924 01:05:00.815566   61070 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (2.167859705s)
	I0924 01:05:00.815579   61070 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0924 01:05:00.815599   61070 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0924 01:05:00.815619   61070 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0924 01:05:00.815625   61070 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.812143064s)
	I0924 01:05:00.815674   61070 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0924 01:05:00.815687   61070 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0924 01:05:00.815710   61070 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:05:00.815750   61070 ssh_runner.go:195] Run: which crictl
	I0924 01:05:02.782328   61070 ssh_runner.go:235] Completed: which crictl: (1.966554191s)
	I0924 01:05:02.782392   61070 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.966688239s)
	I0924 01:05:02.782421   61070 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0924 01:05:02.782445   61070 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0924 01:05:02.782497   61070 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0924 01:05:02.782404   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:04:59.529167   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:01.531324   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:04.028305   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:02.407051   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:04.475255   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:01.222765   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:01.722729   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:02.223407   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:02.722799   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:03.223381   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:03.723427   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:04.223157   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:04.723069   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:05.223400   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:05.723739   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:04.773493   61070 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.990910382s)
	I0924 01:05:04.773540   61070 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.99101415s)
	I0924 01:05:04.773560   61070 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0924 01:05:04.773577   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:05:04.773584   61070 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0924 01:05:04.773615   61070 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0924 01:05:08.061466   61070 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.287832238s)
	I0924 01:05:08.061499   61070 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0924 01:05:08.061510   61070 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.287911454s)
	I0924 01:05:08.061595   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:05:08.061520   61070 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0924 01:05:08.061690   61070 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0924 01:05:06.029255   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:08.527617   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:06.907268   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:08.907464   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:06.223395   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:06.723345   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:07.222965   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:07.722795   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:08.222933   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:08.723687   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:09.223526   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:09.723684   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:10.223275   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:10.723534   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:10.041517   61070 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.979809714s)
	I0924 01:05:10.041549   61070 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0924 01:05:10.041577   61070 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.979956931s)
	I0924 01:05:10.041625   61070 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0924 01:05:10.041582   61070 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0924 01:05:10.041714   61070 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0924 01:05:10.041727   61070 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0924 01:05:12.005649   61070 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.963906504s)
	I0924 01:05:12.005689   61070 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0924 01:05:12.005696   61070 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.963951454s)
	I0924 01:05:12.005720   61070 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0924 01:05:12.005727   61070 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0924 01:05:12.005768   61070 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0924 01:05:12.960728   61070 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0924 01:05:12.960771   61070 cache_images.go:123] Successfully loaded all cached images
	I0924 01:05:12.960778   61070 cache_images.go:92] duration metric: took 15.162496206s to LoadCachedImages
	I0924 01:05:12.960791   61070 kubeadm.go:934] updating node { 192.168.50.161 8443 v1.31.1 crio true true} ...
	I0924 01:05:12.960931   61070 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-674057 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.161
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-674057 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 01:05:12.961013   61070 ssh_runner.go:195] Run: crio config
	I0924 01:05:13.006511   61070 cni.go:84] Creating CNI manager for ""
	I0924 01:05:13.006535   61070 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:05:13.006551   61070 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 01:05:13.006579   61070 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.161 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-674057 NodeName:no-preload-674057 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.161"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.161 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 01:05:13.006729   61070 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.161
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-674057"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.161
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.161"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 01:05:13.006799   61070 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 01:05:13.017598   61070 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 01:05:13.017672   61070 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 01:05:13.027414   61070 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0924 01:05:13.044688   61070 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 01:05:13.061646   61070 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0924 01:05:13.079552   61070 ssh_runner.go:195] Run: grep 192.168.50.161	control-plane.minikube.internal$ /etc/hosts
	I0924 01:05:13.083172   61070 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.161	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:05:13.095232   61070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:05:13.207184   61070 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 01:05:13.222851   61070 certs.go:68] Setting up /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057 for IP: 192.168.50.161
	I0924 01:05:13.222880   61070 certs.go:194] generating shared ca certs ...
	I0924 01:05:13.222901   61070 certs.go:226] acquiring lock for ca certs: {Name:mk91de42c259e96c17f08dd58c70c00821bd0735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:05:13.223084   61070 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key
	I0924 01:05:13.223184   61070 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key
	I0924 01:05:13.223195   61070 certs.go:256] generating profile certs ...
	I0924 01:05:13.223314   61070 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/client.key
	I0924 01:05:13.223394   61070 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/apiserver.key.8fa8fb95
	I0924 01:05:13.223445   61070 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/proxy-client.key
	I0924 01:05:13.223614   61070 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem (1338 bytes)
	W0924 01:05:13.223654   61070 certs.go:480] ignoring /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793_empty.pem, impossibly tiny 0 bytes
	I0924 01:05:13.223710   61070 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem (1675 bytes)
	I0924 01:05:13.223756   61070 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem (1082 bytes)
	I0924 01:05:13.223785   61070 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem (1123 bytes)
	I0924 01:05:13.223818   61070 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem (1679 bytes)
	I0924 01:05:13.223862   61070 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem (1708 bytes)
	I0924 01:05:13.224549   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 01:05:13.273224   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 01:05:13.311069   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 01:05:13.342314   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0924 01:05:13.369345   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0924 01:05:13.395466   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 01:05:13.424307   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 01:05:13.448531   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0924 01:05:13.472491   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /usr/share/ca-certificates/147932.pem (1708 bytes)
	I0924 01:05:13.496060   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 01:05:13.521182   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem --> /usr/share/ca-certificates/14793.pem (1338 bytes)
	I0924 01:05:13.548194   61070 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 01:05:13.566423   61070 ssh_runner.go:195] Run: openssl version
	I0924 01:05:13.572605   61070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 01:05:13.583991   61070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:05:13.588705   61070 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:05:13.588771   61070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:05:13.594828   61070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 01:05:13.606168   61070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14793.pem && ln -fs /usr/share/ca-certificates/14793.pem /etc/ssl/certs/14793.pem"
	I0924 01:05:13.617723   61070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14793.pem
	I0924 01:05:13.622697   61070 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 23:55 /usr/share/ca-certificates/14793.pem
	I0924 01:05:13.622762   61070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14793.pem
	I0924 01:05:13.628486   61070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14793.pem /etc/ssl/certs/51391683.0"
	I0924 01:05:13.639176   61070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147932.pem && ln -fs /usr/share/ca-certificates/147932.pem /etc/ssl/certs/147932.pem"
	I0924 01:05:13.650161   61070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147932.pem
	I0924 01:05:13.654546   61070 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 23:55 /usr/share/ca-certificates/147932.pem
	I0924 01:05:13.654625   61070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147932.pem
	I0924 01:05:13.660382   61070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147932.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 01:05:13.671487   61070 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 01:05:13.676226   61070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 01:05:13.682591   61070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 01:05:13.688492   61070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 01:05:13.694726   61070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 01:05:13.700432   61070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 01:05:13.706080   61070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0924 01:05:13.712226   61070 kubeadm.go:392] StartCluster: {Name:no-preload-674057 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:no-preload-674057 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.161 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 01:05:13.712323   61070 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 01:05:13.712421   61070 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 01:05:11.028779   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:13.527996   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:10.908227   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:13.408515   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:11.223272   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:11.723442   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:12.223301   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:12.723151   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:13.223174   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:13.722780   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:14.222777   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:14.722987   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:15.223654   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:15.723449   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:13.757518   61070 cri.go:89] found id: ""
	I0924 01:05:13.757597   61070 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 01:05:13.768318   61070 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 01:05:13.768367   61070 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 01:05:13.768416   61070 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 01:05:13.778918   61070 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 01:05:13.780385   61070 kubeconfig.go:125] found "no-preload-674057" server: "https://192.168.50.161:8443"
	I0924 01:05:13.783392   61070 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 01:05:13.794016   61070 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.161
	I0924 01:05:13.794050   61070 kubeadm.go:1160] stopping kube-system containers ...
	I0924 01:05:13.794085   61070 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0924 01:05:13.794150   61070 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 01:05:13.833511   61070 cri.go:89] found id: ""
	I0924 01:05:13.833596   61070 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 01:05:13.851608   61070 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 01:05:13.861469   61070 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 01:05:13.861510   61070 kubeadm.go:157] found existing configuration files:
	
	I0924 01:05:13.861552   61070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 01:05:13.870700   61070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 01:05:13.870770   61070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 01:05:13.880613   61070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 01:05:13.890336   61070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 01:05:13.890404   61070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 01:05:13.900172   61070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 01:05:13.910408   61070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 01:05:13.910475   61070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 01:05:13.919980   61070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 01:05:13.929398   61070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 01:05:13.929495   61070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 01:05:13.938894   61070 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 01:05:13.948749   61070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:05:14.056463   61070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:05:15.345268   61070 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.288763261s)
	I0924 01:05:15.345317   61070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:05:15.555986   61070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:05:15.626986   61070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:05:15.697665   61070 api_server.go:52] waiting for apiserver process to appear ...
	I0924 01:05:15.697761   61070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:16.198410   61070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:16.698860   61070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:16.715727   61070 api_server.go:72] duration metric: took 1.018058771s to wait for apiserver process to appear ...
	I0924 01:05:16.715756   61070 api_server.go:88] waiting for apiserver healthz status ...
	I0924 01:05:16.715779   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:15.528157   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:17.528680   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:15.906930   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:17.907223   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:16.223623   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:16.723625   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:17.223541   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:17.722702   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:18.222919   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:18.722982   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:19.222978   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:19.723547   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:20.223112   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:20.723562   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:21.716809   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 01:05:21.716852   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:19.528769   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:22.028695   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:20.406693   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:22.407036   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:24.906735   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:21.223058   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:21.722680   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:22.223693   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:22.722716   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:23.223387   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:23.722910   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:24.223608   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:24.723144   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:25.223442   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:25.723025   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:26.717768   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 01:05:26.717811   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:24.527568   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:26.527806   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:29.028455   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:27.406994   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:29.906590   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:26.222782   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:26.723271   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:27.223163   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:27.723283   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:28.222782   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:28.723174   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:29.222803   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:29.723029   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:30.223679   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:30.723058   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:31.718277   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 01:05:31.718317   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:31.028690   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:33.527675   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:31.906723   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:34.406306   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:31.223465   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:31.723438   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:32.223673   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:32.722674   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:33.223289   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:33.723651   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:34.223014   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:34.723518   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:35.222860   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:35.723642   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:36.718676   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 01:05:36.718716   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:37.146737   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": read tcp 192.168.50.1:59880->192.168.50.161:8443: read: connection reset by peer
	I0924 01:05:37.215865   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:37.216506   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": dial tcp 192.168.50.161:8443: connect: connection refused
	I0924 01:05:37.716052   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:37.716731   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": dial tcp 192.168.50.161:8443: connect: connection refused
	I0924 01:05:38.216296   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:36.028537   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:38.032544   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:36.406928   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:38.407201   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:36.222680   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:36.723015   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:37.222736   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:37.723185   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:38.223070   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:38.723237   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:39.223640   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:39.723622   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:40.222705   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:40.722909   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:43.217518   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 01:05:43.217557   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:40.527577   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:43.027715   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:40.906522   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:42.906906   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:44.907623   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:41.223105   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:41.723166   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:42.223286   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:42.723048   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:43.223278   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:43.723301   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:44.222712   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:44.723191   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:45.223720   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:45.723044   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:48.217915   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 01:05:48.217982   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:45.028780   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:47.028883   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:47.406680   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:49.907776   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:46.223270   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:46.722902   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:05:46.722980   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:05:46.781519   61989 cri.go:89] found id: ""
	I0924 01:05:46.781551   61989 logs.go:276] 0 containers: []
	W0924 01:05:46.781565   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:05:46.781574   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:05:46.781630   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:05:46.815990   61989 cri.go:89] found id: ""
	I0924 01:05:46.816021   61989 logs.go:276] 0 containers: []
	W0924 01:05:46.816030   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:05:46.816035   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:05:46.816082   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:05:46.848951   61989 cri.go:89] found id: ""
	I0924 01:05:46.848980   61989 logs.go:276] 0 containers: []
	W0924 01:05:46.848989   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:05:46.848995   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:05:46.849062   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:05:46.880731   61989 cri.go:89] found id: ""
	I0924 01:05:46.880756   61989 logs.go:276] 0 containers: []
	W0924 01:05:46.880764   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:05:46.880770   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:05:46.880832   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:05:46.915975   61989 cri.go:89] found id: ""
	I0924 01:05:46.916004   61989 logs.go:276] 0 containers: []
	W0924 01:05:46.916014   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:05:46.916036   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:05:46.916105   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:05:46.954124   61989 cri.go:89] found id: ""
	I0924 01:05:46.954154   61989 logs.go:276] 0 containers: []
	W0924 01:05:46.954162   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:05:46.954168   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:05:46.954233   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:05:46.990454   61989 cri.go:89] found id: ""
	I0924 01:05:46.990489   61989 logs.go:276] 0 containers: []
	W0924 01:05:46.990498   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:05:46.990504   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:05:46.990573   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:05:47.024099   61989 cri.go:89] found id: ""
	I0924 01:05:47.024137   61989 logs.go:276] 0 containers: []
	W0924 01:05:47.024150   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:05:47.024161   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:05:47.024176   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:05:47.153050   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:05:47.153076   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:05:47.153109   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:05:47.223472   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:05:47.223511   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:05:47.267699   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:05:47.267729   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:05:47.314741   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:05:47.314773   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:05:49.828972   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:49.842301   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:05:49.842378   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:05:49.874632   61989 cri.go:89] found id: ""
	I0924 01:05:49.874659   61989 logs.go:276] 0 containers: []
	W0924 01:05:49.874669   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:05:49.874676   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:05:49.874734   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:05:49.912500   61989 cri.go:89] found id: ""
	I0924 01:05:49.912524   61989 logs.go:276] 0 containers: []
	W0924 01:05:49.912532   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:05:49.912543   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:05:49.912592   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:05:49.947297   61989 cri.go:89] found id: ""
	I0924 01:05:49.947320   61989 logs.go:276] 0 containers: []
	W0924 01:05:49.947328   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:05:49.947334   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:05:49.947395   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:05:49.983863   61989 cri.go:89] found id: ""
	I0924 01:05:49.983892   61989 logs.go:276] 0 containers: []
	W0924 01:05:49.983905   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:05:49.983915   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:05:49.983977   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:05:50.022997   61989 cri.go:89] found id: ""
	I0924 01:05:50.023031   61989 logs.go:276] 0 containers: []
	W0924 01:05:50.023044   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:05:50.023053   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:05:50.023109   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:05:50.057829   61989 cri.go:89] found id: ""
	I0924 01:05:50.057863   61989 logs.go:276] 0 containers: []
	W0924 01:05:50.057875   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:05:50.057882   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:05:50.057929   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:05:50.114599   61989 cri.go:89] found id: ""
	I0924 01:05:50.114620   61989 logs.go:276] 0 containers: []
	W0924 01:05:50.114628   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:05:50.114633   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:05:50.114677   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:05:50.147294   61989 cri.go:89] found id: ""
	I0924 01:05:50.147326   61989 logs.go:276] 0 containers: []
	W0924 01:05:50.147334   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:05:50.147345   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:05:50.147378   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:05:50.198362   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:05:50.198402   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:05:50.212381   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:05:50.212415   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:05:50.286216   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:05:50.286261   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:05:50.286279   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:05:50.366794   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:05:50.366827   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:05:53.218617   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 01:05:53.218653   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:49.527980   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:52.027425   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:54.027780   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:51.908078   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:54.406891   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:52.908167   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:52.922279   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:05:52.922353   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:05:52.956677   61989 cri.go:89] found id: ""
	I0924 01:05:52.956708   61989 logs.go:276] 0 containers: []
	W0924 01:05:52.956720   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:05:52.956727   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:05:52.956778   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:05:52.990933   61989 cri.go:89] found id: ""
	I0924 01:05:52.990956   61989 logs.go:276] 0 containers: []
	W0924 01:05:52.990964   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:05:52.990970   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:05:52.991019   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:05:53.025729   61989 cri.go:89] found id: ""
	I0924 01:05:53.025758   61989 logs.go:276] 0 containers: []
	W0924 01:05:53.025768   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:05:53.025778   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:05:53.025838   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:05:53.060238   61989 cri.go:89] found id: ""
	I0924 01:05:53.060269   61989 logs.go:276] 0 containers: []
	W0924 01:05:53.060279   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:05:53.060287   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:05:53.060366   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:05:53.094166   61989 cri.go:89] found id: ""
	I0924 01:05:53.094200   61989 logs.go:276] 0 containers: []
	W0924 01:05:53.094212   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:05:53.094220   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:05:53.094289   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:05:53.129857   61989 cri.go:89] found id: ""
	I0924 01:05:53.129884   61989 logs.go:276] 0 containers: []
	W0924 01:05:53.129892   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:05:53.129898   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:05:53.129955   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:05:53.165857   61989 cri.go:89] found id: ""
	I0924 01:05:53.165890   61989 logs.go:276] 0 containers: []
	W0924 01:05:53.165898   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:05:53.165909   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:05:53.165970   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:05:53.203884   61989 cri.go:89] found id: ""
	I0924 01:05:53.203909   61989 logs.go:276] 0 containers: []
	W0924 01:05:53.203917   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:05:53.203926   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:05:53.203937   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:05:53.258001   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:05:53.258035   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:05:53.271584   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:05:53.271620   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:05:53.341791   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:05:53.341811   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:05:53.341824   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:05:53.424126   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:05:53.424170   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:05:55.962067   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:55.977964   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:05:55.978042   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:05:56.277329   61070 api_server.go:279] https://192.168.50.161:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 01:05:56.277366   61070 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 01:05:56.277385   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:56.302576   61070 api_server.go:279] https://192.168.50.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:05:56.302628   61070 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:05:56.715873   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:56.722458   61070 api_server.go:279] https://192.168.50.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:05:56.722487   61070 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:05:57.216714   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:57.224426   61070 api_server.go:279] https://192.168.50.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:05:57.224474   61070 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:05:57.715976   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:57.725067   61070 api_server.go:279] https://192.168.50.161:8443/healthz returned 200:
	ok
	I0924 01:05:57.734749   61070 api_server.go:141] control plane version: v1.31.1
	I0924 01:05:57.734782   61070 api_server.go:131] duration metric: took 41.019017744s to wait for apiserver health ...
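In the lines above, api_server.go polls https://192.168.50.161:8443/healthz about twice a second, treating 403 (anonymous request rejected) and 500 (post-start hooks still failing) as "not healthy yet" until the endpoint finally returns 200. A minimal sketch of that kind of probe, assuming an anonymous HTTPS client that skips certificate verification; this is an outline of the idea, not minikube's actual api_server.go:

package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the context expires.
// Non-200 responses (403/500 as seen in the log above) are treated as "retry".
func waitForHealthz(ctx context.Context, url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver serves a cluster-internal CA, so an anonymous
			// probe typically skips verification. Illustration only.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		resp, err := client.Get(url)
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", code)
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	if err := waitForHealthz(ctx, "https://192.168.50.161:8443/healthz"); err != nil {
		fmt.Println("apiserver not healthy:", err)
		return
	}
	fmt.Println("apiserver healthy")
}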
	I0924 01:05:57.734793   61070 cni.go:84] Creating CNI manager for ""
	I0924 01:05:57.734801   61070 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:05:57.736798   61070 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 01:05:57.738285   61070 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 01:05:57.750654   61070 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0924 01:05:57.778587   61070 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 01:05:57.804858   61070 system_pods.go:59] 8 kube-system pods found
	I0924 01:05:57.804907   61070 system_pods.go:61] "coredns-7c65d6cfc9-kshwz" [4393c6ec-abd9-42ce-af67-9e8b768bd49b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0924 01:05:57.804917   61070 system_pods.go:61] "etcd-no-preload-674057" [65cf3acb-8ffa-4f83-8ab9-86ddefc5d829] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0924 01:05:57.804932   61070 system_pods.go:61] "kube-apiserver-no-preload-674057" [7d26a065-faa1-4ba2-96b7-6c9b1ccb5386] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0924 01:05:57.804940   61070 system_pods.go:61] "kube-controller-manager-no-preload-674057" [7c5c6602-1749-4f34-bb63-08161baac6db] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0924 01:05:57.804949   61070 system_pods.go:61] "kube-proxy-fgmwc" [a81419dc-54f5-4bdd-ac2d-f3f7c85b8f50] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0924 01:05:57.804955   61070 system_pods.go:61] "kube-scheduler-no-preload-674057" [d02c8d9a-1897-4506-8029-9608f11520de] Running
	I0924 01:05:57.804965   61070 system_pods.go:61] "metrics-server-6867b74b74-7gbnr" [6ffa0eb7-21d8-4741-9eae-ce7bb9604dec] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:05:57.804975   61070 system_pods.go:61] "storage-provisioner" [a7f99914-8945-4614-afef-d553ea932edf] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0924 01:05:57.804984   61070 system_pods.go:74] duration metric: took 26.369156ms to wait for pod list to return data ...
	I0924 01:05:57.804996   61070 node_conditions.go:102] verifying NodePressure condition ...
	I0924 01:05:57.809068   61070 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 01:05:57.809103   61070 node_conditions.go:123] node cpu capacity is 2
	I0924 01:05:57.809119   61070 node_conditions.go:105] duration metric: took 4.115654ms to run NodePressure ...
	I0924 01:05:57.809137   61070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:05:58.173276   61070 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0924 01:05:58.178398   61070 kubeadm.go:739] kubelet initialised
	I0924 01:05:58.178422   61070 kubeadm.go:740] duration metric: took 5.118555ms waiting for restarted kubelet to initialise ...
	I0924 01:05:58.178429   61070 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:05:58.183646   61070 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-kshwz" in "kube-system" namespace to be "Ready" ...
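The recurring pod_ready messages throughout this log come from repeatedly reading each pod and checking whether its Ready condition has become True. A minimal client-go sketch of that check, using the coredns pod name from the line above and a hypothetical kubeconfig path; it is an illustration of the pattern, not minikube's pod_ready.go:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumption: a kubeconfig path for the cluster under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"coredns-7c65d6cfc9-kshwz", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		// Matches the log's phrasing when the condition is not yet True.
		fmt.Println(`pod has status "Ready":"False", retrying`)
		time.Sleep(2 * time.Second)
	}
}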
	I0924 01:05:56.029030   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:58.029256   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:56.407889   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:58.907744   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:56.014681   61989 cri.go:89] found id: ""
	I0924 01:05:56.014716   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.014728   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:05:56.014736   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:05:56.014799   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:05:56.062547   61989 cri.go:89] found id: ""
	I0924 01:05:56.062576   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.062587   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:05:56.062606   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:05:56.062665   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:05:56.100938   61989 cri.go:89] found id: ""
	I0924 01:05:56.100960   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.100969   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:05:56.100974   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:05:56.101039   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:05:56.137694   61989 cri.go:89] found id: ""
	I0924 01:05:56.137722   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.137737   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:05:56.137744   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:05:56.137803   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:05:56.174876   61989 cri.go:89] found id: ""
	I0924 01:05:56.174911   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.174923   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:05:56.174931   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:05:56.174990   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:05:56.208870   61989 cri.go:89] found id: ""
	I0924 01:05:56.208895   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.208905   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:05:56.208913   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:05:56.208971   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:05:56.242476   61989 cri.go:89] found id: ""
	I0924 01:05:56.242508   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.242520   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:05:56.242528   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:05:56.242590   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:05:56.276185   61989 cri.go:89] found id: ""
	I0924 01:05:56.276214   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.276255   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:05:56.276267   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:05:56.276284   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:05:56.332755   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:05:56.332792   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:05:56.346279   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:05:56.346312   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:05:56.419725   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
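The recurring "connection to the server localhost:8443 was refused" above simply means nothing is listening on the apiserver port yet, so every "describe nodes" attempt fails before it can talk to the cluster. A minimal sketch, assuming it is run on the node itself (Go, to match the minikube sources referenced in the log), of confirming that the port is closed:

// portprobe.go - hedged sketch: check whether anything answers on the apiserver port.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// Matches the log above: no listener, so kubectl's describe cannot succeed yet.
		fmt.Println("apiserver port closed:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}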
	I0924 01:05:56.419751   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:05:56.419766   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:05:56.500173   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:05:56.500208   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:05:59.083761   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:59.097184   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:05:59.097247   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:05:59.131734   61989 cri.go:89] found id: ""
	I0924 01:05:59.131764   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.131775   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:05:59.131782   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:05:59.131842   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:05:59.169402   61989 cri.go:89] found id: ""
	I0924 01:05:59.169429   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.169439   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:05:59.169446   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:05:59.169521   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:05:59.208235   61989 cri.go:89] found id: ""
	I0924 01:05:59.208260   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.208290   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:05:59.208298   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:05:59.208372   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:05:59.242314   61989 cri.go:89] found id: ""
	I0924 01:05:59.242345   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.242358   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:05:59.242367   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:05:59.242433   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:05:59.281300   61989 cri.go:89] found id: ""
	I0924 01:05:59.281327   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.281337   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:05:59.281344   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:05:59.281407   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:05:59.315336   61989 cri.go:89] found id: ""
	I0924 01:05:59.315369   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.315377   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:05:59.315386   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:05:59.315445   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:05:59.347678   61989 cri.go:89] found id: ""
	I0924 01:05:59.347708   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.347718   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:05:59.347726   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:05:59.347786   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:05:59.381296   61989 cri.go:89] found id: ""
	I0924 01:05:59.381328   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.381340   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:05:59.381352   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:05:59.381369   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:05:59.462939   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:05:59.462971   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:05:59.462990   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:05:59.544967   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:05:59.545004   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:05:59.585079   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:05:59.585106   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:05:59.637897   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:05:59.637940   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
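Each retry pass above issues the same sequence of "crictl ps -a --quiet --name=<component>" probes and finds no control-plane containers at all. A rough sketch, assuming a host with sudo and crictl available, of reproducing that probe by hand:

// crictl_probe.go - hedged sketch: list container IDs per control-plane component,
// the same query the cri.go lines in this log are issuing.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		// Zero IDs corresponds to the `found id: ""` / "0 containers" lines above.
		fmt.Printf("%s: %d container(s) found\n", name, len(ids))
	}
}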
	I0924 01:06:00.190924   61070 pod_ready.go:103] pod "coredns-7c65d6cfc9-kshwz" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:02.192627   61070 pod_ready.go:93] pod "coredns-7c65d6cfc9-kshwz" in "kube-system" namespace has status "Ready":"True"
	I0924 01:06:02.192648   61070 pod_ready.go:82] duration metric: took 4.008971718s for pod "coredns-7c65d6cfc9-kshwz" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:02.192658   61070 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:02.198586   61070 pod_ready.go:93] pod "etcd-no-preload-674057" in "kube-system" namespace has status "Ready":"True"
	I0924 01:06:02.198614   61070 pod_ready.go:82] duration metric: took 5.949433ms for pod "etcd-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:02.198627   61070 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:03.205306   61070 pod_ready.go:93] pod "kube-apiserver-no-preload-674057" in "kube-system" namespace has status "Ready":"True"
	I0924 01:06:03.205331   61070 pod_ready.go:82] duration metric: took 1.006696778s for pod "kube-apiserver-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:03.205342   61070 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-674057" in "kube-system" namespace to be "Ready" ...
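The pod_ready.go lines above are minikube polling each control-plane pod until its Ready condition reports True, with a 4m0s budget per pod. A minimal client-go sketch of that check, assuming a kubeconfig at the default location; the helper name waitPodReady is illustrative, not minikube's own:

// podready.go - hedged sketch: poll a pod until its Ready condition is True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil // pod reports Ready, like the "Ready":"True" lines above
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Pod name taken from the log above; adjust for the cluster under test.
	fmt.Println(waitPodReady(cs, "kube-system", "etcd-no-preload-674057", 4*time.Minute))
}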
	I0924 01:06:00.528770   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:02.529473   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:01.406620   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:03.407024   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:02.153289   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:02.170582   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:02.170679   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:02.216700   61989 cri.go:89] found id: ""
	I0924 01:06:02.216722   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.216730   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:02.216736   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:02.216793   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:02.292664   61989 cri.go:89] found id: ""
	I0924 01:06:02.292695   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.292706   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:02.292714   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:02.292780   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:02.349447   61989 cri.go:89] found id: ""
	I0924 01:06:02.349470   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.349481   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:02.349487   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:02.349557   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:02.390491   61989 cri.go:89] found id: ""
	I0924 01:06:02.390514   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.390535   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:02.390543   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:02.390597   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:02.439330   61989 cri.go:89] found id: ""
	I0924 01:06:02.439355   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.439366   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:02.439373   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:02.439432   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:02.476400   61989 cri.go:89] found id: ""
	I0924 01:06:02.476431   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.476439   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:02.476445   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:02.476501   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:02.511946   61989 cri.go:89] found id: ""
	I0924 01:06:02.511975   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.511983   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:02.511989   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:02.512036   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:02.547526   61989 cri.go:89] found id: ""
	I0924 01:06:02.547554   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.547561   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:02.547570   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:02.547580   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:02.619784   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:02.619805   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:02.619816   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:02.698597   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:02.698636   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:02.741381   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:02.741419   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:02.797965   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:02.798023   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:05.312059   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:05.326556   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:05.326614   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:05.360973   61989 cri.go:89] found id: ""
	I0924 01:06:05.360999   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.361011   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:05.361018   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:05.361101   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:05.394720   61989 cri.go:89] found id: ""
	I0924 01:06:05.394750   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.394760   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:05.394767   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:05.394831   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:05.432564   61989 cri.go:89] found id: ""
	I0924 01:06:05.432592   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.432603   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:05.432611   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:05.432673   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:05.465424   61989 cri.go:89] found id: ""
	I0924 01:06:05.465467   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.465478   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:05.465484   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:05.465555   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:05.503656   61989 cri.go:89] found id: ""
	I0924 01:06:05.503684   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.503693   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:05.503699   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:05.503752   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:05.538128   61989 cri.go:89] found id: ""
	I0924 01:06:05.538160   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.538171   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:05.538179   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:05.538248   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:05.571310   61989 cri.go:89] found id: ""
	I0924 01:06:05.571336   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.571346   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:05.571353   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:05.571416   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:05.604038   61989 cri.go:89] found id: ""
	I0924 01:06:05.604062   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.604070   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:05.604079   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:05.604094   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:05.657025   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:05.657068   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:05.671457   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:05.671483   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:05.747671   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:05.747701   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:05.747718   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:05.833248   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:05.833285   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:05.212622   61070 pod_ready.go:103] pod "kube-controller-manager-no-preload-674057" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:07.711612   61070 pod_ready.go:103] pod "kube-controller-manager-no-preload-674057" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:05.028130   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:07.527525   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:05.407057   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:07.407341   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:09.906549   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:08.372029   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:08.386497   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:08.386564   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:08.422998   61989 cri.go:89] found id: ""
	I0924 01:06:08.423029   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.423039   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:08.423047   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:08.423095   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:08.457009   61989 cri.go:89] found id: ""
	I0924 01:06:08.457037   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.457047   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:08.457052   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:08.457104   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:08.489694   61989 cri.go:89] found id: ""
	I0924 01:06:08.489728   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.489740   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:08.489750   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:08.489804   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:08.521819   61989 cri.go:89] found id: ""
	I0924 01:06:08.521845   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.521856   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:08.521864   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:08.521922   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:08.556422   61989 cri.go:89] found id: ""
	I0924 01:06:08.556453   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.556465   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:08.556472   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:08.556567   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:08.593802   61989 cri.go:89] found id: ""
	I0924 01:06:08.593828   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.593836   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:08.593842   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:08.593932   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:08.627569   61989 cri.go:89] found id: ""
	I0924 01:06:08.627592   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.627600   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:08.627605   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:08.627653   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:08.664728   61989 cri.go:89] found id: ""
	I0924 01:06:08.664758   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.664769   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:08.664780   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:08.664794   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:08.703546   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:08.703577   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:08.755612   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:08.755649   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:08.769957   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:08.769989   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:08.842732   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:08.842762   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:08.842789   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:10.211942   61070 pod_ready.go:93] pod "kube-controller-manager-no-preload-674057" in "kube-system" namespace has status "Ready":"True"
	I0924 01:06:10.211973   61070 pod_ready.go:82] duration metric: took 7.006623705s for pod "kube-controller-manager-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:10.211986   61070 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-fgmwc" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:10.217219   61070 pod_ready.go:93] pod "kube-proxy-fgmwc" in "kube-system" namespace has status "Ready":"True"
	I0924 01:06:10.217247   61070 pod_ready.go:82] duration metric: took 5.254551ms for pod "kube-proxy-fgmwc" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:10.217260   61070 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:10.221959   61070 pod_ready.go:93] pod "kube-scheduler-no-preload-674057" in "kube-system" namespace has status "Ready":"True"
	I0924 01:06:10.221983   61070 pod_ready.go:82] duration metric: took 4.71607ms for pod "kube-scheduler-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:10.221996   61070 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:12.227911   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:09.527831   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:11.527917   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:14.028599   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:11.907394   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:14.407242   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:11.427424   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:11.440709   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:11.440773   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:11.475537   61989 cri.go:89] found id: ""
	I0924 01:06:11.475564   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.475572   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:11.475577   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:11.475633   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:11.512231   61989 cri.go:89] found id: ""
	I0924 01:06:11.512276   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.512285   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:11.512292   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:11.512365   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:11.549809   61989 cri.go:89] found id: ""
	I0924 01:06:11.549840   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.549852   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:11.549858   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:11.549924   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:11.587451   61989 cri.go:89] found id: ""
	I0924 01:06:11.587481   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.587493   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:11.587500   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:11.587558   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:11.625109   61989 cri.go:89] found id: ""
	I0924 01:06:11.625135   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.625146   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:11.625154   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:11.625213   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:11.660577   61989 cri.go:89] found id: ""
	I0924 01:06:11.660604   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.660616   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:11.660624   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:11.660683   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:11.703527   61989 cri.go:89] found id: ""
	I0924 01:06:11.703557   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.703569   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:11.703577   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:11.703646   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:11.740766   61989 cri.go:89] found id: ""
	I0924 01:06:11.740798   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.740810   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:11.740820   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:11.740836   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:11.803402   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:11.803448   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:11.819144   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:11.819178   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:11.896152   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:11.896173   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:11.896187   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:11.986284   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:11.986340   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:14.523669   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:14.537923   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:14.537990   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:14.576092   61989 cri.go:89] found id: ""
	I0924 01:06:14.576128   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.576140   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:14.576148   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:14.576213   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:14.611985   61989 cri.go:89] found id: ""
	I0924 01:06:14.612020   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.612032   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:14.612039   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:14.612098   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:14.647640   61989 cri.go:89] found id: ""
	I0924 01:06:14.647667   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.647675   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:14.647682   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:14.647746   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:14.685089   61989 cri.go:89] found id: ""
	I0924 01:06:14.685128   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.685141   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:14.685150   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:14.685217   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:14.718694   61989 cri.go:89] found id: ""
	I0924 01:06:14.718729   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.718738   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:14.718745   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:14.718810   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:14.754874   61989 cri.go:89] found id: ""
	I0924 01:06:14.754916   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.754928   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:14.754936   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:14.754993   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:14.789580   61989 cri.go:89] found id: ""
	I0924 01:06:14.789608   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.789617   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:14.789625   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:14.789677   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:14.823173   61989 cri.go:89] found id: ""
	I0924 01:06:14.823201   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.823213   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:14.823224   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:14.823238   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:14.878398   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:14.878431   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:14.892466   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:14.892502   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:14.965978   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:14.966010   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:14.966065   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:15.050557   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:15.050600   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
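Between log-gathering passes the runner keeps executing "sudo pgrep -xnf kube-apiserver.*minikube.*" and re-listing containers, i.e. a plain poll-until-the-apiserver-process-appears loop. A hedged sketch of that outer loop; the interval and timeout here are illustrative, not the values minikube uses:

// apiserver_wait.go - hedged sketch: poll for a running kube-apiserver process,
// mirroring the repeated pgrep calls in this log.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	timeout := time.After(5 * time.Minute) // illustrative timeout
	tick := time.Tick(3 * time.Second)     // illustrative poll interval
	for {
		select {
		case <-timeout:
			fmt.Println("gave up waiting for kube-apiserver")
			return
		case <-tick:
			// pgrep exits non-zero when no process matches, which is the case throughout this log.
			if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				fmt.Println("kube-apiserver process found")
				return
			}
		}
	}
}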
	I0924 01:06:14.231644   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:16.728219   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:16.029325   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:18.527156   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:16.907014   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:19.406893   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:17.596915   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:17.609585   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:17.609643   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:17.648275   61989 cri.go:89] found id: ""
	I0924 01:06:17.648305   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.648313   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:17.648319   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:17.648447   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:17.681447   61989 cri.go:89] found id: ""
	I0924 01:06:17.681473   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.681484   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:17.681491   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:17.681552   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:17.719202   61989 cri.go:89] found id: ""
	I0924 01:06:17.719226   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.719234   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:17.719240   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:17.719296   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:17.752601   61989 cri.go:89] found id: ""
	I0924 01:06:17.752629   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.752641   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:17.752649   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:17.752700   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:17.789905   61989 cri.go:89] found id: ""
	I0924 01:06:17.789934   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.789945   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:17.789952   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:17.790015   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:17.824174   61989 cri.go:89] found id: ""
	I0924 01:06:17.824205   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.824217   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:17.824237   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:17.824296   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:17.860647   61989 cri.go:89] found id: ""
	I0924 01:06:17.860674   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.860684   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:17.860691   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:17.860750   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:17.896392   61989 cri.go:89] found id: ""
	I0924 01:06:17.896414   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.896423   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:17.896437   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:17.896450   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:17.949230   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:17.949272   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:17.963125   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:17.963183   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:18.035092   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:18.035117   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:18.035134   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:18.117973   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:18.118011   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:20.657044   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:20.669862   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:20.669936   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:20.704672   61989 cri.go:89] found id: ""
	I0924 01:06:20.704703   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.704714   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:20.704722   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:20.704785   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:20.745777   61989 cri.go:89] found id: ""
	I0924 01:06:20.745801   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.745811   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:20.745818   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:20.745879   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:20.779673   61989 cri.go:89] found id: ""
	I0924 01:06:20.779704   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.779740   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:20.779749   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:20.779809   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:20.815959   61989 cri.go:89] found id: ""
	I0924 01:06:20.815983   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.815992   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:20.815998   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:20.816055   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:20.849203   61989 cri.go:89] found id: ""
	I0924 01:06:20.849232   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.849243   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:20.849251   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:20.849319   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:20.884303   61989 cri.go:89] found id: ""
	I0924 01:06:20.884353   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.884365   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:20.884373   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:20.884436   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:20.921217   61989 cri.go:89] found id: ""
	I0924 01:06:20.921242   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.921249   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:20.921255   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:20.921302   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:20.957555   61989 cri.go:89] found id: ""
	I0924 01:06:20.957590   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.957601   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:20.957613   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:20.957628   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:20.972591   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:20.972630   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 01:06:18.728553   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:20.730046   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:23.228040   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:20.527573   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:22.527695   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:21.406963   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:23.907730   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	W0924 01:06:21.046506   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:21.046532   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:21.046547   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:21.129415   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:21.129453   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:21.168899   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:21.168924   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:23.720925   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:23.736893   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:23.736965   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:23.771874   61989 cri.go:89] found id: ""
	I0924 01:06:23.771901   61989 logs.go:276] 0 containers: []
	W0924 01:06:23.771909   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:23.771915   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:23.771976   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:23.806892   61989 cri.go:89] found id: ""
	I0924 01:06:23.806924   61989 logs.go:276] 0 containers: []
	W0924 01:06:23.806936   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:23.806943   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:23.806999   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:23.843661   61989 cri.go:89] found id: ""
	I0924 01:06:23.843686   61989 logs.go:276] 0 containers: []
	W0924 01:06:23.843694   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:23.843700   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:23.843753   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:23.878979   61989 cri.go:89] found id: ""
	I0924 01:06:23.879007   61989 logs.go:276] 0 containers: []
	W0924 01:06:23.879019   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:23.879027   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:23.879086   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:23.913893   61989 cri.go:89] found id: ""
	I0924 01:06:23.913916   61989 logs.go:276] 0 containers: []
	W0924 01:06:23.913925   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:23.913937   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:23.913982   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:23.947932   61989 cri.go:89] found id: ""
	I0924 01:06:23.947961   61989 logs.go:276] 0 containers: []
	W0924 01:06:23.947972   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:23.947980   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:23.948045   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:23.981366   61989 cri.go:89] found id: ""
	I0924 01:06:23.981391   61989 logs.go:276] 0 containers: []
	W0924 01:06:23.981402   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:23.981409   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:23.981467   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:24.014428   61989 cri.go:89] found id: ""
	I0924 01:06:24.014455   61989 logs.go:276] 0 containers: []
	W0924 01:06:24.014463   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:24.014471   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:24.014485   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:24.029585   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:24.029621   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:24.095926   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:24.095955   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:24.095975   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:24.174594   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:24.174635   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:24.213286   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:24.213311   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:25.229785   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:27.729021   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:25.027783   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:27.030450   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:26.406776   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:28.907135   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:26.764740   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:26.777184   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:26.777279   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:26.812704   61989 cri.go:89] found id: ""
	I0924 01:06:26.812735   61989 logs.go:276] 0 containers: []
	W0924 01:06:26.812746   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:26.812753   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:26.812811   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:26.849867   61989 cri.go:89] found id: ""
	I0924 01:06:26.849895   61989 logs.go:276] 0 containers: []
	W0924 01:06:26.849904   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:26.849909   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:26.849958   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:26.882856   61989 cri.go:89] found id: ""
	I0924 01:06:26.882878   61989 logs.go:276] 0 containers: []
	W0924 01:06:26.882885   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:26.882891   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:26.882936   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:26.921063   61989 cri.go:89] found id: ""
	I0924 01:06:26.921085   61989 logs.go:276] 0 containers: []
	W0924 01:06:26.921094   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:26.921100   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:26.921156   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:26.961154   61989 cri.go:89] found id: ""
	I0924 01:06:26.961182   61989 logs.go:276] 0 containers: []
	W0924 01:06:26.961194   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:26.961200   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:26.961257   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:26.994560   61989 cri.go:89] found id: ""
	I0924 01:06:26.994593   61989 logs.go:276] 0 containers: []
	W0924 01:06:26.994603   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:26.994612   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:26.994673   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:27.027967   61989 cri.go:89] found id: ""
	I0924 01:06:27.028013   61989 logs.go:276] 0 containers: []
	W0924 01:06:27.028026   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:27.028033   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:27.028096   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:27.063099   61989 cri.go:89] found id: ""
	I0924 01:06:27.063130   61989 logs.go:276] 0 containers: []
	W0924 01:06:27.063142   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:27.063153   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:27.063166   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:27.116237   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:27.116279   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:27.130785   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:27.130815   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:27.201931   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:27.201954   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:27.201970   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:27.282182   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:27.282217   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:29.825403   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:29.838890   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:29.838989   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:29.873651   61989 cri.go:89] found id: ""
	I0924 01:06:29.873678   61989 logs.go:276] 0 containers: []
	W0924 01:06:29.873690   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:29.873698   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:29.873758   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:29.909894   61989 cri.go:89] found id: ""
	I0924 01:06:29.909916   61989 logs.go:276] 0 containers: []
	W0924 01:06:29.909923   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:29.909929   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:29.909978   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:29.944850   61989 cri.go:89] found id: ""
	I0924 01:06:29.944878   61989 logs.go:276] 0 containers: []
	W0924 01:06:29.944886   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:29.944892   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:29.944945   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:29.981486   61989 cri.go:89] found id: ""
	I0924 01:06:29.981515   61989 logs.go:276] 0 containers: []
	W0924 01:06:29.981524   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:29.981532   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:29.981592   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:30.015138   61989 cri.go:89] found id: ""
	I0924 01:06:30.015165   61989 logs.go:276] 0 containers: []
	W0924 01:06:30.015176   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:30.015184   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:30.015256   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:30.051777   61989 cri.go:89] found id: ""
	I0924 01:06:30.051814   61989 logs.go:276] 0 containers: []
	W0924 01:06:30.051825   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:30.051834   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:30.051898   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:30.085573   61989 cri.go:89] found id: ""
	I0924 01:06:30.085598   61989 logs.go:276] 0 containers: []
	W0924 01:06:30.085607   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:30.085612   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:30.085661   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:30.122518   61989 cri.go:89] found id: ""
	I0924 01:06:30.122551   61989 logs.go:276] 0 containers: []
	W0924 01:06:30.122561   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:30.122570   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:30.122585   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:30.199075   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:30.199118   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:30.238259   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:30.238293   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:30.292145   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:30.292185   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:30.306404   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:30.306431   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:30.373959   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:29.729379   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:32.228691   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:29.527089   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:31.527523   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:34.027357   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:30.907575   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:33.407615   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:32.875041   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:32.888358   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:32.888435   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:32.924466   61989 cri.go:89] found id: ""
	I0924 01:06:32.924499   61989 logs.go:276] 0 containers: []
	W0924 01:06:32.924519   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:32.924528   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:32.924584   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:32.960188   61989 cri.go:89] found id: ""
	I0924 01:06:32.960216   61989 logs.go:276] 0 containers: []
	W0924 01:06:32.960224   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:32.960231   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:32.960282   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:32.997612   61989 cri.go:89] found id: ""
	I0924 01:06:32.997641   61989 logs.go:276] 0 containers: []
	W0924 01:06:32.997649   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:32.997655   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:32.997704   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:33.034282   61989 cri.go:89] found id: ""
	I0924 01:06:33.034310   61989 logs.go:276] 0 containers: []
	W0924 01:06:33.034317   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:33.034325   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:33.034381   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:33.073832   61989 cri.go:89] found id: ""
	I0924 01:06:33.073861   61989 logs.go:276] 0 containers: []
	W0924 01:06:33.073870   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:33.073875   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:33.073959   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:33.107276   61989 cri.go:89] found id: ""
	I0924 01:06:33.107303   61989 logs.go:276] 0 containers: []
	W0924 01:06:33.107314   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:33.107323   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:33.107373   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:33.141062   61989 cri.go:89] found id: ""
	I0924 01:06:33.141091   61989 logs.go:276] 0 containers: []
	W0924 01:06:33.141104   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:33.141112   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:33.141174   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:33.177874   61989 cri.go:89] found id: ""
	I0924 01:06:33.177899   61989 logs.go:276] 0 containers: []
	W0924 01:06:33.177908   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:33.177916   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:33.177927   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:33.228324   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:33.228373   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:33.241324   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:33.241350   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:33.313115   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:33.313139   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:33.313151   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:33.392458   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:33.392512   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:35.932822   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:35.945918   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:35.945987   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:34.727948   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:36.728560   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:36.028536   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:38.527308   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:35.906501   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:37.907165   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:35.984400   61989 cri.go:89] found id: ""
	I0924 01:06:35.984438   61989 logs.go:276] 0 containers: []
	W0924 01:06:35.984448   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:35.984456   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:35.984528   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:36.022208   61989 cri.go:89] found id: ""
	I0924 01:06:36.022235   61989 logs.go:276] 0 containers: []
	W0924 01:06:36.022244   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:36.022252   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:36.022336   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:36.059153   61989 cri.go:89] found id: ""
	I0924 01:06:36.059176   61989 logs.go:276] 0 containers: []
	W0924 01:06:36.059184   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:36.059190   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:36.059247   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:36.094375   61989 cri.go:89] found id: ""
	I0924 01:06:36.094413   61989 logs.go:276] 0 containers: []
	W0924 01:06:36.094425   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:36.094434   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:36.094490   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:36.128662   61989 cri.go:89] found id: ""
	I0924 01:06:36.128691   61989 logs.go:276] 0 containers: []
	W0924 01:06:36.128702   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:36.128710   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:36.128762   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:36.160898   61989 cri.go:89] found id: ""
	I0924 01:06:36.160925   61989 logs.go:276] 0 containers: []
	W0924 01:06:36.160937   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:36.160945   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:36.161010   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:36.194421   61989 cri.go:89] found id: ""
	I0924 01:06:36.194448   61989 logs.go:276] 0 containers: []
	W0924 01:06:36.194460   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:36.194468   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:36.194537   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:36.230448   61989 cri.go:89] found id: ""
	I0924 01:06:36.230477   61989 logs.go:276] 0 containers: []
	W0924 01:06:36.230487   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:36.230498   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:36.230511   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:36.303029   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:36.303053   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:36.303067   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:36.406305   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:36.406338   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:36.444044   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:36.444084   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:36.494829   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:36.494873   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:39.009579   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:39.023867   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:39.023943   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:39.057426   61989 cri.go:89] found id: ""
	I0924 01:06:39.057458   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.057469   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:39.057477   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:39.057539   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:39.091421   61989 cri.go:89] found id: ""
	I0924 01:06:39.091444   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.091453   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:39.091459   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:39.091518   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:39.125407   61989 cri.go:89] found id: ""
	I0924 01:06:39.125437   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.125448   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:39.125455   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:39.125525   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:39.157146   61989 cri.go:89] found id: ""
	I0924 01:06:39.157170   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.157181   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:39.157189   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:39.157248   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:39.189474   61989 cri.go:89] found id: ""
	I0924 01:06:39.189501   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.189511   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:39.189518   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:39.189577   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:39.228034   61989 cri.go:89] found id: ""
	I0924 01:06:39.228063   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.228084   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:39.228099   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:39.228158   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:39.268289   61989 cri.go:89] found id: ""
	I0924 01:06:39.268317   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.268345   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:39.268354   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:39.268431   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:39.304964   61989 cri.go:89] found id: ""
	I0924 01:06:39.304988   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.304996   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:39.305005   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:39.305017   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:39.356193   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:39.356234   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:39.370782   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:39.370807   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:39.442395   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:39.442418   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:39.442429   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:39.518426   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:39.518466   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:38.729606   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:41.228528   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:40.528236   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:43.028285   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:40.407021   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:42.906884   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:44.907822   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:42.059895   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:42.092776   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:42.092837   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:42.128508   61989 cri.go:89] found id: ""
	I0924 01:06:42.128534   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.128555   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:42.128565   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:42.128623   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:42.160961   61989 cri.go:89] found id: ""
	I0924 01:06:42.160989   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.161000   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:42.161008   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:42.161072   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:42.194212   61989 cri.go:89] found id: ""
	I0924 01:06:42.194260   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.194272   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:42.194280   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:42.194342   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:42.229284   61989 cri.go:89] found id: ""
	I0924 01:06:42.229312   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.229323   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:42.229331   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:42.229378   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:42.261952   61989 cri.go:89] found id: ""
	I0924 01:06:42.261986   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.261997   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:42.262010   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:42.262059   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:42.297096   61989 cri.go:89] found id: ""
	I0924 01:06:42.297125   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.297133   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:42.297139   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:42.297185   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:42.333066   61989 cri.go:89] found id: ""
	I0924 01:06:42.333095   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.333106   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:42.333114   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:42.333176   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:42.366798   61989 cri.go:89] found id: ""
	I0924 01:06:42.366829   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.366840   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:42.366852   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:42.366865   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:42.419424   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:42.419466   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:42.433814   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:42.433846   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:42.503817   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:42.503845   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:42.503860   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:42.583249   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:42.583289   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:45.123746   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:45.136292   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:45.136377   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:45.174390   61989 cri.go:89] found id: ""
	I0924 01:06:45.174420   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.174441   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:45.174449   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:45.174539   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:45.212394   61989 cri.go:89] found id: ""
	I0924 01:06:45.212422   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.212433   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:45.212441   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:45.212503   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:45.245831   61989 cri.go:89] found id: ""
	I0924 01:06:45.245853   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.245861   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:45.245867   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:45.245922   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:45.277587   61989 cri.go:89] found id: ""
	I0924 01:06:45.277615   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.277626   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:45.277634   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:45.277692   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:45.309715   61989 cri.go:89] found id: ""
	I0924 01:06:45.309749   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.309760   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:45.309768   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:45.309827   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:45.342799   61989 cri.go:89] found id: ""
	I0924 01:06:45.342831   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.342844   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:45.342853   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:45.342921   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:45.375377   61989 cri.go:89] found id: ""
	I0924 01:06:45.375404   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.375415   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:45.375423   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:45.375484   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:45.415395   61989 cri.go:89] found id: ""
	I0924 01:06:45.415422   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.415432   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:45.415444   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:45.415459   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:45.464381   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:45.464416   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:45.478142   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:45.478168   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:45.551211   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:45.551234   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:45.551244   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:45.635255   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:45.635297   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:43.728645   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:46.227611   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:48.228320   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:45.028650   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:47.528968   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:47.406822   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:49.407790   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:48.173687   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:48.186635   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:48.186710   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:48.219544   61989 cri.go:89] found id: ""
	I0924 01:06:48.219566   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.219574   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:48.219583   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:48.219654   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:48.253594   61989 cri.go:89] found id: ""
	I0924 01:06:48.253618   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.253627   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:48.253634   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:48.253693   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:48.287991   61989 cri.go:89] found id: ""
	I0924 01:06:48.288019   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.288031   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:48.288041   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:48.288100   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:48.320738   61989 cri.go:89] found id: ""
	I0924 01:06:48.320767   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.320779   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:48.320787   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:48.320847   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:48.352197   61989 cri.go:89] found id: ""
	I0924 01:06:48.352225   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.352233   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:48.352243   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:48.352317   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:48.386157   61989 cri.go:89] found id: ""
	I0924 01:06:48.386187   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.386195   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:48.386202   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:48.386250   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:48.422372   61989 cri.go:89] found id: ""
	I0924 01:06:48.422398   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.422407   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:48.422413   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:48.422463   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:48.464007   61989 cri.go:89] found id: ""
	I0924 01:06:48.464032   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.464043   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:48.464054   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:48.464072   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:48.520533   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:48.520570   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:48.594453   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:48.594489   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:48.607309   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:48.607336   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:48.674078   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:48.674102   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:48.674117   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:50.740093   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:53.228567   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:50.028640   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:52.527656   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:51.906378   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:53.906887   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:51.256855   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:51.270305   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:51.270378   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:51.303450   61989 cri.go:89] found id: ""
	I0924 01:06:51.303487   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.303499   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:51.303508   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:51.303564   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:51.336959   61989 cri.go:89] found id: ""
	I0924 01:06:51.336987   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.337003   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:51.337010   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:51.337072   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:51.369210   61989 cri.go:89] found id: ""
	I0924 01:06:51.369239   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.369249   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:51.369260   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:51.369339   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:51.403595   61989 cri.go:89] found id: ""
	I0924 01:06:51.403645   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.403658   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:51.403666   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:51.403723   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:51.445459   61989 cri.go:89] found id: ""
	I0924 01:06:51.445493   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.445503   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:51.445510   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:51.445574   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:51.477615   61989 cri.go:89] found id: ""
	I0924 01:06:51.477642   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.477653   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:51.477660   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:51.477722   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:51.509737   61989 cri.go:89] found id: ""
	I0924 01:06:51.509766   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.509784   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:51.509792   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:51.509856   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:51.546451   61989 cri.go:89] found id: ""
	I0924 01:06:51.546479   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.546489   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:51.546501   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:51.546515   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:51.600277   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:51.600315   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:51.613403   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:51.613434   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:51.691645   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:51.691669   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:51.691688   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:51.772276   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:51.772312   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:54.313491   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:54.328265   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:54.328374   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:54.368091   61989 cri.go:89] found id: ""
	I0924 01:06:54.368117   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.368126   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:54.368131   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:54.368183   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:54.408272   61989 cri.go:89] found id: ""
	I0924 01:06:54.408300   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.408310   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:54.408318   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:54.408409   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:54.460467   61989 cri.go:89] found id: ""
	I0924 01:06:54.460489   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.460499   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:54.460506   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:54.460564   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:54.493310   61989 cri.go:89] found id: ""
	I0924 01:06:54.493334   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.493343   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:54.493349   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:54.493401   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:54.526772   61989 cri.go:89] found id: ""
	I0924 01:06:54.526799   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.526809   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:54.526817   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:54.526880   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:54.562235   61989 cri.go:89] found id: ""
	I0924 01:06:54.562264   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.562274   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:54.562283   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:54.562345   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:54.597755   61989 cri.go:89] found id: ""
	I0924 01:06:54.597784   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.597794   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:54.597803   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:54.597851   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:54.632225   61989 cri.go:89] found id: ""
	I0924 01:06:54.632282   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.632295   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:54.632305   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:54.632321   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:54.683849   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:54.683887   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:54.697395   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:54.697425   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:54.767577   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:54.767598   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:54.767609   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:54.842619   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:54.842655   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:55.728756   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:58.228520   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:54.528783   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:57.028039   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:59.028234   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:55.907673   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:57.907858   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:57.381394   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:57.394078   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:57.394147   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:57.431241   61989 cri.go:89] found id: ""
	I0924 01:06:57.431266   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.431278   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:57.431284   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:57.431352   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:57.468954   61989 cri.go:89] found id: ""
	I0924 01:06:57.468983   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.468994   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:57.469001   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:57.469060   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:57.503518   61989 cri.go:89] found id: ""
	I0924 01:06:57.503550   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.503562   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:57.503570   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:57.503618   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:57.540432   61989 cri.go:89] found id: ""
	I0924 01:06:57.540464   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.540475   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:57.540483   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:57.540548   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:57.574142   61989 cri.go:89] found id: ""
	I0924 01:06:57.574175   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.574187   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:57.574195   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:57.574264   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:57.608505   61989 cri.go:89] found id: ""
	I0924 01:06:57.608528   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.608537   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:57.608543   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:57.608589   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:57.644273   61989 cri.go:89] found id: ""
	I0924 01:06:57.644305   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.644317   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:57.644344   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:57.644409   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:57.682023   61989 cri.go:89] found id: ""
	I0924 01:06:57.682050   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.682060   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:57.682072   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:57.682086   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:57.732537   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:57.732570   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:57.746632   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:57.746663   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:57.813904   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:57.813927   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:57.813947   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:57.891947   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:57.891992   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:00.432035   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:00.444886   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:00.444966   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:00.482653   61989 cri.go:89] found id: ""
	I0924 01:07:00.482683   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.482694   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:00.482702   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:00.482754   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:00.516404   61989 cri.go:89] found id: ""
	I0924 01:07:00.516441   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.516452   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:00.516463   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:00.516527   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:00.552938   61989 cri.go:89] found id: ""
	I0924 01:07:00.552963   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.552971   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:00.552977   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:00.553043   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:00.589143   61989 cri.go:89] found id: ""
	I0924 01:07:00.589170   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.589178   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:00.589184   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:00.589235   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:00.625023   61989 cri.go:89] found id: ""
	I0924 01:07:00.625047   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.625059   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:00.625066   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:00.625127   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:00.662904   61989 cri.go:89] found id: ""
	I0924 01:07:00.662936   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.662948   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:00.662959   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:00.663022   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:00.702892   61989 cri.go:89] found id: ""
	I0924 01:07:00.702921   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.702932   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:00.702938   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:00.702988   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:00.737010   61989 cri.go:89] found id: ""
	I0924 01:07:00.737039   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.737050   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:00.737061   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:00.737075   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:00.788093   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:00.788132   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:00.801354   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:00.801382   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:00.866830   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:00.866862   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:00.866878   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:00.950034   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:00.950076   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:00.728279   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:03.227980   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:01.527849   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:04.027729   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:00.406445   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:02.407048   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:04.907569   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:03.492773   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:03.506158   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:03.506224   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:03.542369   61989 cri.go:89] found id: ""
	I0924 01:07:03.542397   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.542408   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:03.542416   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:03.542473   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:03.575019   61989 cri.go:89] found id: ""
	I0924 01:07:03.575046   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.575055   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:03.575060   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:03.575103   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:03.608576   61989 cri.go:89] found id: ""
	I0924 01:07:03.608603   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.608612   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:03.608619   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:03.608684   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:03.642359   61989 cri.go:89] found id: ""
	I0924 01:07:03.642389   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.642400   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:03.642407   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:03.642463   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:03.678192   61989 cri.go:89] found id: ""
	I0924 01:07:03.678216   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.678223   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:03.678229   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:03.678285   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:03.711773   61989 cri.go:89] found id: ""
	I0924 01:07:03.711795   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.711803   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:03.711809   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:03.711856   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:03.747792   61989 cri.go:89] found id: ""
	I0924 01:07:03.747819   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.747830   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:03.747838   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:03.747901   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:03.783284   61989 cri.go:89] found id: ""
	I0924 01:07:03.783312   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.783320   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:03.783331   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:03.783349   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:03.838704   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:03.838745   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:03.852650   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:03.852675   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:03.922474   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:03.922499   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:03.922511   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:03.997349   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:03.997388   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:05.228357   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:07.228789   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:06.028604   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:08.527156   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:06.908041   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:09.406803   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:06.537182   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:06.549745   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:06.549833   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:06.587879   61989 cri.go:89] found id: ""
	I0924 01:07:06.587910   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.587922   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:06.587930   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:06.587984   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:06.623419   61989 cri.go:89] found id: ""
	I0924 01:07:06.623447   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.623456   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:06.623462   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:06.623542   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:06.659228   61989 cri.go:89] found id: ""
	I0924 01:07:06.659260   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.659272   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:06.659280   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:06.659341   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:06.693300   61989 cri.go:89] found id: ""
	I0924 01:07:06.693330   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.693341   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:06.693349   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:06.693399   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:06.726237   61989 cri.go:89] found id: ""
	I0924 01:07:06.726267   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.726278   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:06.726286   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:06.726342   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:06.760627   61989 cri.go:89] found id: ""
	I0924 01:07:06.760659   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.760670   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:06.760677   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:06.760745   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:06.796029   61989 cri.go:89] found id: ""
	I0924 01:07:06.796062   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.796073   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:06.796081   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:06.796136   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:06.830197   61989 cri.go:89] found id: ""
	I0924 01:07:06.830230   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.830241   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:06.830251   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:06.830265   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:06.869055   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:06.869087   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:06.923840   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:06.923888   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:06.937510   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:06.937549   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:07.011461   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:07.011482   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:07.011496   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:09.591186   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:09.603900   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:09.603970   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:09.639003   61989 cri.go:89] found id: ""
	I0924 01:07:09.639035   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.639046   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:09.639055   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:09.639111   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:09.676494   61989 cri.go:89] found id: ""
	I0924 01:07:09.676528   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.676539   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:09.676547   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:09.676616   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:09.713080   61989 cri.go:89] found id: ""
	I0924 01:07:09.713103   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.713111   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:09.713117   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:09.713174   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:09.748425   61989 cri.go:89] found id: ""
	I0924 01:07:09.748449   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.748458   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:09.748465   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:09.748521   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:09.782526   61989 cri.go:89] found id: ""
	I0924 01:07:09.782559   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.782576   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:09.782584   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:09.782647   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:09.819137   61989 cri.go:89] found id: ""
	I0924 01:07:09.819159   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.819167   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:09.819173   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:09.819256   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:09.852953   61989 cri.go:89] found id: ""
	I0924 01:07:09.852976   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.852984   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:09.852989   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:09.853083   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:09.887254   61989 cri.go:89] found id: ""
	I0924 01:07:09.887282   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.887293   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:09.887304   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:09.887318   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:09.940029   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:09.940069   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:09.954298   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:09.954331   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:10.028926   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:10.028947   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:10.028957   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:10.116722   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:10.116761   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
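	(For reference: every "describe nodes" attempt above fails with "The connection to the server localhost:8443 was refused", which is consistent with no kube-apiserver container being found in the same cycles. A quick way to separate "apiserver not listening" from a kubeconfig/cert problem is a plain TCP dial to the port named in the log; the sketch below is a diagnostic aid, not part of the test harness, and the port 8443 is taken from the log output.)

	// Check whether anything is listening on the apiserver port from the log.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			// Matches the "connection ... refused" seen in the describe-nodes failures.
			fmt.Println("apiserver port closed:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8443; investigate kubeconfig/certs instead")
	}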
	I0924 01:07:09.728996   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:12.228342   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:10.527637   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:12.528324   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:11.410452   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:13.906451   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:12.654245   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:12.668635   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:12.668695   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:12.711575   61989 cri.go:89] found id: ""
	I0924 01:07:12.711601   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.711626   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:12.711632   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:12.711682   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:12.746104   61989 cri.go:89] found id: ""
	I0924 01:07:12.746131   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.746141   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:12.746149   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:12.746210   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:12.780229   61989 cri.go:89] found id: ""
	I0924 01:07:12.780260   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.780295   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:12.780303   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:12.780384   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:12.812968   61989 cri.go:89] found id: ""
	I0924 01:07:12.812998   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.813010   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:12.813024   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:12.813090   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:12.844212   61989 cri.go:89] found id: ""
	I0924 01:07:12.844241   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.844253   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:12.844260   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:12.844343   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:12.878662   61989 cri.go:89] found id: ""
	I0924 01:07:12.878690   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.878700   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:12.878707   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:12.878765   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:12.912782   61989 cri.go:89] found id: ""
	I0924 01:07:12.912805   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.912815   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:12.912822   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:12.912883   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:12.945694   61989 cri.go:89] found id: ""
	I0924 01:07:12.945726   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.945736   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:12.945747   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:12.945761   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:12.994841   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:12.994877   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:13.009582   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:13.009624   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:13.081972   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:13.081999   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:13.082017   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:13.162383   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:13.162420   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:15.704586   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:15.717608   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:15.717677   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:15.751794   61989 cri.go:89] found id: ""
	I0924 01:07:15.751829   61989 logs.go:276] 0 containers: []
	W0924 01:07:15.751840   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:15.751848   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:15.751916   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:15.791691   61989 cri.go:89] found id: ""
	I0924 01:07:15.791723   61989 logs.go:276] 0 containers: []
	W0924 01:07:15.791734   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:15.791742   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:15.791805   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:15.827934   61989 cri.go:89] found id: ""
	I0924 01:07:15.827957   61989 logs.go:276] 0 containers: []
	W0924 01:07:15.827965   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:15.827971   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:15.828017   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:15.862489   61989 cri.go:89] found id: ""
	I0924 01:07:15.862518   61989 logs.go:276] 0 containers: []
	W0924 01:07:15.862527   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:15.862532   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:15.862577   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:15.896754   61989 cri.go:89] found id: ""
	I0924 01:07:15.896786   61989 logs.go:276] 0 containers: []
	W0924 01:07:15.896798   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:15.896804   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:15.896857   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:15.934353   61989 cri.go:89] found id: ""
	I0924 01:07:15.934378   61989 logs.go:276] 0 containers: []
	W0924 01:07:15.934386   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:15.934392   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:15.934436   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:15.969204   61989 cri.go:89] found id: ""
	I0924 01:07:15.969237   61989 logs.go:276] 0 containers: []
	W0924 01:07:15.969246   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:15.969251   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:15.969309   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:14.228949   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:16.728382   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:15.027681   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:17.027847   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:15.907872   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:18.407563   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:16.008733   61989 cri.go:89] found id: ""
	I0924 01:07:16.008767   61989 logs.go:276] 0 containers: []
	W0924 01:07:16.008780   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:16.008792   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:16.008807   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:16.046993   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:16.047024   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:16.098768   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:16.098801   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:16.114429   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:16.114472   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:16.187450   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:16.187469   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:16.187489   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:18.767042   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:18.779825   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:18.779899   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:18.815410   61989 cri.go:89] found id: ""
	I0924 01:07:18.815436   61989 logs.go:276] 0 containers: []
	W0924 01:07:18.815447   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:18.815454   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:18.815523   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:18.849837   61989 cri.go:89] found id: ""
	I0924 01:07:18.849862   61989 logs.go:276] 0 containers: []
	W0924 01:07:18.849872   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:18.849880   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:18.849952   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:18.885183   61989 cri.go:89] found id: ""
	I0924 01:07:18.885215   61989 logs.go:276] 0 containers: []
	W0924 01:07:18.885227   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:18.885235   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:18.885314   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:18.922263   61989 cri.go:89] found id: ""
	I0924 01:07:18.922293   61989 logs.go:276] 0 containers: []
	W0924 01:07:18.922305   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:18.922312   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:18.922378   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:18.957235   61989 cri.go:89] found id: ""
	I0924 01:07:18.957263   61989 logs.go:276] 0 containers: []
	W0924 01:07:18.957272   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:18.957278   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:18.957331   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:18.989846   61989 cri.go:89] found id: ""
	I0924 01:07:18.989870   61989 logs.go:276] 0 containers: []
	W0924 01:07:18.989878   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:18.989884   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:18.989931   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:19.027264   61989 cri.go:89] found id: ""
	I0924 01:07:19.027298   61989 logs.go:276] 0 containers: []
	W0924 01:07:19.027308   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:19.027315   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:19.027373   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:19.065902   61989 cri.go:89] found id: ""
	I0924 01:07:19.065925   61989 logs.go:276] 0 containers: []
	W0924 01:07:19.065934   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:19.065944   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:19.065959   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:19.115515   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:19.115550   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:19.129761   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:19.129787   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:19.200299   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:19.200319   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:19.200351   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:19.282308   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:19.282360   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:18.732314   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:21.227773   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:23.228957   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:19.528117   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:22.028965   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:20.906860   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:23.407404   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:21.819442   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:21.834106   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:21.834165   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:21.866953   61989 cri.go:89] found id: ""
	I0924 01:07:21.866988   61989 logs.go:276] 0 containers: []
	W0924 01:07:21.866999   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:21.867008   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:21.867085   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:21.902561   61989 cri.go:89] found id: ""
	I0924 01:07:21.902637   61989 logs.go:276] 0 containers: []
	W0924 01:07:21.902654   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:21.902663   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:21.902729   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:21.936883   61989 cri.go:89] found id: ""
	I0924 01:07:21.936926   61989 logs.go:276] 0 containers: []
	W0924 01:07:21.936937   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:21.936943   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:21.936995   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:21.975375   61989 cri.go:89] found id: ""
	I0924 01:07:21.975402   61989 logs.go:276] 0 containers: []
	W0924 01:07:21.975411   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:21.975417   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:21.975465   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:22.012782   61989 cri.go:89] found id: ""
	I0924 01:07:22.012811   61989 logs.go:276] 0 containers: []
	W0924 01:07:22.012822   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:22.012830   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:22.012890   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:22.049344   61989 cri.go:89] found id: ""
	I0924 01:07:22.049370   61989 logs.go:276] 0 containers: []
	W0924 01:07:22.049379   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:22.049385   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:22.049442   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:22.088187   61989 cri.go:89] found id: ""
	I0924 01:07:22.088219   61989 logs.go:276] 0 containers: []
	W0924 01:07:22.088230   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:22.088239   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:22.088324   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:22.123357   61989 cri.go:89] found id: ""
	I0924 01:07:22.123386   61989 logs.go:276] 0 containers: []
	W0924 01:07:22.123397   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:22.123408   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:22.123423   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:22.176794   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:22.176828   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:22.192550   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:22.192591   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:22.263854   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:22.263881   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:22.263898   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:22.341735   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:22.341778   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:24.879834   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:24.892429   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:24.892504   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:24.926600   61989 cri.go:89] found id: ""
	I0924 01:07:24.926629   61989 logs.go:276] 0 containers: []
	W0924 01:07:24.926636   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:24.926642   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:24.926689   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:24.960370   61989 cri.go:89] found id: ""
	I0924 01:07:24.960399   61989 logs.go:276] 0 containers: []
	W0924 01:07:24.960408   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:24.960415   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:24.960471   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:24.993503   61989 cri.go:89] found id: ""
	I0924 01:07:24.993532   61989 logs.go:276] 0 containers: []
	W0924 01:07:24.993542   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:24.993549   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:24.993611   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:25.028027   61989 cri.go:89] found id: ""
	I0924 01:07:25.028055   61989 logs.go:276] 0 containers: []
	W0924 01:07:25.028065   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:25.028073   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:25.028129   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:25.062947   61989 cri.go:89] found id: ""
	I0924 01:07:25.062981   61989 logs.go:276] 0 containers: []
	W0924 01:07:25.062999   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:25.063009   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:25.063077   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:25.098895   61989 cri.go:89] found id: ""
	I0924 01:07:25.098927   61989 logs.go:276] 0 containers: []
	W0924 01:07:25.098939   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:25.098946   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:25.098996   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:25.132786   61989 cri.go:89] found id: ""
	I0924 01:07:25.132814   61989 logs.go:276] 0 containers: []
	W0924 01:07:25.132824   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:25.132830   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:25.132882   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:25.167603   61989 cri.go:89] found id: ""
	I0924 01:07:25.167634   61989 logs.go:276] 0 containers: []
	W0924 01:07:25.167644   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:25.167656   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:25.167671   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:25.220265   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:25.220303   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:25.234840   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:25.234884   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:25.307459   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:25.307485   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:25.307499   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:25.386496   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:25.386537   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
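	(For reference: the interleaved pod_ready lines from the other test processes are wait loops polling a metrics-server pod whose Ready condition never turns "True". A minimal Go sketch of such a wait loop via kubectl follows; the context, namespace, pod name, and retry budget are placeholders, not values from this report, and this is not minikube's pod_ready.go implementation.)

	// Poll a pod's Ready condition with kubectl until it is "True" or retries run out.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func podReady(context, namespace, pod string) (bool, error) {
		out, err := exec.Command("kubectl", "--context", context, "-n", namespace,
			"get", "pod", pod, "-o",
			`jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		if err != nil {
			return false, err
		}
		return strings.TrimSpace(string(out)) == "True", nil
	}

	func main() {
		for i := 0; i < 10; i++ {
			ready, err := podReady("my-context", "kube-system", "metrics-server-xxxxx")
			if err != nil {
				fmt.Println("poll error:", err)
			} else if ready {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		// Mirrors the repeated has status "Ready":"False" lines in the log.
		fmt.Println("pod never became Ready")
	}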
	I0924 01:07:25.229188   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:27.728978   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:24.531829   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:27.027182   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:29.029000   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:25.907018   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:28.406555   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:27.926064   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:27.939398   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:27.939480   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:27.976184   61989 cri.go:89] found id: ""
	I0924 01:07:27.976215   61989 logs.go:276] 0 containers: []
	W0924 01:07:27.976256   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:27.976265   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:27.976348   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:28.009389   61989 cri.go:89] found id: ""
	I0924 01:07:28.009419   61989 logs.go:276] 0 containers: []
	W0924 01:07:28.009431   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:28.009438   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:28.009501   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:28.045562   61989 cri.go:89] found id: ""
	I0924 01:07:28.045594   61989 logs.go:276] 0 containers: []
	W0924 01:07:28.045605   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:28.045613   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:28.045677   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:28.085318   61989 cri.go:89] found id: ""
	I0924 01:07:28.085345   61989 logs.go:276] 0 containers: []
	W0924 01:07:28.085357   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:28.085364   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:28.085419   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:28.119582   61989 cri.go:89] found id: ""
	I0924 01:07:28.119607   61989 logs.go:276] 0 containers: []
	W0924 01:07:28.119617   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:28.119626   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:28.119690   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:28.151445   61989 cri.go:89] found id: ""
	I0924 01:07:28.151493   61989 logs.go:276] 0 containers: []
	W0924 01:07:28.151505   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:28.151513   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:28.151578   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:28.185966   61989 cri.go:89] found id: ""
	I0924 01:07:28.185997   61989 logs.go:276] 0 containers: []
	W0924 01:07:28.186009   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:28.186016   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:28.186078   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:28.219012   61989 cri.go:89] found id: ""
	I0924 01:07:28.219037   61989 logs.go:276] 0 containers: []
	W0924 01:07:28.219044   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:28.219052   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:28.219089   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:28.272186   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:28.272222   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:28.286346   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:28.286383   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:28.370949   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:28.370975   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:28.370985   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:28.453740   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:28.453775   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
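	[editor's note] The "Gathering logs for ..." steps run shell pipelines (journalctl, dmesg piped to tail, and a crictl-or-docker fallback), which is why each command is wrapped in /bin/bash -c instead of being exec'd directly. The sketch below is an illustrative stand-alone runner for those same commands, assuming they exist on the node; it is not minikube's ssh_runner.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// gather runs one of the log-collection pipelines from the cycle above.
	// The strings contain pipes and backticks, hence the /bin/bash -c wrapper.
	func gather(name, command string) {
		out, err := exec.Command("/bin/bash", "-c", command).CombinedOutput()
		if err != nil {
			fmt.Printf("%s: %v\n", name, err)
		}
		fmt.Printf("%s: collected %d bytes\n", name, len(out))
	}

	func main() {
		gather("kubelet", "sudo journalctl -u kubelet -n 400")
		gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
		gather("CRI-O", "sudo journalctl -u crio -n 400")
		gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
	}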
	I0924 01:07:30.229141   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:32.728919   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:31.527080   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:34.028315   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:30.407040   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:32.407075   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:34.407711   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:30.993536   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:31.006297   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:31.006369   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:31.042081   61989 cri.go:89] found id: ""
	I0924 01:07:31.042114   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.042123   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:31.042129   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:31.042185   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:31.077119   61989 cri.go:89] found id: ""
	I0924 01:07:31.077144   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.077153   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:31.077159   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:31.077208   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:31.110148   61989 cri.go:89] found id: ""
	I0924 01:07:31.110179   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.110187   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:31.110193   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:31.110246   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:31.143551   61989 cri.go:89] found id: ""
	I0924 01:07:31.143578   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.143585   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:31.143591   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:31.143638   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:31.177212   61989 cri.go:89] found id: ""
	I0924 01:07:31.177262   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.177272   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:31.177279   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:31.177329   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:31.209290   61989 cri.go:89] found id: ""
	I0924 01:07:31.209321   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.209332   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:31.209340   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:31.209398   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:31.247299   61989 cri.go:89] found id: ""
	I0924 01:07:31.247334   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.247346   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:31.247355   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:31.247419   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:31.285010   61989 cri.go:89] found id: ""
	I0924 01:07:31.285047   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.285060   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:31.285072   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:31.285087   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:31.323819   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:31.323855   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:31.378348   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:31.378388   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:31.393944   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:31.393983   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:31.464940   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:31.464966   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:31.464978   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:34.042144   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:34.055183   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:34.055268   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:34.103044   61989 cri.go:89] found id: ""
	I0924 01:07:34.103075   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.103086   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:34.103094   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:34.103162   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:34.141379   61989 cri.go:89] found id: ""
	I0924 01:07:34.141412   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.141424   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:34.141432   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:34.141493   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:34.179545   61989 cri.go:89] found id: ""
	I0924 01:07:34.179574   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.179582   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:34.179588   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:34.179655   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:34.217683   61989 cri.go:89] found id: ""
	I0924 01:07:34.217719   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.217739   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:34.217748   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:34.217806   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:34.257597   61989 cri.go:89] found id: ""
	I0924 01:07:34.257630   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.257642   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:34.257651   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:34.257723   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:34.295410   61989 cri.go:89] found id: ""
	I0924 01:07:34.295440   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.295452   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:34.295460   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:34.295523   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:34.331309   61989 cri.go:89] found id: ""
	I0924 01:07:34.331340   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.331350   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:34.331358   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:34.331460   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:34.367549   61989 cri.go:89] found id: ""
	I0924 01:07:34.367580   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.367590   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:34.367601   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:34.367615   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:34.421785   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:34.421823   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:34.435162   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:34.435198   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:34.504051   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:34.504073   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:34.504090   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:34.582343   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:34.582384   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:35.229391   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:37.229522   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:36.527047   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:38.527472   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:36.906974   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:38.907529   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:37.124727   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:37.139374   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:37.139431   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:37.176474   61989 cri.go:89] found id: ""
	I0924 01:07:37.176500   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.176510   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:37.176515   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:37.176560   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:37.209944   61989 cri.go:89] found id: ""
	I0924 01:07:37.209971   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.209983   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:37.209990   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:37.210055   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:37.242894   61989 cri.go:89] found id: ""
	I0924 01:07:37.242923   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.242933   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:37.242941   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:37.242996   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:37.276517   61989 cri.go:89] found id: ""
	I0924 01:07:37.276547   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.276558   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:37.276566   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:37.276626   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:37.310169   61989 cri.go:89] found id: ""
	I0924 01:07:37.310196   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.310207   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:37.310214   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:37.310282   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:37.342992   61989 cri.go:89] found id: ""
	I0924 01:07:37.343019   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.343027   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:37.343035   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:37.343088   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:37.375024   61989 cri.go:89] found id: ""
	I0924 01:07:37.375051   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.375062   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:37.375069   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:37.375137   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:37.409736   61989 cri.go:89] found id: ""
	I0924 01:07:37.409761   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.409768   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:37.409776   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:37.409787   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:37.474744   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:37.474767   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:37.474783   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:37.551479   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:37.551515   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:37.590597   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:37.590632   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:37.642781   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:37.642820   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:40.156480   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:40.171002   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:40.171079   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:40.207383   61989 cri.go:89] found id: ""
	I0924 01:07:40.207410   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.207418   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:40.207424   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:40.207474   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:40.245535   61989 cri.go:89] found id: ""
	I0924 01:07:40.245560   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.245568   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:40.245574   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:40.245620   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:40.283858   61989 cri.go:89] found id: ""
	I0924 01:07:40.283888   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.283900   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:40.283909   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:40.283982   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:40.320527   61989 cri.go:89] found id: ""
	I0924 01:07:40.320555   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.320566   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:40.320575   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:40.320633   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:40.354364   61989 cri.go:89] found id: ""
	I0924 01:07:40.354390   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.354397   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:40.354403   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:40.354473   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:40.388407   61989 cri.go:89] found id: ""
	I0924 01:07:40.388431   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.388439   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:40.388444   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:40.388512   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:40.423809   61989 cri.go:89] found id: ""
	I0924 01:07:40.423838   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.423847   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:40.423853   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:40.423908   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:40.459160   61989 cri.go:89] found id: ""
	I0924 01:07:40.459188   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.459199   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:40.459210   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:40.459223   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:40.530418   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:40.530456   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:40.551644   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:40.551683   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:40.634564   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:40.634587   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:40.634599   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:40.717897   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:40.717934   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:39.728642   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:41.728725   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:40.528294   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:43.028364   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:41.406835   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:43.907015   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:43.257992   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:43.272134   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:43.272204   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:43.306747   61989 cri.go:89] found id: ""
	I0924 01:07:43.306775   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.306797   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:43.306806   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:43.306923   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:43.342922   61989 cri.go:89] found id: ""
	I0924 01:07:43.342954   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.342963   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:43.342974   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:43.343028   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:43.378666   61989 cri.go:89] found id: ""
	I0924 01:07:43.378694   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.378703   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:43.378709   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:43.378760   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:43.414348   61989 cri.go:89] found id: ""
	I0924 01:07:43.414376   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.414387   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:43.414395   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:43.414457   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:43.447687   61989 cri.go:89] found id: ""
	I0924 01:07:43.447718   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.447728   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:43.447735   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:43.447804   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:43.482166   61989 cri.go:89] found id: ""
	I0924 01:07:43.482195   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.482205   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:43.482211   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:43.482275   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:43.518112   61989 cri.go:89] found id: ""
	I0924 01:07:43.518146   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.518159   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:43.518167   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:43.518231   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:43.553853   61989 cri.go:89] found id: ""
	I0924 01:07:43.553875   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.553883   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:43.553891   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:43.553902   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:43.603410   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:43.603445   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:43.616413   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:43.616438   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:43.685077   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:43.685101   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:43.685113   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:43.760758   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:43.760803   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:43.729237   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:46.228084   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:48.228503   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:45.527095   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:47.529540   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:46.407150   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:48.407253   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:46.300532   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:46.315982   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:46.316050   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:46.356523   61989 cri.go:89] found id: ""
	I0924 01:07:46.356554   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.356565   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:46.356573   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:46.356633   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:46.405399   61989 cri.go:89] found id: ""
	I0924 01:07:46.405429   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.405439   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:46.405447   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:46.405512   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:46.454819   61989 cri.go:89] found id: ""
	I0924 01:07:46.454844   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.454853   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:46.454858   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:46.454918   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:46.499094   61989 cri.go:89] found id: ""
	I0924 01:07:46.499123   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.499134   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:46.499142   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:46.499196   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:46.532976   61989 cri.go:89] found id: ""
	I0924 01:07:46.533006   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.533017   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:46.533025   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:46.533083   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:46.565488   61989 cri.go:89] found id: ""
	I0924 01:07:46.565523   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.565534   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:46.565546   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:46.565610   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:46.598457   61989 cri.go:89] found id: ""
	I0924 01:07:46.598486   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.598496   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:46.598503   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:46.598551   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:46.631892   61989 cri.go:89] found id: ""
	I0924 01:07:46.631920   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.631931   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:46.631941   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:46.631956   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:46.709966   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:46.710013   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:46.749154   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:46.749184   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:46.798192   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:46.798228   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:46.811902   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:46.811951   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:46.885878   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
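	[editor's note] Every "describe nodes" attempt above fails the same way because nothing is accepting connections on the apiserver port yet, so kubectl reports "connection refused" against localhost:8443. The sketch below is an illustrative stand-alone probe (not part of the test harness) that reproduces the same symptom.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// While kube-apiserver is down, this dial fails with "connection refused",
		// the same condition kubectl reports for localhost:8443 in the log above.
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver not reachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("apiserver port is accepting connections")
	}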
	I0924 01:07:49.386775   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:49.399324   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:49.399383   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:49.437061   61989 cri.go:89] found id: ""
	I0924 01:07:49.437092   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.437104   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:49.437111   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:49.437160   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:49.470882   61989 cri.go:89] found id: ""
	I0924 01:07:49.470908   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.470919   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:49.470927   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:49.470989   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:49.506894   61989 cri.go:89] found id: ""
	I0924 01:07:49.506926   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.506938   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:49.506947   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:49.507018   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:49.540768   61989 cri.go:89] found id: ""
	I0924 01:07:49.540800   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.540813   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:49.540822   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:49.540888   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:49.576486   61989 cri.go:89] found id: ""
	I0924 01:07:49.576515   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.576523   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:49.576530   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:49.576579   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:49.612456   61989 cri.go:89] found id: ""
	I0924 01:07:49.612479   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.612487   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:49.612495   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:49.612542   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:49.646085   61989 cri.go:89] found id: ""
	I0924 01:07:49.646118   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.646127   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:49.646132   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:49.646178   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:49.682538   61989 cri.go:89] found id: ""
	I0924 01:07:49.682565   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.682574   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:49.682583   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:49.682594   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:49.721791   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:49.721817   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:49.774842   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:49.774889   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:49.789082   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:49.789129   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:49.866437   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:49.866464   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:49.866478   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:50.727581   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:52.729391   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:50.027396   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:52.028176   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:50.407654   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:52.908118   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
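	[editor's note] The interleaved pod_ready lines come from three other clusters polling their metrics-server pods for a Ready condition. The sketch below is an illustrative version of such a poll using plain kubectl; the namespace, label selector, retry count, and 2-second cadence are assumptions for the example, not values taken from the harness.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// podsReady asks kubectl whether every pod matching the selector reports a
	// Ready condition of "True". Namespace and selector are illustrative.
	func podsReady(namespace, selector string) (bool, error) {
		out, err := exec.Command("kubectl", "-n", namespace, "get", "pods", "-l", selector,
			"-o", `jsonpath={.items[*].status.conditions[?(@.type=="Ready")].status}`).Output()
		if err != nil {
			return false, err
		}
		statuses := strings.Fields(string(out))
		if len(statuses) == 0 {
			return false, nil // no matching pods yet
		}
		for _, s := range statuses {
			if s != "True" {
				return false, nil
			}
		}
		return true, nil
	}

	func main() {
		for i := 0; i < 10; i++ {
			ready, err := podsReady("kube-system", "k8s-app=metrics-server")
			if err != nil {
				fmt.Println("poll error:", err)
			} else if ready {
				fmt.Println("metrics-server is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("metrics-server still not Ready")
	}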
	I0924 01:07:52.445166   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:52.459060   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:52.459126   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:52.496521   61989 cri.go:89] found id: ""
	I0924 01:07:52.496550   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.496562   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:52.496571   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:52.496652   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:52.533575   61989 cri.go:89] found id: ""
	I0924 01:07:52.533600   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.533608   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:52.533615   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:52.533693   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:52.571666   61989 cri.go:89] found id: ""
	I0924 01:07:52.571693   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.571703   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:52.571710   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:52.571758   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:52.603929   61989 cri.go:89] found id: ""
	I0924 01:07:52.603957   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.603968   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:52.603976   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:52.604034   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:52.635581   61989 cri.go:89] found id: ""
	I0924 01:07:52.635607   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.635614   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:52.635620   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:52.635669   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:52.673865   61989 cri.go:89] found id: ""
	I0924 01:07:52.673889   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.673897   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:52.673903   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:52.673953   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:52.709885   61989 cri.go:89] found id: ""
	I0924 01:07:52.709910   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.709918   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:52.709925   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:52.709986   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:52.746409   61989 cri.go:89] found id: ""
	I0924 01:07:52.746439   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.746450   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:52.746461   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:52.746475   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:52.798020   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:52.798054   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:52.811940   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:52.811967   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:52.888091   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:52.888114   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:52.888129   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:52.968955   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:52.969000   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:55.507204   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:55.520581   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:55.520657   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:55.555772   61989 cri.go:89] found id: ""
	I0924 01:07:55.555809   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.555821   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:55.555828   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:55.555880   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:55.593765   61989 cri.go:89] found id: ""
	I0924 01:07:55.593791   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.593802   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:55.593808   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:55.593866   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:55.630292   61989 cri.go:89] found id: ""
	I0924 01:07:55.630325   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.630337   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:55.630344   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:55.630408   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:55.665703   61989 cri.go:89] found id: ""
	I0924 01:07:55.665730   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.665741   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:55.665748   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:55.665807   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:55.701911   61989 cri.go:89] found id: ""
	I0924 01:07:55.701938   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.701949   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:55.701957   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:55.702020   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:55.734343   61989 cri.go:89] found id: ""
	I0924 01:07:55.734373   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.734385   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:55.734394   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:55.734460   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:55.768606   61989 cri.go:89] found id: ""
	I0924 01:07:55.768633   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.768645   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:55.768653   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:55.768716   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:55.800720   61989 cri.go:89] found id: ""
	I0924 01:07:55.800747   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.800757   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:55.800768   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:55.800782   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:55.851702   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:55.851737   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:55.865657   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:55.865687   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:55.940175   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:55.940197   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:55.940207   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:55.227954   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:57.228969   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:54.528417   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:56.529326   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:59.027653   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:55.407038   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:57.906886   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:56.015832   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:56.015870   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:58.557571   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:58.572208   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:58.572274   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:58.605081   61989 cri.go:89] found id: ""
	I0924 01:07:58.605109   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.605121   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:58.605128   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:58.605185   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:58.641518   61989 cri.go:89] found id: ""
	I0924 01:07:58.641548   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.641559   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:58.641566   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:58.641617   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:58.680623   61989 cri.go:89] found id: ""
	I0924 01:07:58.680653   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.680664   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:58.680675   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:58.680735   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:58.713658   61989 cri.go:89] found id: ""
	I0924 01:07:58.713684   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.713693   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:58.713700   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:58.713754   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:58.746264   61989 cri.go:89] found id: ""
	I0924 01:07:58.746298   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.746307   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:58.746313   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:58.746358   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:58.779812   61989 cri.go:89] found id: ""
	I0924 01:07:58.779846   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.779912   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:58.779924   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:58.779984   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:58.813203   61989 cri.go:89] found id: ""
	I0924 01:07:58.813236   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.813245   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:58.813252   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:58.813303   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:58.845872   61989 cri.go:89] found id: ""
	I0924 01:07:58.845898   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.845906   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:58.845915   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:58.845925   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:58.897480   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:58.897515   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:58.912904   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:58.912936   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:58.982882   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:58.982908   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:58.982921   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:59.058495   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:59.058535   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:59.729215   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:02.228358   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:01.028678   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:03.527682   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:00.407897   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:02.907608   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:04.907717   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:01.596672   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:01.609550   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:01.609625   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:01.648819   61989 cri.go:89] found id: ""
	I0924 01:08:01.648847   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.648857   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:01.648864   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:01.649000   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:01.685419   61989 cri.go:89] found id: ""
	I0924 01:08:01.685450   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.685458   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:01.685464   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:01.685533   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:01.720426   61989 cri.go:89] found id: ""
	I0924 01:08:01.720455   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.720464   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:01.720473   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:01.720537   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:01.755292   61989 cri.go:89] found id: ""
	I0924 01:08:01.755316   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.755324   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:01.755331   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:01.755398   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:01.788673   61989 cri.go:89] found id: ""
	I0924 01:08:01.788703   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.788713   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:01.788721   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:01.788789   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:01.824724   61989 cri.go:89] found id: ""
	I0924 01:08:01.824761   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.824773   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:01.824781   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:01.824838   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:01.858492   61989 cri.go:89] found id: ""
	I0924 01:08:01.858531   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.858542   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:01.858556   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:01.858623   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:01.892135   61989 cri.go:89] found id: ""
	I0924 01:08:01.892167   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.892177   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:01.892192   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:01.892205   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:01.905820   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:01.905849   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:01.977998   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:01.978026   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:01.978039   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:02.060441   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:02.060480   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:02.100029   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:02.100057   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:04.653124   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:04.665726   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:04.665784   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:04.700755   61989 cri.go:89] found id: ""
	I0924 01:08:04.700785   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.700796   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:04.700804   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:04.700858   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:04.736955   61989 cri.go:89] found id: ""
	I0924 01:08:04.736983   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.736992   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:04.736998   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:04.737051   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:04.770940   61989 cri.go:89] found id: ""
	I0924 01:08:04.770969   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.770977   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:04.770983   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:04.771051   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:04.805376   61989 cri.go:89] found id: ""
	I0924 01:08:04.805403   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.805411   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:04.805417   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:04.805471   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:04.840995   61989 cri.go:89] found id: ""
	I0924 01:08:04.841016   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.841024   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:04.841030   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:04.841077   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:04.875418   61989 cri.go:89] found id: ""
	I0924 01:08:04.875449   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.875460   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:04.875468   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:04.875546   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:04.910675   61989 cri.go:89] found id: ""
	I0924 01:08:04.910696   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.910704   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:04.910710   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:04.910764   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:04.945531   61989 cri.go:89] found id: ""
	I0924 01:08:04.945562   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.945570   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:04.945578   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:04.945589   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:04.997696   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:04.997734   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:05.011296   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:05.011329   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:05.087878   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:05.087905   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:05.087919   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:05.164073   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:05.164111   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:04.228985   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:06.734525   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:06.031377   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:08.528160   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:06.908017   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:09.407255   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:07.713496   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:07.726590   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:07.726649   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:07.760050   61989 cri.go:89] found id: ""
	I0924 01:08:07.760081   61989 logs.go:276] 0 containers: []
	W0924 01:08:07.760092   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:07.760100   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:07.760152   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:07.797709   61989 cri.go:89] found id: ""
	I0924 01:08:07.797736   61989 logs.go:276] 0 containers: []
	W0924 01:08:07.797744   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:07.797749   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:07.797803   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:07.836351   61989 cri.go:89] found id: ""
	I0924 01:08:07.836380   61989 logs.go:276] 0 containers: []
	W0924 01:08:07.836391   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:07.836399   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:07.836471   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:07.871133   61989 cri.go:89] found id: ""
	I0924 01:08:07.871159   61989 logs.go:276] 0 containers: []
	W0924 01:08:07.871170   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:07.871178   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:07.871229   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:07.906640   61989 cri.go:89] found id: ""
	I0924 01:08:07.906663   61989 logs.go:276] 0 containers: []
	W0924 01:08:07.906673   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:07.906682   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:07.906741   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:07.940919   61989 cri.go:89] found id: ""
	I0924 01:08:07.940945   61989 logs.go:276] 0 containers: []
	W0924 01:08:07.940953   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:07.940959   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:07.941018   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:07.975533   61989 cri.go:89] found id: ""
	I0924 01:08:07.975562   61989 logs.go:276] 0 containers: []
	W0924 01:08:07.975570   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:07.975576   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:07.975627   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:08.009137   61989 cri.go:89] found id: ""
	I0924 01:08:08.009163   61989 logs.go:276] 0 containers: []
	W0924 01:08:08.009173   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:08.009183   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:08.009196   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:08.065199   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:08.065252   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:08.080159   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:08.080188   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:08.154003   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:08.154025   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:08.154039   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:08.235522   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:08.235561   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:10.774666   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:10.787704   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:10.787775   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:10.822721   61989 cri.go:89] found id: ""
	I0924 01:08:10.822759   61989 logs.go:276] 0 containers: []
	W0924 01:08:10.822770   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:10.822781   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:10.822852   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:10.857113   61989 cri.go:89] found id: ""
	I0924 01:08:10.857138   61989 logs.go:276] 0 containers: []
	W0924 01:08:10.857146   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:10.857152   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:10.857201   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:10.890974   61989 cri.go:89] found id: ""
	I0924 01:08:10.891001   61989 logs.go:276] 0 containers: []
	W0924 01:08:10.891012   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:10.891020   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:10.891086   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:10.929771   61989 cri.go:89] found id: ""
	I0924 01:08:10.929793   61989 logs.go:276] 0 containers: []
	W0924 01:08:10.929800   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:10.929806   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:10.929851   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:10.961988   61989 cri.go:89] found id: ""
	I0924 01:08:10.962015   61989 logs.go:276] 0 containers: []
	W0924 01:08:10.962027   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:10.962035   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:10.962100   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:09.228600   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:11.729142   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:10.528626   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:13.027656   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:11.906981   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:13.907232   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:10.993591   61989 cri.go:89] found id: ""
	I0924 01:08:10.993622   61989 logs.go:276] 0 containers: []
	W0924 01:08:10.993633   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:10.993639   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:10.993691   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:11.032468   61989 cri.go:89] found id: ""
	I0924 01:08:11.032496   61989 logs.go:276] 0 containers: []
	W0924 01:08:11.032506   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:11.032514   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:11.032576   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:11.066900   61989 cri.go:89] found id: ""
	I0924 01:08:11.066924   61989 logs.go:276] 0 containers: []
	W0924 01:08:11.066931   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:11.066939   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:11.066950   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:11.136412   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:11.136443   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:11.136459   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:11.218326   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:11.218361   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:11.260695   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:11.260728   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:11.310102   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:11.310133   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:13.825540   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:13.838208   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:13.838283   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:13.874539   61989 cri.go:89] found id: ""
	I0924 01:08:13.874567   61989 logs.go:276] 0 containers: []
	W0924 01:08:13.874576   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:13.874581   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:13.874628   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:13.911818   61989 cri.go:89] found id: ""
	I0924 01:08:13.911839   61989 logs.go:276] 0 containers: []
	W0924 01:08:13.911846   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:13.911852   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:13.911897   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:13.944766   61989 cri.go:89] found id: ""
	I0924 01:08:13.944789   61989 logs.go:276] 0 containers: []
	W0924 01:08:13.944797   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:13.944802   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:13.944847   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:13.980712   61989 cri.go:89] found id: ""
	I0924 01:08:13.980742   61989 logs.go:276] 0 containers: []
	W0924 01:08:13.980752   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:13.980758   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:13.980817   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:14.016103   61989 cri.go:89] found id: ""
	I0924 01:08:14.016130   61989 logs.go:276] 0 containers: []
	W0924 01:08:14.016138   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:14.016143   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:14.016192   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:14.051884   61989 cri.go:89] found id: ""
	I0924 01:08:14.051929   61989 logs.go:276] 0 containers: []
	W0924 01:08:14.051943   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:14.051954   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:14.052046   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:14.088928   61989 cri.go:89] found id: ""
	I0924 01:08:14.088954   61989 logs.go:276] 0 containers: []
	W0924 01:08:14.088964   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:14.088970   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:14.089020   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:14.123057   61989 cri.go:89] found id: ""
	I0924 01:08:14.123083   61989 logs.go:276] 0 containers: []
	W0924 01:08:14.123091   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:14.123099   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:14.123112   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:14.174249   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:14.174287   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:14.188409   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:14.188442   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:14.258906   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:14.258932   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:14.258942   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:14.340891   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:14.340928   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:14.229459   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:16.728316   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:15.028158   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:17.527615   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:15.907490   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:17.907845   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:19.901512   61323 pod_ready.go:82] duration metric: took 4m0.001092501s for pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace to be "Ready" ...
	E0924 01:08:19.901552   61323 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace to be "Ready" (will not retry!)
	I0924 01:08:19.901576   61323 pod_ready.go:39] duration metric: took 4m10.04955154s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:08:19.901606   61323 kubeadm.go:597] duration metric: took 4m18.184472182s to restartPrimaryControlPlane
	W0924 01:08:19.901701   61323 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0924 01:08:19.901736   61323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0924 01:08:16.877728   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:16.890548   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:16.890617   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:16.924414   61989 cri.go:89] found id: ""
	I0924 01:08:16.924439   61989 logs.go:276] 0 containers: []
	W0924 01:08:16.924451   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:16.924458   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:16.924510   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:16.960295   61989 cri.go:89] found id: ""
	I0924 01:08:16.960323   61989 logs.go:276] 0 containers: []
	W0924 01:08:16.960344   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:16.960352   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:16.960405   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:16.993171   61989 cri.go:89] found id: ""
	I0924 01:08:16.993204   61989 logs.go:276] 0 containers: []
	W0924 01:08:16.993216   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:16.993224   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:16.993287   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:17.028122   61989 cri.go:89] found id: ""
	I0924 01:08:17.028150   61989 logs.go:276] 0 containers: []
	W0924 01:08:17.028160   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:17.028169   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:17.028261   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:17.068401   61989 cri.go:89] found id: ""
	I0924 01:08:17.068440   61989 logs.go:276] 0 containers: []
	W0924 01:08:17.068451   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:17.068458   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:17.068530   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:17.104250   61989 cri.go:89] found id: ""
	I0924 01:08:17.104275   61989 logs.go:276] 0 containers: []
	W0924 01:08:17.104283   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:17.104299   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:17.104370   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:17.139178   61989 cri.go:89] found id: ""
	I0924 01:08:17.139201   61989 logs.go:276] 0 containers: []
	W0924 01:08:17.139209   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:17.139215   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:17.139288   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:17.172677   61989 cri.go:89] found id: ""
	I0924 01:08:17.172703   61989 logs.go:276] 0 containers: []
	W0924 01:08:17.172712   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:17.172727   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:17.172742   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:17.222039   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:17.222082   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:17.235342   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:17.235371   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:17.300313   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:17.300350   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:17.300366   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:17.382465   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:17.382517   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:19.924928   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:19.941406   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:19.941496   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:19.976196   61989 cri.go:89] found id: ""
	I0924 01:08:19.976224   61989 logs.go:276] 0 containers: []
	W0924 01:08:19.976238   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:19.976247   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:19.976314   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:20.019652   61989 cri.go:89] found id: ""
	I0924 01:08:20.019680   61989 logs.go:276] 0 containers: []
	W0924 01:08:20.019692   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:20.019699   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:20.019757   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:20.055098   61989 cri.go:89] found id: ""
	I0924 01:08:20.055123   61989 logs.go:276] 0 containers: []
	W0924 01:08:20.055130   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:20.055135   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:20.055183   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:20.091428   61989 cri.go:89] found id: ""
	I0924 01:08:20.091458   61989 logs.go:276] 0 containers: []
	W0924 01:08:20.091469   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:20.091476   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:20.091532   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:20.123608   61989 cri.go:89] found id: ""
	I0924 01:08:20.123642   61989 logs.go:276] 0 containers: []
	W0924 01:08:20.123653   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:20.123678   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:20.123745   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:20.165885   61989 cri.go:89] found id: ""
	I0924 01:08:20.165913   61989 logs.go:276] 0 containers: []
	W0924 01:08:20.165926   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:20.165934   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:20.165985   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:20.199300   61989 cri.go:89] found id: ""
	I0924 01:08:20.199329   61989 logs.go:276] 0 containers: []
	W0924 01:08:20.199341   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:20.199348   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:20.199415   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:20.237201   61989 cri.go:89] found id: ""
	I0924 01:08:20.237253   61989 logs.go:276] 0 containers: []
	W0924 01:08:20.237262   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:20.237271   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:20.237284   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:20.285008   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:20.285049   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:20.298974   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:20.299014   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:20.385765   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:20.385793   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:20.385807   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:20.460715   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:20.460752   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:19.227947   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:21.228448   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:23.229022   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:19.527785   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:21.528095   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:23.528420   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:23.000163   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:23.014755   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:23.014828   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:23.048877   61989 cri.go:89] found id: ""
	I0924 01:08:23.048909   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.048921   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:23.048979   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:23.049049   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:23.085614   61989 cri.go:89] found id: ""
	I0924 01:08:23.085643   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.085650   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:23.085658   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:23.085718   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:23.122027   61989 cri.go:89] found id: ""
	I0924 01:08:23.122060   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.122071   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:23.122078   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:23.122136   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:23.156838   61989 cri.go:89] found id: ""
	I0924 01:08:23.156868   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.156879   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:23.156887   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:23.156947   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:23.191528   61989 cri.go:89] found id: ""
	I0924 01:08:23.191569   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.191579   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:23.191586   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:23.191651   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:23.227627   61989 cri.go:89] found id: ""
	I0924 01:08:23.227651   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.227659   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:23.227665   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:23.227709   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:23.261937   61989 cri.go:89] found id: ""
	I0924 01:08:23.261968   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.261980   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:23.261988   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:23.262039   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:23.297947   61989 cri.go:89] found id: ""
	I0924 01:08:23.297973   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.297986   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:23.297997   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:23.298009   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:23.337783   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:23.337811   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:23.390767   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:23.390808   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:23.404787   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:23.404814   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:23.478768   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:23.478788   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:23.478801   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:25.728154   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:28.227795   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:25.529710   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:28.028153   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:26.060593   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:26.085071   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:26.085137   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:26.121785   61989 cri.go:89] found id: ""
	I0924 01:08:26.121814   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.121826   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:26.121834   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:26.121900   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:26.167942   61989 cri.go:89] found id: ""
	I0924 01:08:26.167971   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.167980   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:26.167991   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:26.168054   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:26.206461   61989 cri.go:89] found id: ""
	I0924 01:08:26.206496   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.206506   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:26.206513   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:26.206582   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:26.243094   61989 cri.go:89] found id: ""
	I0924 01:08:26.243125   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.243136   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:26.243144   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:26.243206   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:26.279303   61989 cri.go:89] found id: ""
	I0924 01:08:26.279331   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.279341   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:26.279348   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:26.279407   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:26.311840   61989 cri.go:89] found id: ""
	I0924 01:08:26.311869   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.311880   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:26.311888   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:26.311954   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:26.345994   61989 cri.go:89] found id: ""
	I0924 01:08:26.346019   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.346027   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:26.346033   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:26.346082   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:26.380570   61989 cri.go:89] found id: ""
	I0924 01:08:26.380601   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.380610   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:26.380619   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:26.380630   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:26.429958   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:26.429993   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:26.443278   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:26.443312   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:26.516353   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:26.516375   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:26.516390   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:26.603310   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:26.603345   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:29.142531   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:29.156548   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:29.156634   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:29.191351   61989 cri.go:89] found id: ""
	I0924 01:08:29.191378   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.191389   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:29.191396   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:29.191451   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:29.232112   61989 cri.go:89] found id: ""
	I0924 01:08:29.232141   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.232152   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:29.232159   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:29.232214   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:29.266082   61989 cri.go:89] found id: ""
	I0924 01:08:29.266104   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.266112   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:29.266118   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:29.266178   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:29.299777   61989 cri.go:89] found id: ""
	I0924 01:08:29.299802   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.299812   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:29.299817   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:29.299883   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:29.342709   61989 cri.go:89] found id: ""
	I0924 01:08:29.342740   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.342749   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:29.342756   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:29.342816   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:29.381255   61989 cri.go:89] found id: ""
	I0924 01:08:29.381303   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.381312   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:29.381318   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:29.381375   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:29.414998   61989 cri.go:89] found id: ""
	I0924 01:08:29.415028   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.415036   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:29.415043   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:29.415101   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:29.448553   61989 cri.go:89] found id: ""
	I0924 01:08:29.448580   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.448589   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:29.448598   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:29.448608   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:29.534936   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:29.535001   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:29.573554   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:29.573584   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:29.623590   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:29.623626   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:29.636141   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:29.636167   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:29.700591   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:30.228993   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:32.229458   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:30.528150   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:33.029011   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:32.201184   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:32.215034   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:32.215102   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:32.250990   61989 cri.go:89] found id: ""
	I0924 01:08:32.251016   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.251026   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:32.251033   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:32.251104   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:32.284448   61989 cri.go:89] found id: ""
	I0924 01:08:32.284483   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.284494   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:32.284504   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:32.284570   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:32.317979   61989 cri.go:89] found id: ""
	I0924 01:08:32.318004   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.318015   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:32.318022   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:32.318078   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:32.352057   61989 cri.go:89] found id: ""
	I0924 01:08:32.352082   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.352093   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:32.352101   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:32.352163   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:32.385459   61989 cri.go:89] found id: ""
	I0924 01:08:32.385482   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.385490   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:32.385496   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:32.385544   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:32.421189   61989 cri.go:89] found id: ""
	I0924 01:08:32.421217   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.421227   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:32.421235   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:32.421307   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:32.464375   61989 cri.go:89] found id: ""
	I0924 01:08:32.464399   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.464406   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:32.464412   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:32.464457   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:32.512716   61989 cri.go:89] found id: ""
	I0924 01:08:32.512742   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.512753   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:32.512763   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:32.512788   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:32.598271   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:32.598293   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:32.598305   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:32.674197   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:32.674233   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:32.715065   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:32.715092   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:32.767522   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:32.767565   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:35.281678   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:35.296302   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:35.296390   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:35.336341   61989 cri.go:89] found id: ""
	I0924 01:08:35.336370   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.336381   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:35.336397   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:35.336454   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:35.373090   61989 cri.go:89] found id: ""
	I0924 01:08:35.373118   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.373127   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:35.373135   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:35.373201   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:35.413628   61989 cri.go:89] found id: ""
	I0924 01:08:35.413660   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.413668   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:35.413674   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:35.413720   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:35.446564   61989 cri.go:89] found id: ""
	I0924 01:08:35.446592   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.446603   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:35.446610   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:35.446669   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:35.478389   61989 cri.go:89] found id: ""
	I0924 01:08:35.478424   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.478435   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:35.478444   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:35.478515   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:35.513992   61989 cri.go:89] found id: ""
	I0924 01:08:35.514015   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.514023   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:35.514029   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:35.514085   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:35.556442   61989 cri.go:89] found id: ""
	I0924 01:08:35.556471   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.556481   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:35.556489   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:35.556571   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:35.594205   61989 cri.go:89] found id: ""
	I0924 01:08:35.594228   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.594236   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:35.594244   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:35.594254   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:35.637601   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:35.637640   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:35.691674   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:35.691711   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:35.705223   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:35.705261   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:35.784000   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:35.784021   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:35.784036   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:34.729064   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:37.227314   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:35.528382   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:38.028508   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:38.370232   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:38.383287   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:38.383358   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:38.417528   61989 cri.go:89] found id: ""
	I0924 01:08:38.417556   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.417564   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:38.417571   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:38.417619   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:38.459788   61989 cri.go:89] found id: ""
	I0924 01:08:38.459814   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.459821   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:38.459828   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:38.459883   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:38.494017   61989 cri.go:89] found id: ""
	I0924 01:08:38.494050   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.494059   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:38.494065   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:38.494135   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:38.526894   61989 cri.go:89] found id: ""
	I0924 01:08:38.526924   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.526935   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:38.526942   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:38.527000   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:38.563831   61989 cri.go:89] found id: ""
	I0924 01:08:38.563859   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.563876   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:38.563884   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:38.563950   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:38.596066   61989 cri.go:89] found id: ""
	I0924 01:08:38.596095   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.596106   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:38.596114   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:38.596172   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:38.630123   61989 cri.go:89] found id: ""
	I0924 01:08:38.630147   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.630157   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:38.630165   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:38.630223   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:38.664714   61989 cri.go:89] found id: ""
	I0924 01:08:38.664743   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.664754   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:38.664765   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:38.664782   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:38.718770   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:38.718802   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:38.732878   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:38.732906   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:38.806441   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:38.806469   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:38.806485   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:38.884416   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:38.884456   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:39.228048   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:41.228574   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:40.527354   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:42.528592   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:41.423782   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:41.436827   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:41.436899   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:41.468283   61989 cri.go:89] found id: ""
	I0924 01:08:41.468316   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.468342   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:41.468353   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:41.468412   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:41.504348   61989 cri.go:89] found id: ""
	I0924 01:08:41.504380   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.504402   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:41.504410   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:41.504470   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:41.544785   61989 cri.go:89] found id: ""
	I0924 01:08:41.544809   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.544818   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:41.544825   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:41.544883   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:41.582924   61989 cri.go:89] found id: ""
	I0924 01:08:41.582954   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.582965   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:41.582973   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:41.583037   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:41.618220   61989 cri.go:89] found id: ""
	I0924 01:08:41.618243   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.618253   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:41.618260   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:41.618329   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:41.653369   61989 cri.go:89] found id: ""
	I0924 01:08:41.653392   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.653400   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:41.653416   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:41.653477   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:41.687036   61989 cri.go:89] found id: ""
	I0924 01:08:41.687058   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.687069   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:41.687077   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:41.687133   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:41.720701   61989 cri.go:89] found id: ""
	I0924 01:08:41.720732   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.720744   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:41.720756   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:41.720776   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:41.798436   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:41.798486   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:41.842639   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:41.842674   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:41.893053   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:41.893086   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:41.907757   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:41.907784   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:41.973466   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:44.474071   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:44.487057   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:44.487119   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:44.521772   61989 cri.go:89] found id: ""
	I0924 01:08:44.521813   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.521835   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:44.521843   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:44.521905   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:44.554928   61989 cri.go:89] found id: ""
	I0924 01:08:44.554956   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.554966   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:44.554977   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:44.555042   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:44.594246   61989 cri.go:89] found id: ""
	I0924 01:08:44.594279   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.594292   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:44.594298   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:44.594344   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:44.629779   61989 cri.go:89] found id: ""
	I0924 01:08:44.629807   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.629819   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:44.629827   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:44.629884   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:44.671671   61989 cri.go:89] found id: ""
	I0924 01:08:44.671694   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.671701   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:44.671707   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:44.671772   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:44.710875   61989 cri.go:89] found id: ""
	I0924 01:08:44.710910   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.710922   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:44.710931   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:44.711000   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:44.744345   61989 cri.go:89] found id: ""
	I0924 01:08:44.744381   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.744389   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:44.744395   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:44.744442   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:44.780771   61989 cri.go:89] found id: ""
	I0924 01:08:44.780797   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.780804   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:44.780812   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:44.780824   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:44.834902   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:44.834958   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:44.848503   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:44.848540   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:44.923117   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:44.923138   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:44.923150   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:45.003806   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:45.003840   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:46.184585   61323 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.282824063s)
	I0924 01:08:46.184659   61323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 01:08:46.201715   61323 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 01:08:46.215637   61323 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 01:08:46.228701   61323 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 01:08:46.228726   61323 kubeadm.go:157] found existing configuration files:
	
	I0924 01:08:46.228769   61323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 01:08:46.239005   61323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 01:08:46.239065   61323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 01:08:46.250336   61323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 01:08:46.259889   61323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 01:08:46.259961   61323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 01:08:46.271773   61323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 01:08:46.283106   61323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 01:08:46.283175   61323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 01:08:46.293325   61323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 01:08:46.306026   61323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 01:08:46.306111   61323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 01:08:46.318859   61323 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 01:08:46.373819   61323 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0924 01:08:46.373973   61323 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 01:08:46.487006   61323 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 01:08:46.487146   61323 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 01:08:46.487299   61323 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0924 01:08:46.495557   61323 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 01:08:46.497537   61323 out.go:235]   - Generating certificates and keys ...
	I0924 01:08:46.497645   61323 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 01:08:46.497732   61323 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 01:08:46.497853   61323 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 01:08:46.497946   61323 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 01:08:46.498041   61323 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 01:08:46.498116   61323 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 01:08:46.498197   61323 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 01:08:46.498280   61323 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 01:08:46.498389   61323 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 01:08:46.498490   61323 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 01:08:46.498547   61323 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 01:08:46.498623   61323 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 01:08:46.714556   61323 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 01:08:46.815030   61323 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0924 01:08:47.011082   61323 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 01:08:47.227052   61323 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 01:08:47.488776   61323 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 01:08:47.489403   61323 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 01:08:47.491864   61323 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 01:08:43.728646   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:46.234412   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:45.029064   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:45.029109   61699 pod_ready.go:82] duration metric: took 4m0.007887151s for pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace to be "Ready" ...
	E0924 01:08:45.029124   61699 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0924 01:08:45.029133   61699 pod_ready.go:39] duration metric: took 4m5.860472412s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:08:45.029153   61699 api_server.go:52] waiting for apiserver process to appear ...
	I0924 01:08:45.029189   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:45.029267   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:45.084875   61699 cri.go:89] found id: "306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7"
	I0924 01:08:45.084899   61699 cri.go:89] found id: ""
	I0924 01:08:45.084907   61699 logs.go:276] 1 containers: [306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7]
	I0924 01:08:45.084955   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:45.089534   61699 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:45.089601   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:45.133457   61699 cri.go:89] found id: "2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2"
	I0924 01:08:45.133479   61699 cri.go:89] found id: ""
	I0924 01:08:45.133486   61699 logs.go:276] 1 containers: [2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2]
	I0924 01:08:45.133544   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:45.137523   61699 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:45.137586   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:45.173989   61699 cri.go:89] found id: "ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f"
	I0924 01:08:45.174014   61699 cri.go:89] found id: ""
	I0924 01:08:45.174023   61699 logs.go:276] 1 containers: [ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f]
	I0924 01:08:45.174083   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:45.178084   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:45.178168   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:45.215763   61699 cri.go:89] found id: "58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f"
	I0924 01:08:45.215790   61699 cri.go:89] found id: ""
	I0924 01:08:45.215799   61699 logs.go:276] 1 containers: [58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f]
	I0924 01:08:45.215851   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:45.220052   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:45.220137   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:45.258186   61699 cri.go:89] found id: "f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc"
	I0924 01:08:45.258206   61699 cri.go:89] found id: ""
	I0924 01:08:45.258213   61699 logs.go:276] 1 containers: [f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc]
	I0924 01:08:45.258272   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:45.262402   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:45.262481   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:45.299355   61699 cri.go:89] found id: "55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba"
	I0924 01:08:45.299385   61699 cri.go:89] found id: ""
	I0924 01:08:45.299397   61699 logs.go:276] 1 containers: [55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba]
	I0924 01:08:45.299452   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:45.303404   61699 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:45.303505   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:45.341412   61699 cri.go:89] found id: ""
	I0924 01:08:45.341438   61699 logs.go:276] 0 containers: []
	W0924 01:08:45.341446   61699 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:45.341452   61699 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0924 01:08:45.341508   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0924 01:08:45.377419   61699 cri.go:89] found id: "7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47"
	I0924 01:08:45.377450   61699 cri.go:89] found id: "e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559"
	I0924 01:08:45.377457   61699 cri.go:89] found id: ""
	I0924 01:08:45.377471   61699 logs.go:276] 2 containers: [7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47 e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559]
	I0924 01:08:45.377539   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:45.381497   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:45.385182   61699 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:45.385201   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:45.455618   61699 logs.go:123] Gathering logs for coredns [ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f] ...
	I0924 01:08:45.455661   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f"
	I0924 01:08:45.495007   61699 logs.go:123] Gathering logs for kube-proxy [f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc] ...
	I0924 01:08:45.495037   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc"
	I0924 01:08:45.530196   61699 logs.go:123] Gathering logs for kube-controller-manager [55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba] ...
	I0924 01:08:45.530230   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba"
	I0924 01:08:45.581671   61699 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:45.581709   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:46.122674   61699 logs.go:123] Gathering logs for container status ...
	I0924 01:08:46.122717   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:46.169928   61699 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:46.169965   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:46.184617   61699 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:46.184645   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 01:08:46.330482   61699 logs.go:123] Gathering logs for kube-apiserver [306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7] ...
	I0924 01:08:46.330512   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7"
	I0924 01:08:46.382927   61699 logs.go:123] Gathering logs for etcd [2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2] ...
	I0924 01:08:46.382965   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2"
	I0924 01:08:46.441408   61699 logs.go:123] Gathering logs for kube-scheduler [58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f] ...
	I0924 01:08:46.441442   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f"
	I0924 01:08:46.484956   61699 logs.go:123] Gathering logs for storage-provisioner [7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47] ...
	I0924 01:08:46.484985   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47"
	I0924 01:08:46.522559   61699 logs.go:123] Gathering logs for storage-provisioner [e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559] ...
	I0924 01:08:46.522595   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559"
	I0924 01:08:49.064954   61699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:49.086621   61699 api_server.go:72] duration metric: took 4m15.650065328s to wait for apiserver process to appear ...
	I0924 01:08:49.086648   61699 api_server.go:88] waiting for apiserver healthz status ...
	I0924 01:08:49.086695   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:49.086760   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:47.541843   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:47.555428   61989 kubeadm.go:597] duration metric: took 4m2.297219084s to restartPrimaryControlPlane
	W0924 01:08:47.555528   61989 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0924 01:08:47.555560   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0924 01:08:49.123410   61989 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.567825503s)
	I0924 01:08:49.123501   61989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 01:08:49.142686   61989 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 01:08:49.154484   61989 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 01:08:49.166734   61989 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 01:08:49.166759   61989 kubeadm.go:157] found existing configuration files:
	
	I0924 01:08:49.166813   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 01:08:49.178374   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 01:08:49.178517   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 01:08:49.188871   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 01:08:49.200190   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 01:08:49.200258   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 01:08:49.212895   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 01:08:49.225205   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 01:08:49.225276   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 01:08:49.237828   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 01:08:49.249686   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 01:08:49.249751   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 01:08:49.262505   61989 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 01:08:49.338624   61989 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0924 01:08:49.338712   61989 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 01:08:49.509271   61989 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 01:08:49.509489   61989 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 01:08:49.509636   61989 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0924 01:08:49.724434   61989 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 01:08:47.494323   61323 out.go:235]   - Booting up control plane ...
	I0924 01:08:47.494449   61323 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 01:08:47.494527   61323 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 01:08:47.494904   61323 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 01:08:47.511889   61323 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 01:08:47.518272   61323 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 01:08:47.518343   61323 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 01:08:47.654121   61323 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0924 01:08:47.654273   61323 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0924 01:08:48.156008   61323 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.075879ms
	I0924 01:08:48.156089   61323 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
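The two health endpoints kubeadm polls here can also be probed by hand on the control-plane node; the kubelet URL is the one printed above, and the API-server check assumes minikube's secure port 8443 is reachable on localhost and that anonymous access to /healthz is still enabled (the Kubernetes default):

  # kubelet health, polled by [kubelet-check]
  curl -sf http://127.0.0.1:10248/healthz; echo
  # API server health, polled by [api-check]
  curl -sk https://127.0.0.1:8443/healthz; echo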
	I0924 01:08:49.726458   61989 out.go:235]   - Generating certificates and keys ...
	I0924 01:08:49.726563   61989 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 01:08:49.726639   61989 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 01:08:49.726737   61989 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 01:08:49.726812   61989 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 01:08:49.727078   61989 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 01:08:49.727375   61989 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 01:08:49.728123   61989 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 01:08:49.729254   61989 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 01:08:49.730178   61989 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 01:08:49.732548   61989 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 01:08:49.732604   61989 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 01:08:49.732676   61989 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 01:08:49.938623   61989 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 01:08:50.774207   61989 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 01:08:51.022535   61989 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 01:08:51.148690   61989 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 01:08:51.168786   61989 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 01:08:51.170070   61989 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 01:08:51.170150   61989 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 01:08:51.342671   61989 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 01:08:48.729168   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:50.729197   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:52.729615   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:53.660805   61323 kubeadm.go:310] [api-check] The API server is healthy after 5.502700892s
	I0924 01:08:53.678006   61323 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0924 01:08:53.693676   61323 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0924 01:08:53.736910   61323 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0924 01:08:53.737186   61323 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-650507 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0924 01:08:53.750738   61323 kubeadm.go:310] [bootstrap-token] Using token: 62empn.zvptxpa69xtioeo1
	I0924 01:08:49.139835   61699 cri.go:89] found id: "306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7"
	I0924 01:08:49.139859   61699 cri.go:89] found id: ""
	I0924 01:08:49.139869   61699 logs.go:276] 1 containers: [306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7]
	I0924 01:08:49.139920   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:49.144770   61699 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:49.144896   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:49.193710   61699 cri.go:89] found id: "2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2"
	I0924 01:08:49.193733   61699 cri.go:89] found id: ""
	I0924 01:08:49.193743   61699 logs.go:276] 1 containers: [2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2]
	I0924 01:08:49.193798   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:49.198090   61699 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:49.198178   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:49.240236   61699 cri.go:89] found id: "ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f"
	I0924 01:08:49.240309   61699 cri.go:89] found id: ""
	I0924 01:08:49.240344   61699 logs.go:276] 1 containers: [ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f]
	I0924 01:08:49.240401   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:49.244573   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:49.244646   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:49.290954   61699 cri.go:89] found id: "58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f"
	I0924 01:08:49.290998   61699 cri.go:89] found id: ""
	I0924 01:08:49.291008   61699 logs.go:276] 1 containers: [58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f]
	I0924 01:08:49.291083   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:49.295602   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:49.295664   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:49.340871   61699 cri.go:89] found id: "f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc"
	I0924 01:08:49.340894   61699 cri.go:89] found id: ""
	I0924 01:08:49.340904   61699 logs.go:276] 1 containers: [f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc]
	I0924 01:08:49.340964   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:49.345362   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:49.345433   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:49.387382   61699 cri.go:89] found id: "55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba"
	I0924 01:08:49.387408   61699 cri.go:89] found id: ""
	I0924 01:08:49.387418   61699 logs.go:276] 1 containers: [55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba]
	I0924 01:08:49.387472   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:49.393388   61699 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:49.393468   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:49.436082   61699 cri.go:89] found id: ""
	I0924 01:08:49.436107   61699 logs.go:276] 0 containers: []
	W0924 01:08:49.436119   61699 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:49.436126   61699 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0924 01:08:49.436187   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0924 01:08:49.490172   61699 cri.go:89] found id: "7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47"
	I0924 01:08:49.490197   61699 cri.go:89] found id: "e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559"
	I0924 01:08:49.490203   61699 cri.go:89] found id: ""
	I0924 01:08:49.490213   61699 logs.go:276] 2 containers: [7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47 e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559]
	I0924 01:08:49.490273   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:49.495438   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:49.500506   61699 logs.go:123] Gathering logs for kube-apiserver [306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7] ...
	I0924 01:08:49.500537   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7"
	I0924 01:08:49.561240   61699 logs.go:123] Gathering logs for kube-proxy [f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc] ...
	I0924 01:08:49.561277   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc"
	I0924 01:08:49.611765   61699 logs.go:123] Gathering logs for kube-controller-manager [55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba] ...
	I0924 01:08:49.611807   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba"
	I0924 01:08:49.689366   61699 logs.go:123] Gathering logs for container status ...
	I0924 01:08:49.689413   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:49.747233   61699 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:49.747271   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:49.852723   61699 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:49.852771   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 01:08:50.006274   61699 logs.go:123] Gathering logs for etcd [2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2] ...
	I0924 01:08:50.006322   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2"
	I0924 01:08:50.064786   61699 logs.go:123] Gathering logs for coredns [ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f] ...
	I0924 01:08:50.064828   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f"
	I0924 01:08:50.104831   61699 logs.go:123] Gathering logs for kube-scheduler [58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f] ...
	I0924 01:08:50.104865   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f"
	I0924 01:08:50.144962   61699 logs.go:123] Gathering logs for storage-provisioner [7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47] ...
	I0924 01:08:50.144990   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47"
	I0924 01:08:50.183923   61699 logs.go:123] Gathering logs for storage-provisioner [e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559] ...
	I0924 01:08:50.183956   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559"
	I0924 01:08:50.222382   61699 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:50.222414   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:50.671849   61699 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:50.671890   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
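This log-gathering pass can be reproduced by hand on the node; the commands and flags below are the same ones minikube runs above, with the container ID taken from crictl ps:

  sudo crictl ps -a --quiet --name=kube-apiserver        # resolve the container ID
  sudo crictl logs --tail 400 <container-id>             # per-container logs
  sudo journalctl -u kubelet -n 400                      # kubelet unit log
  sudo journalctl -u crio -n 400                         # CRI-O unit log
  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400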
	I0924 01:08:53.187450   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:08:53.193094   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 200:
	ok
	I0924 01:08:53.194414   61699 api_server.go:141] control plane version: v1.31.1
	I0924 01:08:53.194439   61699 api_server.go:131] duration metric: took 4.107783011s to wait for apiserver health ...
	I0924 01:08:53.194449   61699 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 01:08:53.194479   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:53.194546   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:53.232560   61699 cri.go:89] found id: "306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7"
	I0924 01:08:53.232584   61699 cri.go:89] found id: ""
	I0924 01:08:53.232594   61699 logs.go:276] 1 containers: [306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7]
	I0924 01:08:53.232649   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:53.236611   61699 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:53.236671   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:53.278194   61699 cri.go:89] found id: "2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2"
	I0924 01:08:53.278223   61699 cri.go:89] found id: ""
	I0924 01:08:53.278233   61699 logs.go:276] 1 containers: [2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2]
	I0924 01:08:53.278291   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:53.283330   61699 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:53.283391   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:53.322371   61699 cri.go:89] found id: "ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f"
	I0924 01:08:53.322399   61699 cri.go:89] found id: ""
	I0924 01:08:53.322408   61699 logs.go:276] 1 containers: [ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f]
	I0924 01:08:53.322459   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:53.326794   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:53.326869   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:53.364035   61699 cri.go:89] found id: "58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f"
	I0924 01:08:53.364064   61699 cri.go:89] found id: ""
	I0924 01:08:53.364075   61699 logs.go:276] 1 containers: [58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f]
	I0924 01:08:53.364140   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:53.368065   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:53.368127   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:53.405651   61699 cri.go:89] found id: "f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc"
	I0924 01:08:53.405679   61699 cri.go:89] found id: ""
	I0924 01:08:53.405687   61699 logs.go:276] 1 containers: [f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc]
	I0924 01:08:53.405754   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:53.410451   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:53.410537   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:53.451079   61699 cri.go:89] found id: "55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba"
	I0924 01:08:53.451111   61699 cri.go:89] found id: ""
	I0924 01:08:53.451121   61699 logs.go:276] 1 containers: [55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba]
	I0924 01:08:53.451183   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:53.456272   61699 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:53.456367   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:53.497323   61699 cri.go:89] found id: ""
	I0924 01:08:53.497360   61699 logs.go:276] 0 containers: []
	W0924 01:08:53.497373   61699 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:53.497387   61699 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0924 01:08:53.497461   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0924 01:08:53.536017   61699 cri.go:89] found id: "7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47"
	I0924 01:08:53.536040   61699 cri.go:89] found id: "e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559"
	I0924 01:08:53.536046   61699 cri.go:89] found id: ""
	I0924 01:08:53.536055   61699 logs.go:276] 2 containers: [7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47 e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559]
	I0924 01:08:53.536114   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:53.542413   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:53.546559   61699 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:53.546592   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:53.560292   61699 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:53.560323   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 01:08:53.684820   61699 logs.go:123] Gathering logs for etcd [2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2] ...
	I0924 01:08:53.684849   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2"
	I0924 01:08:53.734483   61699 logs.go:123] Gathering logs for coredns [ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f] ...
	I0924 01:08:53.734519   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f"
	I0924 01:08:53.780676   61699 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:53.780705   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:53.855917   61699 logs.go:123] Gathering logs for kube-scheduler [58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f] ...
	I0924 01:08:53.855960   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f"
	I0924 01:08:53.906926   61699 logs.go:123] Gathering logs for kube-proxy [f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc] ...
	I0924 01:08:53.906962   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc"
	I0924 01:08:53.953992   61699 logs.go:123] Gathering logs for kube-controller-manager [55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba] ...
	I0924 01:08:53.954019   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba"
	I0924 01:08:54.031302   61699 logs.go:123] Gathering logs for storage-provisioner [7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47] ...
	I0924 01:08:54.031350   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47"
	I0924 01:08:54.073918   61699 logs.go:123] Gathering logs for storage-provisioner [e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559] ...
	I0924 01:08:54.073958   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559"
	I0924 01:08:54.108724   61699 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:54.108765   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:53.752460   61323 out.go:235]   - Configuring RBAC rules ...
	I0924 01:08:53.752626   61323 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0924 01:08:53.758889   61323 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0924 01:08:53.767101   61323 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0924 01:08:53.770943   61323 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0924 01:08:53.775335   61323 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0924 01:08:53.792963   61323 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0924 01:08:54.070193   61323 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0924 01:08:54.526226   61323 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0924 01:08:55.069668   61323 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0924 01:08:55.070678   61323 kubeadm.go:310] 
	I0924 01:08:55.070751   61323 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0924 01:08:55.070761   61323 kubeadm.go:310] 
	I0924 01:08:55.070844   61323 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0924 01:08:55.070860   61323 kubeadm.go:310] 
	I0924 01:08:55.070910   61323 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0924 01:08:55.070998   61323 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0924 01:08:55.071064   61323 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0924 01:08:55.071074   61323 kubeadm.go:310] 
	I0924 01:08:55.071138   61323 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0924 01:08:55.071159   61323 kubeadm.go:310] 
	I0924 01:08:55.071210   61323 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0924 01:08:55.071217   61323 kubeadm.go:310] 
	I0924 01:08:55.071298   61323 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0924 01:08:55.071428   61323 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0924 01:08:55.071525   61323 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0924 01:08:55.071535   61323 kubeadm.go:310] 
	I0924 01:08:55.071640   61323 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0924 01:08:55.071721   61323 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0924 01:08:55.071738   61323 kubeadm.go:310] 
	I0924 01:08:55.071815   61323 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 62empn.zvptxpa69xtioeo1 \
	I0924 01:08:55.071941   61323 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2d4dd9f37d73db6513005d0e16c5a35989d3b102eed8cac0bf67b360d3396aa8 \
	I0924 01:08:55.071971   61323 kubeadm.go:310] 	--control-plane 
	I0924 01:08:55.071984   61323 kubeadm.go:310] 
	I0924 01:08:55.072089   61323 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0924 01:08:55.072098   61323 kubeadm.go:310] 
	I0924 01:08:55.072198   61323 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 62empn.zvptxpa69xtioeo1 \
	I0924 01:08:55.072324   61323 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2d4dd9f37d73db6513005d0e16c5a35989d3b102eed8cac0bf67b360d3396aa8 
	I0924 01:08:55.073807   61323 kubeadm.go:310] W0924 01:08:46.350959    2551 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 01:08:55.074118   61323 kubeadm.go:310] W0924 01:08:46.352529    2551 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 01:08:55.074256   61323 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
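The two deprecation warnings above already name the fix; applied to the kubeadm config minikube generates, it is a single command (the file names are the placeholders from the warning text, not real paths):

  kubeadm config migrate --old-config old.yaml --new-config new.yaml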
	I0924 01:08:55.074295   61323 cni.go:84] Creating CNI manager for ""
	I0924 01:08:55.074312   61323 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:08:55.076241   61323 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 01:08:55.077630   61323 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 01:08:55.088658   61323 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
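The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is minikube's bridge CNI configuration; its contents are not shown in the log. For illustration only, a generic bridge-plugin conflist of the same shape can be written to a scratch path (every field value below is an assumption, not minikube's actual file):

  cat <<'EOF' > /tmp/bridge-example.conflist
  {
    "cniVersion": "0.3.1",
    "name": "bridge",
    "plugins": [
      {
        "type": "bridge",
        "bridge": "bridge",
        "isGateway": true,
        "ipMasq": true,
        "hairpinMode": true,
        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
      },
      { "type": "portmap", "capabilities": { "portMappings": true } }
    ]
  }
  EOF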
	I0924 01:08:55.106396   61323 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 01:08:55.106491   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:55.106579   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-650507 minikube.k8s.io/updated_at=2024_09_24T01_08_55_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c minikube.k8s.io/name=embed-certs-650507 minikube.k8s.io/primary=true
	I0924 01:08:55.138376   61323 ops.go:34] apiserver oom_adj: -16
	I0924 01:08:51.344458   61989 out.go:235]   - Booting up control plane ...
	I0924 01:08:51.344607   61989 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 01:08:51.353468   61989 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 01:08:51.356949   61989 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 01:08:51.358082   61989 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 01:08:51.364468   61989 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0924 01:08:54.501805   61699 logs.go:123] Gathering logs for container status ...
	I0924 01:08:54.501847   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:54.548768   61699 logs.go:123] Gathering logs for kube-apiserver [306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7] ...
	I0924 01:08:54.548800   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7"
	I0924 01:08:57.105661   61699 system_pods.go:59] 8 kube-system pods found
	I0924 01:08:57.105688   61699 system_pods.go:61] "coredns-7c65d6cfc9-xxdh2" [297fe292-94bf-468d-9e34-089c4a87429b] Running
	I0924 01:08:57.105693   61699 system_pods.go:61] "etcd-default-k8s-diff-port-465341" [3bd68a1c-e928-40f0-927f-3cde2198cace] Running
	I0924 01:08:57.105697   61699 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-465341" [0a195b76-82ba-4d99-b5a3-ba918ab0b83d] Running
	I0924 01:08:57.105703   61699 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-465341" [9d445611-60f3-4113-bc92-ea8df37ca2f5] Running
	I0924 01:08:57.105706   61699 system_pods.go:61] "kube-proxy-nf8mp" [cdef3aea-b1a8-438b-994f-c3212def9aea] Running
	I0924 01:08:57.105709   61699 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-465341" [4ff703b1-44cd-421a-891c-9f1e5d799026] Running
	I0924 01:08:57.105715   61699 system_pods.go:61] "metrics-server-6867b74b74-jtx6r" [d83599a7-f77d-4fbb-b76f-67d33c60b4a6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:08:57.105722   61699 system_pods.go:61] "storage-provisioner" [b09ad6ef-7517-4de2-a70c-83876efd804e] Running
	I0924 01:08:57.105729   61699 system_pods.go:74] duration metric: took 3.911274774s to wait for pod list to return data ...
	I0924 01:08:57.105736   61699 default_sa.go:34] waiting for default service account to be created ...
	I0924 01:08:57.108031   61699 default_sa.go:45] found service account: "default"
	I0924 01:08:57.108051   61699 default_sa.go:55] duration metric: took 2.307712ms for default service account to be created ...
	I0924 01:08:57.108059   61699 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 01:08:57.112551   61699 system_pods.go:86] 8 kube-system pods found
	I0924 01:08:57.112578   61699 system_pods.go:89] "coredns-7c65d6cfc9-xxdh2" [297fe292-94bf-468d-9e34-089c4a87429b] Running
	I0924 01:08:57.112584   61699 system_pods.go:89] "etcd-default-k8s-diff-port-465341" [3bd68a1c-e928-40f0-927f-3cde2198cace] Running
	I0924 01:08:57.112589   61699 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-465341" [0a195b76-82ba-4d99-b5a3-ba918ab0b83d] Running
	I0924 01:08:57.112593   61699 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-465341" [9d445611-60f3-4113-bc92-ea8df37ca2f5] Running
	I0924 01:08:57.112597   61699 system_pods.go:89] "kube-proxy-nf8mp" [cdef3aea-b1a8-438b-994f-c3212def9aea] Running
	I0924 01:08:57.112600   61699 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-465341" [4ff703b1-44cd-421a-891c-9f1e5d799026] Running
	I0924 01:08:57.112608   61699 system_pods.go:89] "metrics-server-6867b74b74-jtx6r" [d83599a7-f77d-4fbb-b76f-67d33c60b4a6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:08:57.112613   61699 system_pods.go:89] "storage-provisioner" [b09ad6ef-7517-4de2-a70c-83876efd804e] Running
	I0924 01:08:57.112619   61699 system_pods.go:126] duration metric: took 4.555185ms to wait for k8s-apps to be running ...
	I0924 01:08:57.112625   61699 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 01:08:57.112665   61699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 01:08:57.127805   61699 system_svc.go:56] duration metric: took 15.170368ms WaitForService to wait for kubelet
	I0924 01:08:57.127839   61699 kubeadm.go:582] duration metric: took 4m23.691287248s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 01:08:57.127865   61699 node_conditions.go:102] verifying NodePressure condition ...
	I0924 01:08:57.130964   61699 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 01:08:57.130994   61699 node_conditions.go:123] node cpu capacity is 2
	I0924 01:08:57.131008   61699 node_conditions.go:105] duration metric: took 3.13793ms to run NodePressure ...
	I0924 01:08:57.131021   61699 start.go:241] waiting for startup goroutines ...
	I0924 01:08:57.131029   61699 start.go:246] waiting for cluster config update ...
	I0924 01:08:57.131043   61699 start.go:255] writing updated cluster config ...
	I0924 01:08:57.131388   61699 ssh_runner.go:195] Run: rm -f paused
	I0924 01:08:57.182238   61699 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0924 01:08:57.185023   61699 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-465341" cluster and "default" namespace by default
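After the "Done!" line the profile is already the active kubectl context (minikube names the context after the profile), so the restarted cluster can be inspected with ordinary kubectl commands:

  kubectl config current-context        # expected: default-k8s-diff-port-465341
  kubectl get nodes -o wide
  kubectl -n kube-system get pods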
	I0924 01:08:55.229370   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:57.729448   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:55.285390   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:55.785813   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:56.285570   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:56.785779   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:57.285599   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:57.786401   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:58.285583   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:58.786037   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:59.286404   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:59.447075   61323 kubeadm.go:1113] duration metric: took 4.340646509s to wait for elevateKubeSystemPrivileges
	I0924 01:08:59.447119   61323 kubeadm.go:394] duration metric: took 4m57.777127509s to StartCluster
	I0924 01:08:59.447141   61323 settings.go:142] acquiring lock: {Name:mk2498060bfcc1414033a486e76a223de88ed325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:08:59.447229   61323 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 01:08:59.449766   61323 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/kubeconfig: {Name:mk3b29105ccc2e600682c419fcfd45ce5d22b5fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:08:59.450091   61323 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 01:08:59.450191   61323 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0924 01:08:59.450308   61323 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-650507"
	I0924 01:08:59.450330   61323 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-650507"
	W0924 01:08:59.450343   61323 addons.go:243] addon storage-provisioner should already be in state true
	I0924 01:08:59.450346   61323 addons.go:69] Setting metrics-server=true in profile "embed-certs-650507"
	I0924 01:08:59.450349   61323 addons.go:69] Setting default-storageclass=true in profile "embed-certs-650507"
	I0924 01:08:59.450366   61323 addons.go:234] Setting addon metrics-server=true in "embed-certs-650507"
	W0924 01:08:59.450374   61323 addons.go:243] addon metrics-server should already be in state true
	I0924 01:08:59.450328   61323 config.go:182] Loaded profile config "embed-certs-650507": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 01:08:59.450381   61323 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-650507"
	I0924 01:08:59.450404   61323 host.go:66] Checking if "embed-certs-650507" exists ...
	I0924 01:08:59.450375   61323 host.go:66] Checking if "embed-certs-650507" exists ...
	I0924 01:08:59.450718   61323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:08:59.450770   61323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:08:59.450805   61323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:08:59.450808   61323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:08:59.450895   61323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:08:59.450842   61323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:08:59.451862   61323 out.go:177] * Verifying Kubernetes components...
	I0924 01:08:59.453214   61323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:08:59.471878   61323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34433
	I0924 01:08:59.472083   61323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46551
	I0924 01:08:59.472239   61323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38089
	I0924 01:08:59.472586   61323 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:08:59.472704   61323 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:08:59.472988   61323 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:08:59.473187   61323 main.go:141] libmachine: Using API Version  1
	I0924 01:08:59.473205   61323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:08:59.473226   61323 main.go:141] libmachine: Using API Version  1
	I0924 01:08:59.473242   61323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:08:59.473418   61323 main.go:141] libmachine: Using API Version  1
	I0924 01:08:59.473433   61323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:08:59.473784   61323 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:08:59.473784   61323 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:08:59.474003   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetState
	I0924 01:08:59.474116   61323 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:08:59.474383   61323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:08:59.474422   61323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:08:59.474591   61323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:08:59.474628   61323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:08:59.478726   61323 addons.go:234] Setting addon default-storageclass=true in "embed-certs-650507"
	W0924 01:08:59.478753   61323 addons.go:243] addon default-storageclass should already be in state true
	I0924 01:08:59.478784   61323 host.go:66] Checking if "embed-certs-650507" exists ...
	I0924 01:08:59.479137   61323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:08:59.479186   61323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:08:59.495021   61323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43089
	I0924 01:08:59.495527   61323 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:08:59.496068   61323 main.go:141] libmachine: Using API Version  1
	I0924 01:08:59.496090   61323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:08:59.496519   61323 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:08:59.496734   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetState
	I0924 01:08:59.498472   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:08:59.498528   61323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39135
	I0924 01:08:59.498971   61323 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:08:59.499485   61323 main.go:141] libmachine: Using API Version  1
	I0924 01:08:59.499498   61323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:08:59.499794   61323 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:08:59.499918   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetState
	I0924 01:08:59.500899   61323 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0924 01:08:59.501731   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:08:59.502154   61323 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0924 01:08:59.502172   61323 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0924 01:08:59.502186   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:08:59.503238   61323 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:08:59.504765   61323 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 01:08:59.504783   61323 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 01:08:59.504801   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:08:59.505483   61323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34577
	I0924 01:08:59.505882   61323 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:08:59.506386   61323 main.go:141] libmachine: Using API Version  1
	I0924 01:08:59.506408   61323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:08:59.506841   61323 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:08:59.507463   61323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:08:59.507505   61323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:08:59.511098   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:08:59.511611   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:08:59.511645   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:08:59.511944   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:08:59.512127   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:08:59.512296   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:08:59.512493   61323 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa Username:docker}
	I0924 01:08:59.514595   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:08:59.515144   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:08:59.515173   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:08:59.515481   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:08:59.515749   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:08:59.515963   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:08:59.516100   61323 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa Username:docker}
	I0924 01:08:59.529920   61323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36769
	I0924 01:08:59.530565   61323 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:08:59.531102   61323 main.go:141] libmachine: Using API Version  1
	I0924 01:08:59.531125   61323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:08:59.531629   61323 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:08:59.531918   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetState
	I0924 01:08:59.533741   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:08:59.533992   61323 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 01:08:59.534007   61323 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 01:08:59.534026   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:08:59.537032   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:08:59.537488   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:08:59.537515   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:08:59.537713   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:08:59.537919   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:08:59.538074   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:08:59.538198   61323 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa Username:docker}
	I0924 01:08:59.680683   61323 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 01:08:59.711414   61323 node_ready.go:35] waiting up to 6m0s for node "embed-certs-650507" to be "Ready" ...
	I0924 01:08:59.721234   61323 node_ready.go:49] node "embed-certs-650507" has status "Ready":"True"
	I0924 01:08:59.721264   61323 node_ready.go:38] duration metric: took 9.820004ms for node "embed-certs-650507" to be "Ready" ...
	I0924 01:08:59.721275   61323 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:08:59.736353   61323 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7295k" in "kube-system" namespace to be "Ready" ...
	I0924 01:08:59.831004   61323 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0924 01:08:59.831041   61323 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0924 01:08:59.871681   61323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 01:08:59.873844   61323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 01:08:59.902662   61323 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0924 01:08:59.902691   61323 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0924 01:08:59.956425   61323 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 01:08:59.956454   61323 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0924 01:08:59.997902   61323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 01:09:01.146340   61323 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.27245536s)
	I0924 01:09:01.146470   61323 main.go:141] libmachine: Making call to close driver server
	I0924 01:09:01.146505   61323 main.go:141] libmachine: (embed-certs-650507) Calling .Close
	I0924 01:09:01.146403   61323 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.274685832s)
	I0924 01:09:01.146582   61323 main.go:141] libmachine: Making call to close driver server
	I0924 01:09:01.146602   61323 main.go:141] libmachine: (embed-certs-650507) Calling .Close
	I0924 01:09:01.146819   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Closing plugin on server side
	I0924 01:09:01.146848   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Closing plugin on server side
	I0924 01:09:01.146868   61323 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:09:01.146875   61323 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:09:01.146882   61323 main.go:141] libmachine: Making call to close driver server
	I0924 01:09:01.146892   61323 main.go:141] libmachine: (embed-certs-650507) Calling .Close
	I0924 01:09:01.146967   61323 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:09:01.146990   61323 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:09:01.147007   61323 main.go:141] libmachine: Making call to close driver server
	I0924 01:09:01.147023   61323 main.go:141] libmachine: (embed-certs-650507) Calling .Close
	I0924 01:09:01.147084   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Closing plugin on server side
	I0924 01:09:01.147117   61323 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:09:01.147133   61323 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:09:01.147370   61323 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:09:01.147392   61323 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:09:01.147378   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Closing plugin on server side
	I0924 01:09:01.180574   61323 main.go:141] libmachine: Making call to close driver server
	I0924 01:09:01.180604   61323 main.go:141] libmachine: (embed-certs-650507) Calling .Close
	I0924 01:09:01.180929   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Closing plugin on server side
	I0924 01:09:01.180977   61323 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:09:01.180986   61323 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:09:01.207538   61323 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.209569759s)
	I0924 01:09:01.207600   61323 main.go:141] libmachine: Making call to close driver server
	I0924 01:09:01.207616   61323 main.go:141] libmachine: (embed-certs-650507) Calling .Close
	I0924 01:09:01.207959   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Closing plugin on server side
	I0924 01:09:01.208002   61323 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:09:01.208011   61323 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:09:01.208019   61323 main.go:141] libmachine: Making call to close driver server
	I0924 01:09:01.208028   61323 main.go:141] libmachine: (embed-certs-650507) Calling .Close
	I0924 01:09:01.208377   61323 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:09:01.208392   61323 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:09:01.208402   61323 addons.go:475] Verifying addon metrics-server=true in "embed-certs-650507"
	I0924 01:09:01.208411   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Closing plugin on server side
	I0924 01:09:01.210500   61323 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0924 01:08:59.731184   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:02.229737   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:01.211900   61323 addons.go:510] duration metric: took 1.761718139s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
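Once the metrics-server addon enabled above settles, it can be verified with standard kubectl queries; the label selector and APIService name below are the upstream metrics-server defaults, which minikube's manifests are assumed to follow:

  kubectl -n kube-system get pods -l k8s-app=metrics-server
  kubectl get apiservice v1beta1.metrics.k8s.io
  kubectl top nodes        # works only once the APIService reports Available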
	I0924 01:09:01.751463   61323 pod_ready.go:103] pod "coredns-7c65d6cfc9-7295k" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:04.242260   61323 pod_ready.go:103] pod "coredns-7c65d6cfc9-7295k" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:04.728708   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:06.728878   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:06.243002   61323 pod_ready.go:93] pod "coredns-7c65d6cfc9-7295k" in "kube-system" namespace has status "Ready":"True"
	I0924 01:09:06.243030   61323 pod_ready.go:82] duration metric: took 6.506649267s for pod "coredns-7c65d6cfc9-7295k" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:06.243039   61323 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-r6tcj" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:08.249949   61323 pod_ready.go:103] pod "coredns-7c65d6cfc9-r6tcj" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:09.750009   61323 pod_ready.go:93] pod "coredns-7c65d6cfc9-r6tcj" in "kube-system" namespace has status "Ready":"True"
	I0924 01:09:09.750037   61323 pod_ready.go:82] duration metric: took 3.506990291s for pod "coredns-7c65d6cfc9-r6tcj" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:09.750049   61323 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:09.756600   61323 pod_ready.go:93] pod "etcd-embed-certs-650507" in "kube-system" namespace has status "Ready":"True"
	I0924 01:09:09.756626   61323 pod_ready.go:82] duration metric: took 6.570047ms for pod "etcd-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:09.756635   61323 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:09.762209   61323 pod_ready.go:93] pod "kube-apiserver-embed-certs-650507" in "kube-system" namespace has status "Ready":"True"
	I0924 01:09:09.762235   61323 pod_ready.go:82] duration metric: took 5.593257ms for pod "kube-apiserver-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:09.762248   61323 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:09.772052   61323 pod_ready.go:93] pod "kube-controller-manager-embed-certs-650507" in "kube-system" namespace has status "Ready":"True"
	I0924 01:09:09.772075   61323 pod_ready.go:82] duration metric: took 9.818627ms for pod "kube-controller-manager-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:09.772088   61323 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mwtkg" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:09.777733   61323 pod_ready.go:93] pod "kube-proxy-mwtkg" in "kube-system" namespace has status "Ready":"True"
	I0924 01:09:09.777765   61323 pod_ready.go:82] duration metric: took 5.669609ms for pod "kube-proxy-mwtkg" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:09.777778   61323 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:10.146804   61323 pod_ready.go:93] pod "kube-scheduler-embed-certs-650507" in "kube-system" namespace has status "Ready":"True"
	I0924 01:09:10.146833   61323 pod_ready.go:82] duration metric: took 369.046066ms for pod "kube-scheduler-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:10.146844   61323 pod_ready.go:39] duration metric: took 10.425557831s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:09:10.146861   61323 api_server.go:52] waiting for apiserver process to appear ...
	I0924 01:09:10.146918   61323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:09:10.162335   61323 api_server.go:72] duration metric: took 10.712204486s to wait for apiserver process to appear ...
	I0924 01:09:10.162360   61323 api_server.go:88] waiting for apiserver healthz status ...
	I0924 01:09:10.162381   61323 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0924 01:09:10.166693   61323 api_server.go:279] https://192.168.39.104:8443/healthz returned 200:
	ok
	I0924 01:09:10.167700   61323 api_server.go:141] control plane version: v1.31.1
	I0924 01:09:10.167723   61323 api_server.go:131] duration metric: took 5.355716ms to wait for apiserver health ...
	I0924 01:09:10.167734   61323 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 01:09:10.351584   61323 system_pods.go:59] 9 kube-system pods found
	I0924 01:09:10.351621   61323 system_pods.go:61] "coredns-7c65d6cfc9-7295k" [3261d435-8cb5-4712-8459-26ba766e88e0] Running
	I0924 01:09:10.351629   61323 system_pods.go:61] "coredns-7c65d6cfc9-r6tcj" [df80e9b5-4b43-4b8f-992e-8813ceca39fe] Running
	I0924 01:09:10.351634   61323 system_pods.go:61] "etcd-embed-certs-650507" [1d21c395-ebec-4895-a1b6-11e35c799698] Running
	I0924 01:09:10.351640   61323 system_pods.go:61] "kube-apiserver-embed-certs-650507" [f7f13b75-3ed1-4e04-857f-27e71258ffd7] Running
	I0924 01:09:10.351645   61323 system_pods.go:61] "kube-controller-manager-embed-certs-650507" [4e68c892-06b6-49f1-adab-25c569f95a9a] Running
	I0924 01:09:10.351650   61323 system_pods.go:61] "kube-proxy-mwtkg" [6a893121-8161-4fbc-bb59-1e08483e82b8] Running
	I0924 01:09:10.351655   61323 system_pods.go:61] "kube-scheduler-embed-certs-650507" [bacd126d-7f4f-460b-85c5-17721247d5a5] Running
	I0924 01:09:10.351669   61323 system_pods.go:61] "metrics-server-6867b74b74-lbm9h" [fa504c09-2e16-4a5f-b4b3-a47f0733333d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:09:10.351678   61323 system_pods.go:61] "storage-provisioner" [364a4d4a-7316-48d0-a3e1-1dedff564dfb] Running
	I0924 01:09:10.351692   61323 system_pods.go:74] duration metric: took 183.950994ms to wait for pod list to return data ...
	I0924 01:09:10.351704   61323 default_sa.go:34] waiting for default service account to be created ...
	I0924 01:09:10.547564   61323 default_sa.go:45] found service account: "default"
	I0924 01:09:10.547595   61323 default_sa.go:55] duration metric: took 195.882659ms for default service account to be created ...
	I0924 01:09:10.547605   61323 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 01:09:10.750290   61323 system_pods.go:86] 9 kube-system pods found
	I0924 01:09:10.750327   61323 system_pods.go:89] "coredns-7c65d6cfc9-7295k" [3261d435-8cb5-4712-8459-26ba766e88e0] Running
	I0924 01:09:10.750336   61323 system_pods.go:89] "coredns-7c65d6cfc9-r6tcj" [df80e9b5-4b43-4b8f-992e-8813ceca39fe] Running
	I0924 01:09:10.750344   61323 system_pods.go:89] "etcd-embed-certs-650507" [1d21c395-ebec-4895-a1b6-11e35c799698] Running
	I0924 01:09:10.750352   61323 system_pods.go:89] "kube-apiserver-embed-certs-650507" [f7f13b75-3ed1-4e04-857f-27e71258ffd7] Running
	I0924 01:09:10.750359   61323 system_pods.go:89] "kube-controller-manager-embed-certs-650507" [4e68c892-06b6-49f1-adab-25c569f95a9a] Running
	I0924 01:09:10.750366   61323 system_pods.go:89] "kube-proxy-mwtkg" [6a893121-8161-4fbc-bb59-1e08483e82b8] Running
	I0924 01:09:10.750372   61323 system_pods.go:89] "kube-scheduler-embed-certs-650507" [bacd126d-7f4f-460b-85c5-17721247d5a5] Running
	I0924 01:09:10.750382   61323 system_pods.go:89] "metrics-server-6867b74b74-lbm9h" [fa504c09-2e16-4a5f-b4b3-a47f0733333d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:09:10.750391   61323 system_pods.go:89] "storage-provisioner" [364a4d4a-7316-48d0-a3e1-1dedff564dfb] Running
	I0924 01:09:10.750407   61323 system_pods.go:126] duration metric: took 202.795975ms to wait for k8s-apps to be running ...
	I0924 01:09:10.750416   61323 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 01:09:10.750476   61323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 01:09:10.765539   61323 system_svc.go:56] duration metric: took 15.112281ms WaitForService to wait for kubelet
	I0924 01:09:10.765569   61323 kubeadm.go:582] duration metric: took 11.31544538s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 01:09:10.765588   61323 node_conditions.go:102] verifying NodePressure condition ...
	I0924 01:09:10.947628   61323 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 01:09:10.947654   61323 node_conditions.go:123] node cpu capacity is 2
	I0924 01:09:10.947664   61323 node_conditions.go:105] duration metric: took 182.072269ms to run NodePressure ...
	I0924 01:09:10.947674   61323 start.go:241] waiting for startup goroutines ...
	I0924 01:09:10.947681   61323 start.go:246] waiting for cluster config update ...
	I0924 01:09:10.947691   61323 start.go:255] writing updated cluster config ...
	I0924 01:09:10.947955   61323 ssh_runner.go:195] Run: rm -f paused
	I0924 01:09:10.999208   61323 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0924 01:09:11.001392   61323 out.go:177] * Done! kubectl is now configured to use "embed-certs-650507" cluster and "default" namespace by default
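
The embed-certs startup above only prints "Done!" after two readiness gates pass: the apiserver answers 200 on /healthz (the api_server.go lines) and every kube-system pod reports Ready. Below is a minimal standalone Go sketch of that healthz probe, reusing the endpoint shown in the log; skipping TLS verification is an illustrative shortcut for the sketch, not how minikube authenticates to the cluster.

    // healthz_probe.go - a minimal sketch of the /healthz wait seen above.
    // The endpoint https://192.168.39.104:8443/healthz comes from the log;
    // InsecureSkipVerify is used only to keep this example self-contained.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.39.104:8443/healthz")
        if err != nil {
            fmt.Println("healthz check failed:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%s returned %d: %s\n", resp.Request.URL, resp.StatusCode, body)
    }

Run against a healthy cluster, this prints the same "returned 200: ok" that the log records before the system-pods check begins.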
	I0924 01:09:08.729391   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:11.229036   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:13.727852   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:16.229362   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:18.727643   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:20.729183   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:22.731323   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:25.228514   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:27.728747   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:29.729150   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:32.228197   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:31.365725   61989 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0924 01:09:31.366444   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:09:31.366704   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:09:34.729441   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:37.228766   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:36.367209   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:09:36.367654   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:09:39.728035   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:41.729148   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:43.729240   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:46.228006   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:48.228134   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:46.367945   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:09:46.368128   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:09:50.228455   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:52.228646   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:54.229158   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:56.727712   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:58.728522   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:10:00.728964   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:10:02.729909   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:10:05.227781   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:10:07.228729   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:10:06.368912   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:10:06.369182   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:10:09.728977   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:10:10.222284   61070 pod_ready.go:82] duration metric: took 4m0.000274516s for pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace to be "Ready" ...
	E0924 01:10:10.222354   61070 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace to be "Ready" (will not retry!)
	I0924 01:10:10.222381   61070 pod_ready.go:39] duration metric: took 4m12.043944079s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:10:10.222412   61070 kubeadm.go:597] duration metric: took 4m56.454037737s to restartPrimaryControlPlane
	W0924 01:10:10.222488   61070 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0924 01:10:10.222536   61070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0924 01:10:36.533302   61070 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.310734731s)
	I0924 01:10:36.533377   61070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 01:10:36.556961   61070 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 01:10:36.568298   61070 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 01:10:36.584098   61070 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 01:10:36.584121   61070 kubeadm.go:157] found existing configuration files:
	
	I0924 01:10:36.584178   61070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 01:10:36.594153   61070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 01:10:36.594218   61070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 01:10:36.612646   61070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 01:10:36.626433   61070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 01:10:36.626506   61070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 01:10:36.636161   61070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 01:10:36.654017   61070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 01:10:36.654075   61070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 01:10:36.663760   61070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 01:10:36.673737   61070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 01:10:36.673799   61070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
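
Before re-running kubeadm init, the log above walks the four kubeconfig files under /etc/kubernetes and deletes any that do not already reference https://control-plane.minikube.internal:8443 (minikube does this by shelling out to grep and rm over SSH, as the ssh_runner lines show). A simplified Go sketch of the same cleanup pattern, with the file list and endpoint taken from the log and error handling reduced for brevity:

    // Sketch of the stale-kubeconfig cleanup pattern in the log (kubeadm.go:157-163):
    // keep a file only if it already points at the control-plane endpoint,
    // otherwise remove it before "kubeadm init" runs.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), endpoint) {
                fmt.Printf("%s missing or stale, removing\n", f)
                _ = os.Remove(f) // missing files are fine; mirrors "rm -f"
                continue
            }
            fmt.Printf("%s already points at %s, keeping\n", f, endpoint)
        }
    }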
	I0924 01:10:36.684005   61070 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 01:10:36.731568   61070 kubeadm.go:310] W0924 01:10:36.713557    3094 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 01:10:36.733592   61070 kubeadm.go:310] W0924 01:10:36.715675    3094 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 01:10:36.850767   61070 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 01:10:45.349295   61070 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0924 01:10:45.349386   61070 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 01:10:45.349486   61070 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 01:10:45.349600   61070 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 01:10:45.349733   61070 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0924 01:10:45.349836   61070 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 01:10:45.351746   61070 out.go:235]   - Generating certificates and keys ...
	I0924 01:10:45.351843   61070 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 01:10:45.351939   61070 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 01:10:45.352055   61070 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 01:10:45.352160   61070 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 01:10:45.352228   61070 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 01:10:45.352297   61070 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 01:10:45.352392   61070 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 01:10:45.352477   61070 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 01:10:45.352551   61070 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 01:10:45.352664   61070 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 01:10:45.352734   61070 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 01:10:45.352904   61070 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 01:10:45.352956   61070 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 01:10:45.353038   61070 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0924 01:10:45.353127   61070 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 01:10:45.353209   61070 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 01:10:45.353300   61070 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 01:10:45.353372   61070 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 01:10:45.353446   61070 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 01:10:45.354948   61070 out.go:235]   - Booting up control plane ...
	I0924 01:10:45.355023   61070 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 01:10:45.355090   61070 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 01:10:45.355144   61070 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 01:10:45.355226   61070 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 01:10:45.355310   61070 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 01:10:45.355356   61070 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 01:10:45.355476   61070 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0924 01:10:45.355585   61070 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0924 01:10:45.355658   61070 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001537437s
	I0924 01:10:45.355728   61070 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0924 01:10:45.355807   61070 kubeadm.go:310] [api-check] The API server is healthy after 5.002387582s
	I0924 01:10:45.355955   61070 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0924 01:10:45.356129   61070 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0924 01:10:45.356230   61070 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0924 01:10:45.356516   61070 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-674057 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0924 01:10:45.356571   61070 kubeadm.go:310] [bootstrap-token] Using token: g2v97n.iz49hjb4wh5cfkiq
	I0924 01:10:45.358203   61070 out.go:235]   - Configuring RBAC rules ...
	I0924 01:10:45.358333   61070 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0924 01:10:45.358439   61070 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0924 01:10:45.358562   61070 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0924 01:10:45.358667   61070 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0924 01:10:45.358773   61070 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0924 01:10:45.358851   61070 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0924 01:10:45.358997   61070 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0924 01:10:45.359059   61070 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0924 01:10:45.359101   61070 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0924 01:10:45.359111   61070 kubeadm.go:310] 
	I0924 01:10:45.359164   61070 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0924 01:10:45.359171   61070 kubeadm.go:310] 
	I0924 01:10:45.359263   61070 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0924 01:10:45.359280   61070 kubeadm.go:310] 
	I0924 01:10:45.359309   61070 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0924 01:10:45.359387   61070 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0924 01:10:45.359458   61070 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0924 01:10:45.359471   61070 kubeadm.go:310] 
	I0924 01:10:45.359559   61070 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0924 01:10:45.359568   61070 kubeadm.go:310] 
	I0924 01:10:45.359613   61070 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0924 01:10:45.359619   61070 kubeadm.go:310] 
	I0924 01:10:45.359665   61070 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0924 01:10:45.359728   61070 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0924 01:10:45.359800   61070 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0924 01:10:45.359813   61070 kubeadm.go:310] 
	I0924 01:10:45.359879   61070 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0924 01:10:45.359978   61070 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0924 01:10:45.359996   61070 kubeadm.go:310] 
	I0924 01:10:45.360101   61070 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token g2v97n.iz49hjb4wh5cfkiq \
	I0924 01:10:45.360218   61070 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2d4dd9f37d73db6513005d0e16c5a35989d3b102eed8cac0bf67b360d3396aa8 \
	I0924 01:10:45.360251   61070 kubeadm.go:310] 	--control-plane 
	I0924 01:10:45.360258   61070 kubeadm.go:310] 
	I0924 01:10:45.360453   61070 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0924 01:10:45.360481   61070 kubeadm.go:310] 
	I0924 01:10:45.360595   61070 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token g2v97n.iz49hjb4wh5cfkiq \
	I0924 01:10:45.360693   61070 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2d4dd9f37d73db6513005d0e16c5a35989d3b102eed8cac0bf67b360d3396aa8 
	I0924 01:10:45.360706   61070 cni.go:84] Creating CNI manager for ""
	I0924 01:10:45.360713   61070 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:10:45.362153   61070 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 01:10:46.371109   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:10:46.371309   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:10:46.371318   61989 kubeadm.go:310] 
	I0924 01:10:46.371352   61989 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0924 01:10:46.371455   61989 kubeadm.go:310] 		timed out waiting for the condition
	I0924 01:10:46.371490   61989 kubeadm.go:310] 
	I0924 01:10:46.371546   61989 kubeadm.go:310] 	This error is likely caused by:
	I0924 01:10:46.371592   61989 kubeadm.go:310] 		- The kubelet is not running
	I0924 01:10:46.371734   61989 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0924 01:10:46.371751   61989 kubeadm.go:310] 
	I0924 01:10:46.371888   61989 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0924 01:10:46.371936   61989 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0924 01:10:46.371978   61989 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0924 01:10:46.371988   61989 kubeadm.go:310] 
	I0924 01:10:46.372124   61989 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0924 01:10:46.372253   61989 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0924 01:10:46.372262   61989 kubeadm.go:310] 
	I0924 01:10:46.372442   61989 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0924 01:10:46.372578   61989 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0924 01:10:46.372680   61989 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0924 01:10:46.372756   61989 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0924 01:10:46.372765   61989 kubeadm.go:310] 
	I0924 01:10:46.373578   61989 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 01:10:46.373675   61989 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0924 01:10:46.373790   61989 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0924 01:10:46.373938   61989 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
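
The failed v1.20.0 init above is dominated by the [kubelet-check] loop: kubeadm keeps issuing the equivalent of curl -sSL http://localhost:10248/healthz and, after the 40s initial timeout and about four minutes of connection-refused errors, gives up with "timed out waiting for the condition". A small Go sketch of that style of health poll; the interval and deadline here are illustrative, not kubeadm's exact values:

    // Sketch of the kubelet health poll described by the [kubelet-check] lines:
    // GET http://localhost:10248/healthz, retried until it answers or a deadline passes.
    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := http.Get("http://localhost:10248/healthz")
            if err == nil && resp.StatusCode == http.StatusOK {
                resp.Body.Close()
                fmt.Println("kubelet is healthy")
                return
            }
            if err != nil {
                fmt.Println("kubelet not ready yet:", err) // connection refused while kubelet is down
            } else {
                resp.Body.Close()
                fmt.Println("kubelet returned status", resp.StatusCode)
            }
            time.Sleep(5 * time.Second)
        }
        fmt.Println("timed out waiting for the kubelet healthz endpoint")
    }

On this node the poll never succeeds because the kubelet itself is not running, which is why the next step in the log is a "kubeadm reset" followed by a fresh init attempt.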
	I0924 01:10:46.373987   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0924 01:10:46.834432   61989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 01:10:46.851214   61989 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 01:10:46.862648   61989 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 01:10:46.862675   61989 kubeadm.go:157] found existing configuration files:
	
	I0924 01:10:46.862733   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 01:10:46.873005   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 01:10:46.873073   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 01:10:46.884007   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 01:10:46.893944   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 01:10:46.894016   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 01:10:46.905036   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 01:10:46.914953   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 01:10:46.915024   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 01:10:46.924881   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 01:10:46.935132   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 01:10:46.935192   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 01:10:46.945837   61989 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 01:10:47.018713   61989 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0924 01:10:47.018861   61989 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 01:10:47.159920   61989 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 01:10:47.160042   61989 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 01:10:47.160168   61989 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0924 01:10:47.349360   61989 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 01:10:47.351645   61989 out.go:235]   - Generating certificates and keys ...
	I0924 01:10:47.351763   61989 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 01:10:47.351918   61989 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 01:10:47.352040   61989 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 01:10:47.352118   61989 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 01:10:47.352205   61989 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 01:10:47.352298   61989 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 01:10:47.352401   61989 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 01:10:47.352481   61989 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 01:10:47.352574   61989 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 01:10:47.352662   61989 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 01:10:47.352705   61989 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 01:10:47.352767   61989 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 01:10:47.467301   61989 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 01:10:47.622085   61989 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 01:10:47.726807   61989 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 01:10:47.951249   61989 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 01:10:47.973392   61989 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 01:10:47.974396   61989 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 01:10:47.974440   61989 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 01:10:48.127629   61989 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 01:10:45.363348   61070 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 01:10:45.374505   61070 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0924 01:10:45.391838   61070 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 01:10:45.391947   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:45.391999   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-674057 minikube.k8s.io/updated_at=2024_09_24T01_10_45_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c minikube.k8s.io/name=no-preload-674057 minikube.k8s.io/primary=true
	I0924 01:10:45.583482   61070 ops.go:34] apiserver oom_adj: -16
	I0924 01:10:45.583498   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:46.083831   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:46.583990   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:47.084184   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:47.583925   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:48.083775   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:48.583654   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:49.084305   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:49.584636   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:50.084620   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:50.226320   61070 kubeadm.go:1113] duration metric: took 4.834429832s to wait for elevateKubeSystemPrivileges
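
The elevateKubeSystemPrivileges step above is a simple retry loop: the same "kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig" command is re-run roughly every half second until the default service account exists (about 4.8s here). A sketch of that loop, reusing the kubectl and kubeconfig paths from the log; the two-minute deadline is an illustrative assumption, and the command is run locally for simplicity where minikube runs it with sudo over SSH on the node:

    // Sketch of the "wait for the default service account" poll seen above.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        kubectl := "/var/lib/minikube/binaries/v1.31.1/kubectl"
        args := []string{"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig"}
        deadline := time.Now().Add(2 * time.Minute) // illustrative deadline
        for time.Now().Before(deadline) {
            if out, err := exec.Command(kubectl, args...).CombinedOutput(); err == nil {
                fmt.Printf("default service account exists:\n%s", out)
                return
            }
            time.Sleep(500 * time.Millisecond) // matches the ~500ms spacing in the log
        }
        fmt.Println("timed out waiting for the default service account")
    }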
	I0924 01:10:50.226363   61070 kubeadm.go:394] duration metric: took 5m36.514145334s to StartCluster
	I0924 01:10:50.226386   61070 settings.go:142] acquiring lock: {Name:mk2498060bfcc1414033a486e76a223de88ed325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:10:50.226480   61070 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 01:10:50.229196   61070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/kubeconfig: {Name:mk3b29105ccc2e600682c419fcfd45ce5d22b5fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:10:50.229530   61070 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.161 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 01:10:50.229600   61070 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0924 01:10:50.229703   61070 addons.go:69] Setting storage-provisioner=true in profile "no-preload-674057"
	I0924 01:10:50.229725   61070 addons.go:234] Setting addon storage-provisioner=true in "no-preload-674057"
	W0924 01:10:50.229733   61070 addons.go:243] addon storage-provisioner should already be in state true
	I0924 01:10:50.229735   61070 addons.go:69] Setting default-storageclass=true in profile "no-preload-674057"
	I0924 01:10:50.229756   61070 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-674057"
	I0924 01:10:50.229764   61070 host.go:66] Checking if "no-preload-674057" exists ...
	I0924 01:10:50.229789   61070 config.go:182] Loaded profile config "no-preload-674057": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 01:10:50.229781   61070 addons.go:69] Setting metrics-server=true in profile "no-preload-674057"
	I0924 01:10:50.229847   61070 addons.go:234] Setting addon metrics-server=true in "no-preload-674057"
	W0924 01:10:50.229855   61070 addons.go:243] addon metrics-server should already be in state true
	I0924 01:10:50.229871   61070 host.go:66] Checking if "no-preload-674057" exists ...
	I0924 01:10:50.230228   61070 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:10:50.230268   61070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:10:50.230320   61070 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:10:50.230351   61070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:10:50.230355   61070 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:10:50.230389   61070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:10:50.231531   61070 out.go:177] * Verifying Kubernetes components...
	I0924 01:10:50.233506   61070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:10:50.252485   61070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36253
	I0924 01:10:50.252844   61070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34399
	I0924 01:10:50.253068   61070 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:10:50.253217   61070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45253
	I0924 01:10:50.253653   61070 main.go:141] libmachine: Using API Version  1
	I0924 01:10:50.253676   61070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:10:50.253705   61070 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:10:50.254050   61070 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:10:50.254203   61070 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:10:50.254236   61070 main.go:141] libmachine: Using API Version  1
	I0924 01:10:50.254250   61070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:10:50.254591   61070 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:10:50.254814   61070 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:10:50.254829   61070 main.go:141] libmachine: Using API Version  1
	I0924 01:10:50.254851   61070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:10:50.254864   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetState
	I0924 01:10:50.254984   61070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:10:50.255389   61070 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:10:50.255983   61070 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:10:50.256028   61070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:10:50.258757   61070 addons.go:234] Setting addon default-storageclass=true in "no-preload-674057"
	W0924 01:10:50.258781   61070 addons.go:243] addon default-storageclass should already be in state true
	I0924 01:10:50.258861   61070 host.go:66] Checking if "no-preload-674057" exists ...
	I0924 01:10:50.259186   61070 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:10:50.259237   61070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:10:50.276636   61070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44681
	I0924 01:10:50.276806   61070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45851
	I0924 01:10:50.277196   61070 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:10:50.277312   61070 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:10:50.277771   61070 main.go:141] libmachine: Using API Version  1
	I0924 01:10:50.277795   61070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:10:50.278022   61070 main.go:141] libmachine: Using API Version  1
	I0924 01:10:50.278044   61070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:10:50.278213   61070 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:10:50.278380   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetState
	I0924 01:10:50.278485   61070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39655
	I0924 01:10:50.278806   61070 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:10:50.278877   61070 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:10:50.279106   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetState
	I0924 01:10:50.279244   61070 main.go:141] libmachine: Using API Version  1
	I0924 01:10:50.279260   61070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:10:50.279668   61070 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:10:50.280215   61070 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:10:50.280263   61070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:10:50.280315   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:10:50.281796   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:10:50.282123   61070 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:10:50.283561   61070 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0924 01:10:48.129312   61989 out.go:235]   - Booting up control plane ...
	I0924 01:10:48.129446   61989 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 01:10:48.139821   61989 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 01:10:48.143120   61989 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 01:10:48.144038   61989 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 01:10:48.146275   61989 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0924 01:10:50.283656   61070 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 01:10:50.283674   61070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 01:10:50.283688   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:10:50.284778   61070 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0924 01:10:50.284793   61070 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0924 01:10:50.284811   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:10:50.288106   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:10:50.288477   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:10:50.288498   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:10:50.288524   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:10:50.288679   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:10:50.288867   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:10:50.289019   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:10:50.289185   61070 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/no-preload-674057/id_rsa Username:docker}
	I0924 01:10:50.289309   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:10:50.289338   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:10:50.289613   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:10:50.289773   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:10:50.289938   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:10:50.290073   61070 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/no-preload-674057/id_rsa Username:docker}
	I0924 01:10:50.323722   61070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38397
	I0924 01:10:50.324221   61070 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:10:50.324873   61070 main.go:141] libmachine: Using API Version  1
	I0924 01:10:50.324901   61070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:10:50.325334   61070 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:10:50.325572   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetState
	I0924 01:10:50.327779   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:10:50.328071   61070 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 01:10:50.328092   61070 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 01:10:50.328119   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:10:50.331721   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:10:50.331988   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:10:50.332022   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:10:50.332209   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:10:50.332455   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:10:50.332658   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:10:50.332837   61070 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/no-preload-674057/id_rsa Username:docker}
	I0924 01:10:50.471507   61070 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 01:10:50.502289   61070 node_ready.go:35] waiting up to 6m0s for node "no-preload-674057" to be "Ready" ...
	I0924 01:10:50.522752   61070 node_ready.go:49] node "no-preload-674057" has status "Ready":"True"
	I0924 01:10:50.522784   61070 node_ready.go:38] duration metric: took 20.46398ms for node "no-preload-674057" to be "Ready" ...
	I0924 01:10:50.522797   61070 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:10:50.537297   61070 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nqwzr" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:50.576703   61070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 01:10:50.638655   61070 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0924 01:10:50.638679   61070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0924 01:10:50.673535   61070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 01:10:50.691443   61070 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0924 01:10:50.691475   61070 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0924 01:10:50.791572   61070 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 01:10:50.791596   61070 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0924 01:10:50.887143   61070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 01:10:51.535179   61070 main.go:141] libmachine: Making call to close driver server
	I0924 01:10:51.535211   61070 main.go:141] libmachine: (no-preload-674057) Calling .Close
	I0924 01:10:51.535247   61070 main.go:141] libmachine: Making call to close driver server
	I0924 01:10:51.535269   61070 main.go:141] libmachine: (no-preload-674057) Calling .Close
	I0924 01:10:51.535531   61070 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:10:51.535553   61070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:10:51.535563   61070 main.go:141] libmachine: Making call to close driver server
	I0924 01:10:51.535571   61070 main.go:141] libmachine: (no-preload-674057) Calling .Close
	I0924 01:10:51.535572   61070 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:10:51.535584   61070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:10:51.535591   61070 main.go:141] libmachine: Making call to close driver server
	I0924 01:10:51.535598   61070 main.go:141] libmachine: (no-preload-674057) Calling .Close
	I0924 01:10:51.535809   61070 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:10:51.535830   61070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:10:51.536069   61070 main.go:141] libmachine: (no-preload-674057) DBG | Closing plugin on server side
	I0924 01:10:51.536078   61070 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:10:51.536088   61070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:10:51.563511   61070 main.go:141] libmachine: Making call to close driver server
	I0924 01:10:51.563537   61070 main.go:141] libmachine: (no-preload-674057) Calling .Close
	I0924 01:10:51.563856   61070 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:10:51.563880   61070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:10:51.800860   61070 main.go:141] libmachine: Making call to close driver server
	I0924 01:10:51.800889   61070 main.go:141] libmachine: (no-preload-674057) Calling .Close
	I0924 01:10:51.801192   61070 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:10:51.801211   61070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:10:51.801224   61070 main.go:141] libmachine: Making call to close driver server
	I0924 01:10:51.801233   61070 main.go:141] libmachine: (no-preload-674057) Calling .Close
	I0924 01:10:51.801527   61070 main.go:141] libmachine: (no-preload-674057) DBG | Closing plugin on server side
	I0924 01:10:51.801559   61070 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:10:51.801567   61070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:10:51.801577   61070 addons.go:475] Verifying addon metrics-server=true in "no-preload-674057"
	I0924 01:10:51.803735   61070 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0924 01:10:51.805581   61070 addons.go:510] duration metric: took 1.575985263s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0924 01:10:52.544028   61070 pod_ready.go:103] pod "coredns-7c65d6cfc9-nqwzr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:10:53.564056   61070 pod_ready.go:93] pod "coredns-7c65d6cfc9-nqwzr" in "kube-system" namespace has status "Ready":"True"
	I0924 01:10:53.564089   61070 pod_ready.go:82] duration metric: took 3.026767371s for pod "coredns-7c65d6cfc9-nqwzr" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:53.564102   61070 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-x7cv6" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:53.573039   61070 pod_ready.go:93] pod "coredns-7c65d6cfc9-x7cv6" in "kube-system" namespace has status "Ready":"True"
	I0924 01:10:53.573076   61070 pod_ready.go:82] duration metric: took 8.965144ms for pod "coredns-7c65d6cfc9-x7cv6" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:53.573090   61070 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.081080   61070 pod_ready.go:93] pod "etcd-no-preload-674057" in "kube-system" namespace has status "Ready":"True"
	I0924 01:10:54.081105   61070 pod_ready.go:82] duration metric: took 508.007072ms for pod "etcd-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.081115   61070 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.087054   61070 pod_ready.go:93] pod "kube-apiserver-no-preload-674057" in "kube-system" namespace has status "Ready":"True"
	I0924 01:10:54.087079   61070 pod_ready.go:82] duration metric: took 5.957569ms for pod "kube-apiserver-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.087091   61070 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.094018   61070 pod_ready.go:93] pod "kube-controller-manager-no-preload-674057" in "kube-system" namespace has status "Ready":"True"
	I0924 01:10:54.094043   61070 pod_ready.go:82] duration metric: took 6.944048ms for pod "kube-controller-manager-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.094053   61070 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-k54d7" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.341307   61070 pod_ready.go:93] pod "kube-proxy-k54d7" in "kube-system" namespace has status "Ready":"True"
	I0924 01:10:54.341326   61070 pod_ready.go:82] duration metric: took 247.267987ms for pod "kube-proxy-k54d7" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.341335   61070 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.741702   61070 pod_ready.go:93] pod "kube-scheduler-no-preload-674057" in "kube-system" namespace has status "Ready":"True"
	I0924 01:10:54.741732   61070 pod_ready.go:82] duration metric: took 400.389532ms for pod "kube-scheduler-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.741742   61070 pod_ready.go:39] duration metric: took 4.218931841s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:10:54.741759   61070 api_server.go:52] waiting for apiserver process to appear ...
	I0924 01:10:54.741827   61070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:10:54.758692   61070 api_server.go:72] duration metric: took 4.529120201s to wait for apiserver process to appear ...
	I0924 01:10:54.758723   61070 api_server.go:88] waiting for apiserver healthz status ...
	I0924 01:10:54.758744   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:10:54.764587   61070 api_server.go:279] https://192.168.50.161:8443/healthz returned 200:
	ok
	I0924 01:10:54.765620   61070 api_server.go:141] control plane version: v1.31.1
	I0924 01:10:54.765639   61070 api_server.go:131] duration metric: took 6.909845ms to wait for apiserver health ...
	I0924 01:10:54.765646   61070 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 01:10:54.945080   61070 system_pods.go:59] 9 kube-system pods found
	I0924 01:10:54.945121   61070 system_pods.go:61] "coredns-7c65d6cfc9-nqwzr" [9773e4bf-9848-47d8-b87b-897fbdd22d42] Running
	I0924 01:10:54.945128   61070 system_pods.go:61] "coredns-7c65d6cfc9-x7cv6" [9e96941a-b045-48e2-be06-50cc29f8ec25] Running
	I0924 01:10:54.945134   61070 system_pods.go:61] "etcd-no-preload-674057" [3ed2a57d-06a2-4ee2-9bc0-9042c1a88d09] Running
	I0924 01:10:54.945140   61070 system_pods.go:61] "kube-apiserver-no-preload-674057" [e915c4f9-a44e-4d36-9bf4-033de2a512f2] Running
	I0924 01:10:54.945145   61070 system_pods.go:61] "kube-controller-manager-no-preload-674057" [71492ec7-1fd8-49a3-973d-b62141c7b768] Running
	I0924 01:10:54.945150   61070 system_pods.go:61] "kube-proxy-k54d7" [b67ac411-52b5-4d58-9db3-d2d92b63a21f] Running
	I0924 01:10:54.945161   61070 system_pods.go:61] "kube-scheduler-no-preload-674057" [927b2a09-4fb1-499c-a2e6-6185a88facdd] Running
	I0924 01:10:54.945172   61070 system_pods.go:61] "metrics-server-6867b74b74-w5j2x" [57fd868f-ab5c-495a-869a-45e8f81f4014] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:10:54.945180   61070 system_pods.go:61] "storage-provisioner" [341fd764-a3bd-4d28-bc6a-6ec9fa8a5347] Running
	I0924 01:10:54.945191   61070 system_pods.go:74] duration metric: took 179.539019ms to wait for pod list to return data ...
	I0924 01:10:54.945205   61070 default_sa.go:34] waiting for default service account to be created ...
	I0924 01:10:55.141944   61070 default_sa.go:45] found service account: "default"
	I0924 01:10:55.141973   61070 default_sa.go:55] duration metric: took 196.760922ms for default service account to be created ...
	I0924 01:10:55.141984   61070 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 01:10:55.344235   61070 system_pods.go:86] 9 kube-system pods found
	I0924 01:10:55.344273   61070 system_pods.go:89] "coredns-7c65d6cfc9-nqwzr" [9773e4bf-9848-47d8-b87b-897fbdd22d42] Running
	I0924 01:10:55.344282   61070 system_pods.go:89] "coredns-7c65d6cfc9-x7cv6" [9e96941a-b045-48e2-be06-50cc29f8ec25] Running
	I0924 01:10:55.344288   61070 system_pods.go:89] "etcd-no-preload-674057" [3ed2a57d-06a2-4ee2-9bc0-9042c1a88d09] Running
	I0924 01:10:55.344293   61070 system_pods.go:89] "kube-apiserver-no-preload-674057" [e915c4f9-a44e-4d36-9bf4-033de2a512f2] Running
	I0924 01:10:55.344297   61070 system_pods.go:89] "kube-controller-manager-no-preload-674057" [71492ec7-1fd8-49a3-973d-b62141c7b768] Running
	I0924 01:10:55.344301   61070 system_pods.go:89] "kube-proxy-k54d7" [b67ac411-52b5-4d58-9db3-d2d92b63a21f] Running
	I0924 01:10:55.344304   61070 system_pods.go:89] "kube-scheduler-no-preload-674057" [927b2a09-4fb1-499c-a2e6-6185a88facdd] Running
	I0924 01:10:55.344310   61070 system_pods.go:89] "metrics-server-6867b74b74-w5j2x" [57fd868f-ab5c-495a-869a-45e8f81f4014] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:10:55.344315   61070 system_pods.go:89] "storage-provisioner" [341fd764-a3bd-4d28-bc6a-6ec9fa8a5347] Running
	I0924 01:10:55.344324   61070 system_pods.go:126] duration metric: took 202.334823ms to wait for k8s-apps to be running ...
	I0924 01:10:55.344361   61070 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 01:10:55.344406   61070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 01:10:55.361050   61070 system_svc.go:56] duration metric: took 16.6812ms WaitForService to wait for kubelet
	I0924 01:10:55.361082   61070 kubeadm.go:582] duration metric: took 5.13151522s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 01:10:55.361104   61070 node_conditions.go:102] verifying NodePressure condition ...
	I0924 01:10:55.541766   61070 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 01:10:55.541799   61070 node_conditions.go:123] node cpu capacity is 2
	I0924 01:10:55.541812   61070 node_conditions.go:105] duration metric: took 180.702708ms to run NodePressure ...
	I0924 01:10:55.541826   61070 start.go:241] waiting for startup goroutines ...
	I0924 01:10:55.541837   61070 start.go:246] waiting for cluster config update ...
	I0924 01:10:55.541850   61070 start.go:255] writing updated cluster config ...
	I0924 01:10:55.542100   61070 ssh_runner.go:195] Run: rm -f paused
	I0924 01:10:55.590629   61070 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0924 01:10:55.592850   61070 out.go:177] * Done! kubectl is now configured to use "no-preload-674057" cluster and "default" namespace by default
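	The no-preload-674057 run above ends with the node Ready and metrics-server still Pending. As an illustrative cross-check outside the harness (assuming the usual metrics-server wiring, in which the addon registers the v1beta1.metrics.k8s.io APIService applied above), the same state could be inspected from the host with kubectl:
	
		kubectl --context no-preload-674057 -n kube-system get pods -o wide
		kubectl --context no-preload-674057 get apiservice v1beta1.metrics.k8s.io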
	I0924 01:11:28.148929   61989 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0924 01:11:28.149086   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:11:28.149360   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:11:33.150102   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:11:33.150283   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:11:43.151281   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:11:43.151540   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:12:03.152338   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:12:03.152562   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:12:43.151221   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:12:43.151503   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:12:43.151532   61989 kubeadm.go:310] 
	I0924 01:12:43.151585   61989 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0924 01:12:43.151645   61989 kubeadm.go:310] 		timed out waiting for the condition
	I0924 01:12:43.151655   61989 kubeadm.go:310] 
	I0924 01:12:43.151729   61989 kubeadm.go:310] 	This error is likely caused by:
	I0924 01:12:43.151779   61989 kubeadm.go:310] 		- The kubelet is not running
	I0924 01:12:43.151940   61989 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0924 01:12:43.151954   61989 kubeadm.go:310] 
	I0924 01:12:43.152095   61989 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0924 01:12:43.152154   61989 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0924 01:12:43.152201   61989 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0924 01:12:43.152207   61989 kubeadm.go:310] 
	I0924 01:12:43.152294   61989 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0924 01:12:43.152411   61989 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0924 01:12:43.152424   61989 kubeadm.go:310] 
	I0924 01:12:43.152565   61989 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0924 01:12:43.152653   61989 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0924 01:12:43.152718   61989 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0924 01:12:43.152794   61989 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0924 01:12:43.152802   61989 kubeadm.go:310] 
	I0924 01:12:43.153600   61989 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 01:12:43.153699   61989 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0924 01:12:43.153757   61989 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
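	The kubelet-check failures reported above can be reproduced manually on the node (for example over minikube ssh) with the same probes kubeadm quotes; this is only a sketch of that manual check, not something the harness runs:
	
		sudo systemctl status kubelet
		sudo journalctl -xeu kubelet | tail -n 50
		curl -sSL http://localhost:10248/healthz
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause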
	I0924 01:12:43.153808   61989 kubeadm.go:394] duration metric: took 7m57.944266289s to StartCluster
	I0924 01:12:43.153845   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:12:43.153894   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:12:43.199866   61989 cri.go:89] found id: ""
	I0924 01:12:43.199896   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.199908   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:12:43.199916   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:12:43.199975   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:12:43.235387   61989 cri.go:89] found id: ""
	I0924 01:12:43.235420   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.235432   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:12:43.235441   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:12:43.235513   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:12:43.271255   61989 cri.go:89] found id: ""
	I0924 01:12:43.271290   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.271303   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:12:43.271312   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:12:43.271380   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:12:43.305842   61989 cri.go:89] found id: ""
	I0924 01:12:43.305870   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.305882   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:12:43.305891   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:12:43.305952   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:12:43.341956   61989 cri.go:89] found id: ""
	I0924 01:12:43.341983   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.342005   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:12:43.342013   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:12:43.342093   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:12:43.376362   61989 cri.go:89] found id: ""
	I0924 01:12:43.376399   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.376421   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:12:43.376431   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:12:43.376487   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:12:43.409351   61989 cri.go:89] found id: ""
	I0924 01:12:43.409378   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.409387   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:12:43.409392   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:12:43.409459   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:12:43.442446   61989 cri.go:89] found id: ""
	I0924 01:12:43.442479   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.442487   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:12:43.442497   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:12:43.442507   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:12:43.498980   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:12:43.499020   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:12:43.520090   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:12:43.520120   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:12:43.612212   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:12:43.612242   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:12:43.612255   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:12:43.727355   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:12:43.727395   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0924 01:12:43.770163   61989 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0924 01:12:43.770217   61989 out.go:270] * 
	W0924 01:12:43.770282   61989 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0924 01:12:43.770297   61989 out.go:270] * 
	W0924 01:12:43.771298   61989 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 01:12:43.775708   61989 out.go:201] 
	W0924 01:12:43.777139   61989 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0924 01:12:43.777186   61989 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0924 01:12:43.777214   61989 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0924 01:12:43.779580   61989 out.go:201] 
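	Following the suggestion printed above, a retry of this profile would pass the kubelet cgroup driver explicitly. The flags other than --extra-config are assumptions taken from this run's configuration (kvm2 driver, cri-o runtime, Kubernetes v1.20.0), so treat this as a sketch rather than the harness's actual invocation:
	
		minikube start -p <profile> --driver=kvm2 --container-runtime=crio \
		  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd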
	
	
	==> CRI-O <==
	Sep 24 01:18:13 embed-certs-650507 crio[708]: time="2024-09-24 01:18:13.160764202Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140693160740048,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=703cfa2e-dc76-46f6-a565-1f799614568d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:18:13 embed-certs-650507 crio[708]: time="2024-09-24 01:18:13.161202110Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6a4e0c01-579a-49de-bfa6-6e0e47fee153 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:18:13 embed-certs-650507 crio[708]: time="2024-09-24 01:18:13.161256811Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6a4e0c01-579a-49de-bfa6-6e0e47fee153 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:18:13 embed-certs-650507 crio[708]: time="2024-09-24 01:18:13.161467264Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:893850a1eae8ca98a68d0ba4fe2186a6866b37671253fe43630c39f54e1f5ab1,PodSandboxId:b82ddf390aae73e4e41ff9954830ddc6ab5bb8978df5b56ce6763522b64e1814,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727140141549982433,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 364a4d4a-7316-48d0-a3e1-1dedff564dfb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:074b3dcea6a1b842287d4ea05df3b6a34ae74b62d49fb45c4f756af35a190e30,PodSandboxId:0fb08ba989a064c0c9ccb50e9dddd093aba3b996e61c42d48e65d866571b7c1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727140140489200348,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7295k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3261d435-8cb5-4712-8459-26ba766e88e0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2b8d78fea47d7db3004dd9e26fa2cea4c47a902a9cee8843a5025632022965c,PodSandboxId:f42ebfa5722079c7c60a97ac208b5794d36232c216e36c2854884643729682b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727140140476657963,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-r6tcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d
f80e9b5-4b43-4b8f-992e-8813ceca39fe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eae4121650134f3b46606d6217bb0062ac5d419292637807cdf764cf3fa012d0,PodSandboxId:b94c7b554387c34ec87cdd69492aad2c021a76ac84aaa2a8764eeb1a80dd7032,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1727140139731934145,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mwtkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a893121-8161-4fbc-bb59-1e08483e82b8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceccfc5326d1fe281b19d10f3a876c5e1ba33e9f3ed5ee5a270e1920e8a64db5,PodSandboxId:23cd8f9efeccb8376763e45065124c44dccbf3e9883a12e8fd4f1df69b89e65b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727140128712219871,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-650507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37bc3af1aeab493bbfdd6a891ec43ade,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:890822add546c0fd43b338fd565507418185f3e807ae432e09dce95b4cca1a91,PodSandboxId:b1ca50f340010d99d941b16752b9715e43cdfbf119e472028cff0737852a66f2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727140128752143250,Labels:map[string]st
ring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-650507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74a5b368bb311b8dfe8645c792d9f518,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4835c3bf7d1f3b66b01e41b51bb9e6385ab3c81209ab7dd51a8872f040e2c1ef,PodSandboxId:ca3a24420e277aa560413f32b1b0d32c91c5a4428c0931384c214e40259d996a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727140128715072344,Labels:map[string]string{io.kubernetes.con
tainer.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-650507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0345d91892f0fc6339534c66f70a20a5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:357d70ef1ae9b34d495f10b22cb93900bcc8c88e39fd42dd8de59ee644b5b3b9,PodSandboxId:3d8b126216a2d0f7c74e64ffd07546d29a9065a6c4e90eab42212b5757b6e78c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727140128692086373,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-650507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a853459933d33d31d91a6ce8922f864,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd8c1d0aaf17e67e6d1d2e104a7557c447d1d6304e0361a1f92190efe6cb6018,PodSandboxId:7e7cfa3bae812cf57caa43231975757927fcfa0cf01c076f1f45d9d0898ba881,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727139844770755880,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-650507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74a5b368bb311b8dfe8645c792d9f518,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6a4e0c01-579a-49de-bfa6-6e0e47fee153 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:18:13 embed-certs-650507 crio[708]: time="2024-09-24 01:18:13.198063826Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f1fe54b8-ac66-4d02-a0b9-8771654b2329 name=/runtime.v1.RuntimeService/Version
	Sep 24 01:18:13 embed-certs-650507 crio[708]: time="2024-09-24 01:18:13.198141428Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f1fe54b8-ac66-4d02-a0b9-8771654b2329 name=/runtime.v1.RuntimeService/Version
	Sep 24 01:18:13 embed-certs-650507 crio[708]: time="2024-09-24 01:18:13.199623608Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=71ac7631-eeb8-40ab-abd1-93602f686d60 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:18:13 embed-certs-650507 crio[708]: time="2024-09-24 01:18:13.200158878Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140693200124233,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=71ac7631-eeb8-40ab-abd1-93602f686d60 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:18:13 embed-certs-650507 crio[708]: time="2024-09-24 01:18:13.200830832Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8c278092-5b93-4983-9823-fa7526c5db55 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:18:13 embed-certs-650507 crio[708]: time="2024-09-24 01:18:13.200906617Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8c278092-5b93-4983-9823-fa7526c5db55 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:18:13 embed-certs-650507 crio[708]: time="2024-09-24 01:18:13.201108698Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:893850a1eae8ca98a68d0ba4fe2186a6866b37671253fe43630c39f54e1f5ab1,PodSandboxId:b82ddf390aae73e4e41ff9954830ddc6ab5bb8978df5b56ce6763522b64e1814,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727140141549982433,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 364a4d4a-7316-48d0-a3e1-1dedff564dfb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:074b3dcea6a1b842287d4ea05df3b6a34ae74b62d49fb45c4f756af35a190e30,PodSandboxId:0fb08ba989a064c0c9ccb50e9dddd093aba3b996e61c42d48e65d866571b7c1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727140140489200348,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7295k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3261d435-8cb5-4712-8459-26ba766e88e0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2b8d78fea47d7db3004dd9e26fa2cea4c47a902a9cee8843a5025632022965c,PodSandboxId:f42ebfa5722079c7c60a97ac208b5794d36232c216e36c2854884643729682b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727140140476657963,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-r6tcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d
f80e9b5-4b43-4b8f-992e-8813ceca39fe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eae4121650134f3b46606d6217bb0062ac5d419292637807cdf764cf3fa012d0,PodSandboxId:b94c7b554387c34ec87cdd69492aad2c021a76ac84aaa2a8764eeb1a80dd7032,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1727140139731934145,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mwtkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a893121-8161-4fbc-bb59-1e08483e82b8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceccfc5326d1fe281b19d10f3a876c5e1ba33e9f3ed5ee5a270e1920e8a64db5,PodSandboxId:23cd8f9efeccb8376763e45065124c44dccbf3e9883a12e8fd4f1df69b89e65b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727140128712219871,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-650507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37bc3af1aeab493bbfdd6a891ec43ade,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:890822add546c0fd43b338fd565507418185f3e807ae432e09dce95b4cca1a91,PodSandboxId:b1ca50f340010d99d941b16752b9715e43cdfbf119e472028cff0737852a66f2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727140128752143250,Labels:map[string]st
ring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-650507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74a5b368bb311b8dfe8645c792d9f518,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4835c3bf7d1f3b66b01e41b51bb9e6385ab3c81209ab7dd51a8872f040e2c1ef,PodSandboxId:ca3a24420e277aa560413f32b1b0d32c91c5a4428c0931384c214e40259d996a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727140128715072344,Labels:map[string]string{io.kubernetes.con
tainer.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-650507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0345d91892f0fc6339534c66f70a20a5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:357d70ef1ae9b34d495f10b22cb93900bcc8c88e39fd42dd8de59ee644b5b3b9,PodSandboxId:3d8b126216a2d0f7c74e64ffd07546d29a9065a6c4e90eab42212b5757b6e78c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727140128692086373,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-650507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a853459933d33d31d91a6ce8922f864,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd8c1d0aaf17e67e6d1d2e104a7557c447d1d6304e0361a1f92190efe6cb6018,PodSandboxId:7e7cfa3bae812cf57caa43231975757927fcfa0cf01c076f1f45d9d0898ba881,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727139844770755880,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-650507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74a5b368bb311b8dfe8645c792d9f518,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8c278092-5b93-4983-9823-fa7526c5db55 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:18:13 embed-certs-650507 crio[708]: time="2024-09-24 01:18:13.235702981Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=84ed173a-11d5-42a6-a861-2308f96981ec name=/runtime.v1.RuntimeService/Version
	Sep 24 01:18:13 embed-certs-650507 crio[708]: time="2024-09-24 01:18:13.235780672Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=84ed173a-11d5-42a6-a861-2308f96981ec name=/runtime.v1.RuntimeService/Version
	Sep 24 01:18:13 embed-certs-650507 crio[708]: time="2024-09-24 01:18:13.237447223Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ed4f81d3-79c0-4be5-9a68-9865dd593472 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:18:13 embed-certs-650507 crio[708]: time="2024-09-24 01:18:13.238032313Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140693238008881,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ed4f81d3-79c0-4be5-9a68-9865dd593472 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:18:13 embed-certs-650507 crio[708]: time="2024-09-24 01:18:13.238789401Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9a5c780e-d2c5-4fae-b4a1-ccf064df9825 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:18:13 embed-certs-650507 crio[708]: time="2024-09-24 01:18:13.238841062Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9a5c780e-d2c5-4fae-b4a1-ccf064df9825 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:18:13 embed-certs-650507 crio[708]: time="2024-09-24 01:18:13.239036999Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:893850a1eae8ca98a68d0ba4fe2186a6866b37671253fe43630c39f54e1f5ab1,PodSandboxId:b82ddf390aae73e4e41ff9954830ddc6ab5bb8978df5b56ce6763522b64e1814,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727140141549982433,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 364a4d4a-7316-48d0-a3e1-1dedff564dfb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:074b3dcea6a1b842287d4ea05df3b6a34ae74b62d49fb45c4f756af35a190e30,PodSandboxId:0fb08ba989a064c0c9ccb50e9dddd093aba3b996e61c42d48e65d866571b7c1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727140140489200348,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7295k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3261d435-8cb5-4712-8459-26ba766e88e0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2b8d78fea47d7db3004dd9e26fa2cea4c47a902a9cee8843a5025632022965c,PodSandboxId:f42ebfa5722079c7c60a97ac208b5794d36232c216e36c2854884643729682b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727140140476657963,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-r6tcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d
f80e9b5-4b43-4b8f-992e-8813ceca39fe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eae4121650134f3b46606d6217bb0062ac5d419292637807cdf764cf3fa012d0,PodSandboxId:b94c7b554387c34ec87cdd69492aad2c021a76ac84aaa2a8764eeb1a80dd7032,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1727140139731934145,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mwtkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a893121-8161-4fbc-bb59-1e08483e82b8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceccfc5326d1fe281b19d10f3a876c5e1ba33e9f3ed5ee5a270e1920e8a64db5,PodSandboxId:23cd8f9efeccb8376763e45065124c44dccbf3e9883a12e8fd4f1df69b89e65b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727140128712219871,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-650507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37bc3af1aeab493bbfdd6a891ec43ade,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:890822add546c0fd43b338fd565507418185f3e807ae432e09dce95b4cca1a91,PodSandboxId:b1ca50f340010d99d941b16752b9715e43cdfbf119e472028cff0737852a66f2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727140128752143250,Labels:map[string]st
ring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-650507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74a5b368bb311b8dfe8645c792d9f518,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4835c3bf7d1f3b66b01e41b51bb9e6385ab3c81209ab7dd51a8872f040e2c1ef,PodSandboxId:ca3a24420e277aa560413f32b1b0d32c91c5a4428c0931384c214e40259d996a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727140128715072344,Labels:map[string]string{io.kubernetes.con
tainer.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-650507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0345d91892f0fc6339534c66f70a20a5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:357d70ef1ae9b34d495f10b22cb93900bcc8c88e39fd42dd8de59ee644b5b3b9,PodSandboxId:3d8b126216a2d0f7c74e64ffd07546d29a9065a6c4e90eab42212b5757b6e78c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727140128692086373,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-650507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a853459933d33d31d91a6ce8922f864,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd8c1d0aaf17e67e6d1d2e104a7557c447d1d6304e0361a1f92190efe6cb6018,PodSandboxId:7e7cfa3bae812cf57caa43231975757927fcfa0cf01c076f1f45d9d0898ba881,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727139844770755880,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-650507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74a5b368bb311b8dfe8645c792d9f518,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9a5c780e-d2c5-4fae-b4a1-ccf064df9825 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:18:13 embed-certs-650507 crio[708]: time="2024-09-24 01:18:13.289226408Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=72a0b7cf-f73b-4cb6-8c58-28f3447bf127 name=/runtime.v1.RuntimeService/Version
	Sep 24 01:18:13 embed-certs-650507 crio[708]: time="2024-09-24 01:18:13.289300963Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=72a0b7cf-f73b-4cb6-8c58-28f3447bf127 name=/runtime.v1.RuntimeService/Version
	Sep 24 01:18:13 embed-certs-650507 crio[708]: time="2024-09-24 01:18:13.290973661Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d5be49b2-8175-48af-b47a-3e2743733437 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:18:13 embed-certs-650507 crio[708]: time="2024-09-24 01:18:13.291439039Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140693291394720,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d5be49b2-8175-48af-b47a-3e2743733437 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:18:13 embed-certs-650507 crio[708]: time="2024-09-24 01:18:13.292381952Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5db821cd-420c-46e9-83df-3e227f763130 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:18:13 embed-certs-650507 crio[708]: time="2024-09-24 01:18:13.292440050Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5db821cd-420c-46e9-83df-3e227f763130 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:18:13 embed-certs-650507 crio[708]: time="2024-09-24 01:18:13.292698792Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:893850a1eae8ca98a68d0ba4fe2186a6866b37671253fe43630c39f54e1f5ab1,PodSandboxId:b82ddf390aae73e4e41ff9954830ddc6ab5bb8978df5b56ce6763522b64e1814,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727140141549982433,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 364a4d4a-7316-48d0-a3e1-1dedff564dfb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:074b3dcea6a1b842287d4ea05df3b6a34ae74b62d49fb45c4f756af35a190e30,PodSandboxId:0fb08ba989a064c0c9ccb50e9dddd093aba3b996e61c42d48e65d866571b7c1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727140140489200348,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7295k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3261d435-8cb5-4712-8459-26ba766e88e0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2b8d78fea47d7db3004dd9e26fa2cea4c47a902a9cee8843a5025632022965c,PodSandboxId:f42ebfa5722079c7c60a97ac208b5794d36232c216e36c2854884643729682b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727140140476657963,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-r6tcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d
f80e9b5-4b43-4b8f-992e-8813ceca39fe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eae4121650134f3b46606d6217bb0062ac5d419292637807cdf764cf3fa012d0,PodSandboxId:b94c7b554387c34ec87cdd69492aad2c021a76ac84aaa2a8764eeb1a80dd7032,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1727140139731934145,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mwtkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a893121-8161-4fbc-bb59-1e08483e82b8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceccfc5326d1fe281b19d10f3a876c5e1ba33e9f3ed5ee5a270e1920e8a64db5,PodSandboxId:23cd8f9efeccb8376763e45065124c44dccbf3e9883a12e8fd4f1df69b89e65b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727140128712219871,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-650507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37bc3af1aeab493bbfdd6a891ec43ade,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:890822add546c0fd43b338fd565507418185f3e807ae432e09dce95b4cca1a91,PodSandboxId:b1ca50f340010d99d941b16752b9715e43cdfbf119e472028cff0737852a66f2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727140128752143250,Labels:map[string]st
ring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-650507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74a5b368bb311b8dfe8645c792d9f518,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4835c3bf7d1f3b66b01e41b51bb9e6385ab3c81209ab7dd51a8872f040e2c1ef,PodSandboxId:ca3a24420e277aa560413f32b1b0d32c91c5a4428c0931384c214e40259d996a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727140128715072344,Labels:map[string]string{io.kubernetes.con
tainer.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-650507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0345d91892f0fc6339534c66f70a20a5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:357d70ef1ae9b34d495f10b22cb93900bcc8c88e39fd42dd8de59ee644b5b3b9,PodSandboxId:3d8b126216a2d0f7c74e64ffd07546d29a9065a6c4e90eab42212b5757b6e78c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727140128692086373,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-650507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a853459933d33d31d91a6ce8922f864,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd8c1d0aaf17e67e6d1d2e104a7557c447d1d6304e0361a1f92190efe6cb6018,PodSandboxId:7e7cfa3bae812cf57caa43231975757927fcfa0cf01c076f1f45d9d0898ba881,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727139844770755880,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-650507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74a5b368bb311b8dfe8645c792d9f518,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5db821cd-420c-46e9-83df-3e227f763130 name=/runtime.v1.RuntimeService/ListContainers
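	The Version, ImageFsInfo and ListContainers entries above are standard CRI RPCs that CRI-O answers over its unix socket (unix:///var/run/crio/crio.sock, matching the node's cri-socket annotation further down). A minimal Go sketch that issues the same ListContainers call with an empty filter, which is exactly the "No filters were applied" request that produces the full container list echoed in the debug log; the socket path and the 10-second timeout are assumptions, not values taken from the test harness:

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// CRI-O's default socket; adjust if the node exposes a different path.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()

		// An empty filter mirrors the requests in the crio debug log above and
		// returns every container known to the runtime.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
			Filter: &runtimeapi.ContainerFilter{},
		})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			// Short ID, state (e.g. CONTAINER_RUNNING) and container name.
			fmt.Printf("%-13s %-20s %s\n", c.Id[:13], c.State, c.Metadata.Name)
		}
	}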
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	893850a1eae8c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   b82ddf390aae7       storage-provisioner
	074b3dcea6a1b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   0fb08ba989a06       coredns-7c65d6cfc9-7295k
	a2b8d78fea47d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   f42ebfa572207       coredns-7c65d6cfc9-r6tcj
	eae4121650134       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   9 minutes ago       Running             kube-proxy                0                   b94c7b554387c       kube-proxy-mwtkg
	890822add546c       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   9 minutes ago       Running             kube-apiserver            2                   b1ca50f340010       kube-apiserver-embed-certs-650507
	4835c3bf7d1f3       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   ca3a24420e277       etcd-embed-certs-650507
	ceccfc5326d1f       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   9 minutes ago       Running             kube-scheduler            2                   23cd8f9efeccb       kube-scheduler-embed-certs-650507
	357d70ef1ae9b       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   9 minutes ago       Running             kube-controller-manager   2                   3d8b126216a2d       kube-controller-manager-embed-certs-650507
	bd8c1d0aaf17e       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   14 minutes ago      Exited              kube-apiserver            1                   7e7cfa3bae812       kube-apiserver-embed-certs-650507
	
	
	==> coredns [074b3dcea6a1b842287d4ea05df3b6a34ae74b62d49fb45c4f756af35a190e30] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [a2b8d78fea47d7db3004dd9e26fa2cea4c47a902a9cee8843a5025632022965c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-650507
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-650507
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c
	                    minikube.k8s.io/name=embed-certs-650507
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_24T01_08_55_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 01:08:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-650507
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 01:18:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 01:14:10 +0000   Tue, 24 Sep 2024 01:08:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 01:14:10 +0000   Tue, 24 Sep 2024 01:08:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 01:14:10 +0000   Tue, 24 Sep 2024 01:08:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 01:14:10 +0000   Tue, 24 Sep 2024 01:08:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.104
	  Hostname:    embed-certs-650507
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 44e7d2d592684cc6a2e6581d52cb1b33
	  System UUID:                44e7d2d5-9268-4cc6-a2e6-581d52cb1b33
	  Boot ID:                    7e039e3c-94a1-4e52-a044-820a2cf693d4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-7295k                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m14s
	  kube-system                 coredns-7c65d6cfc9-r6tcj                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m14s
	  kube-system                 etcd-embed-certs-650507                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m19s
	  kube-system                 kube-apiserver-embed-certs-650507             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-controller-manager-embed-certs-650507    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-proxy-mwtkg                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 kube-scheduler-embed-certs-650507             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 metrics-server-6867b74b74-lbm9h               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m13s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m12s                  kube-proxy       
	  Normal  Starting                 9m25s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m25s (x8 over 9m25s)  kubelet          Node embed-certs-650507 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m25s (x8 over 9m25s)  kubelet          Node embed-certs-650507 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m25s (x7 over 9m25s)  kubelet          Node embed-certs-650507 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m19s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m19s                  kubelet          Node embed-certs-650507 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m19s                  kubelet          Node embed-certs-650507 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m19s                  kubelet          Node embed-certs-650507 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m15s                  node-controller  Node embed-certs-650507 event: Registered Node embed-certs-650507 in Controller
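	The node description above (conditions, capacity, allocated resources, events) is read back from the API server; the Ready, MemoryPressure, DiskPressure and PIDPressure conditions shown in the table can also be fetched directly with client-go. A minimal sketch, assuming a placeholder kubeconfig path rather than the one used by this test run:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder kubeconfig path; substitute the profile's kubeconfig.
		config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		node, err := clientset.CoreV1().Nodes().Get(context.Background(),
			"embed-certs-650507", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// The same conditions that appear in the node description above.
		for _, cond := range node.Status.Conditions {
			fmt.Printf("%-16s %-6s %s\n", cond.Type, cond.Status, cond.Reason)
		}
	}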
	
	
	==> dmesg <==
	[  +0.052009] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038108] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.751809] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.956284] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.561546] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.354312] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.062170] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065788] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.188111] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +0.107716] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +0.282249] systemd-fstab-generator[701]: Ignoring "noauto" option for root device
	[Sep24 01:04] systemd-fstab-generator[791]: Ignoring "noauto" option for root device
	[  +2.116916] systemd-fstab-generator[910]: Ignoring "noauto" option for root device
	[  +0.071985] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.518051] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.817598] kauditd_printk_skb: 85 callbacks suppressed
	[Sep24 01:08] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.605181] systemd-fstab-generator[2578]: Ignoring "noauto" option for root device
	[  +4.564894] kauditd_printk_skb: 56 callbacks suppressed
	[  +1.995677] systemd-fstab-generator[2898]: Ignoring "noauto" option for root device
	[  +5.282686] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.120909] systemd-fstab-generator[3045]: Ignoring "noauto" option for root device
	[Sep24 01:09] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [4835c3bf7d1f3b66b01e41b51bb9e6385ab3c81209ab7dd51a8872f040e2c1ef] <==
	{"level":"info","ts":"2024-09-24T01:08:49.147855Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-24T01:08:49.148030Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.104:2380"}
	{"level":"info","ts":"2024-09-24T01:08:49.148047Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.104:2380"}
	{"level":"info","ts":"2024-09-24T01:08:49.149489Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-24T01:08:49.149433Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"223628dc6b2f68bd","initial-advertise-peer-urls":["https://192.168.39.104:2380"],"listen-peer-urls":["https://192.168.39.104:2380"],"advertise-client-urls":["https://192.168.39.104:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.104:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-24T01:08:49.272742Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"223628dc6b2f68bd is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-24T01:08:49.272910Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"223628dc6b2f68bd became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-24T01:08:49.272932Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"223628dc6b2f68bd received MsgPreVoteResp from 223628dc6b2f68bd at term 1"}
	{"level":"info","ts":"2024-09-24T01:08:49.272984Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"223628dc6b2f68bd became candidate at term 2"}
	{"level":"info","ts":"2024-09-24T01:08:49.272992Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"223628dc6b2f68bd received MsgVoteResp from 223628dc6b2f68bd at term 2"}
	{"level":"info","ts":"2024-09-24T01:08:49.273001Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"223628dc6b2f68bd became leader at term 2"}
	{"level":"info","ts":"2024-09-24T01:08:49.273008Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 223628dc6b2f68bd elected leader 223628dc6b2f68bd at term 2"}
	{"level":"info","ts":"2024-09-24T01:08:49.277843Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T01:08:49.278857Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"223628dc6b2f68bd","local-member-attributes":"{Name:embed-certs-650507 ClientURLs:[https://192.168.39.104:2379]}","request-path":"/0/members/223628dc6b2f68bd/attributes","cluster-id":"bcba49d8b8764a98","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-24T01:08:49.278899Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-24T01:08:49.279587Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-24T01:08:49.282936Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-24T01:08:49.284725Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-24T01:08:49.284762Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-24T01:08:49.284900Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bcba49d8b8764a98","local-member-id":"223628dc6b2f68bd","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T01:08:49.285019Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T01:08:49.285057Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T01:08:49.285764Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-24T01:08:49.291039Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.104:2379"}
	{"level":"info","ts":"2024-09-24T01:08:49.292290Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 01:18:13 up 14 min,  0 users,  load average: 0.36, 0.20, 0.17
	Linux embed-certs-650507 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [890822add546c0fd43b338fd565507418185f3e807ae432e09dce95b4cca1a91] <==
	W0924 01:13:52.504364       1 handler_proxy.go:99] no RequestInfo found in the context
	E0924 01:13:52.504737       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0924 01:13:52.505881       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0924 01:13:52.505884       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0924 01:14:52.506375       1 handler_proxy.go:99] no RequestInfo found in the context
	W0924 01:14:52.506391       1 handler_proxy.go:99] no RequestInfo found in the context
	E0924 01:14:52.506720       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0924 01:14:52.506731       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0924 01:14:52.507919       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0924 01:14:52.507957       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0924 01:16:52.508154       1 handler_proxy.go:99] no RequestInfo found in the context
	E0924 01:16:52.508288       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0924 01:16:52.508183       1 handler_proxy.go:99] no RequestInfo found in the context
	E0924 01:16:52.508324       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0924 01:16:52.509359       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0924 01:16:52.509403       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
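	The repeated 503 responses for v1beta1.metrics.k8s.io above show that the apiserver could not reach the aggregated metrics API backed by the metrics-server-6867b74b74-lbm9h pod listed in the node description. One way to probe the same group version from a client is the discovery API; a minimal client-go sketch with a placeholder kubeconfig path (a failing metrics-server surfaces here as a discovery error rather than a resource list):

	package main

	import (
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder kubeconfig path; substitute the profile's kubeconfig.
		config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		// A 503 from the aggregated API is returned as an error; success means
		// metrics-server is serving the metrics.k8s.io/v1beta1 group.
		resources, err := clientset.Discovery().ServerResourcesForGroupVersion("metrics.k8s.io/v1beta1")
		if err != nil {
			fmt.Println("metrics API unavailable:", err)
			return
		}
		for _, r := range resources.APIResources {
			fmt.Println(r.Name)
		}
	}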
	
	
	==> kube-apiserver [bd8c1d0aaf17e67e6d1d2e104a7557c447d1d6304e0361a1f92190efe6cb6018] <==
	W0924 01:08:44.660457       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:08:44.666510       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:08:44.728319       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:08:44.781951       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:08:44.793848       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:08:44.794095       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:08:44.851642       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:08:44.865337       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:08:44.886477       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:08:44.922309       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:08:44.925848       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:08:44.981370       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:08:45.006059       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:08:45.038422       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:08:45.044044       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:08:45.056137       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:08:45.130998       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:08:45.167968       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:08:45.194158       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:08:45.299543       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:08:45.313429       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:08:45.376271       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:08:45.401982       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:08:45.546206       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:08:45.716262       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [357d70ef1ae9b34d495f10b22cb93900bcc8c88e39fd42dd8de59ee644b5b3b9] <==
	E0924 01:12:58.498219       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:12:58.948700       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 01:13:28.504948       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:13:28.958682       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 01:13:58.512158       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:13:58.968998       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0924 01:14:10.271531       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-650507"
	E0924 01:14:28.521969       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:14:28.977287       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 01:14:58.528739       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:14:58.986122       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0924 01:15:06.396746       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="321.489µs"
	I0924 01:15:21.385108       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="185.741µs"
	E0924 01:15:28.535708       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:15:28.995450       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 01:15:58.544349       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:15:59.005762       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 01:16:28.552963       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:16:29.013150       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 01:16:58.560434       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:16:59.021398       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 01:17:28.567358       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:17:29.031112       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 01:17:58.584421       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:17:59.044416       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [eae4121650134f3b46606d6217bb0062ac5d419292637807cdf764cf3fa012d0] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0924 01:09:00.563125       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0924 01:09:00.631243       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.104"]
	E0924 01:09:00.631325       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0924 01:09:00.973706       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0924 01:09:00.973854       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0924 01:09:00.973956       1 server_linux.go:169] "Using iptables Proxier"
	I0924 01:09:00.977098       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0924 01:09:00.977605       1 server.go:483] "Version info" version="v1.31.1"
	I0924 01:09:00.977867       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 01:09:00.980725       1 config.go:105] "Starting endpoint slice config controller"
	I0924 01:09:00.980756       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0924 01:09:00.988631       1 config.go:199] "Starting service config controller"
	I0924 01:09:00.991221       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0924 01:09:00.994164       1 config.go:328] "Starting node config controller"
	I0924 01:09:00.994276       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0924 01:09:01.080992       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0924 01:09:01.094479       1 shared_informer.go:320] Caches are synced for service config
	I0924 01:09:01.095355       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [ceccfc5326d1fe281b19d10f3a876c5e1ba33e9f3ed5ee5a270e1920e8a64db5] <==
	W0924 01:08:51.545018       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0924 01:08:51.545323       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0924 01:08:51.545026       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0924 01:08:51.545357       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0924 01:08:51.545767       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0924 01:08:51.545899       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0924 01:08:52.363026       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0924 01:08:52.363187       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0924 01:08:52.418338       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0924 01:08:52.418828       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 01:08:52.440158       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0924 01:08:52.440342       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0924 01:08:52.515830       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0924 01:08:52.515991       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0924 01:08:52.600884       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0924 01:08:52.601016       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 01:08:52.667702       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0924 01:08:52.667787       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0924 01:08:52.716296       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0924 01:08:52.716382       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0924 01:08:52.813987       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0924 01:08:52.814467       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0924 01:08:52.841346       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0924 01:08:52.841393       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0924 01:08:54.533841       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 24 01:17:04 embed-certs-650507 kubelet[2905]: E0924 01:17:04.367609    2905 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-lbm9h" podUID="fa504c09-2e16-4a5f-b4b3-a47f0733333d"
	Sep 24 01:17:04 embed-certs-650507 kubelet[2905]: E0924 01:17:04.539641    2905 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140624539154216,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:17:04 embed-certs-650507 kubelet[2905]: E0924 01:17:04.539843    2905 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140624539154216,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:17:14 embed-certs-650507 kubelet[2905]: E0924 01:17:14.541716    2905 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140634541167676,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:17:14 embed-certs-650507 kubelet[2905]: E0924 01:17:14.542291    2905 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140634541167676,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:17:19 embed-certs-650507 kubelet[2905]: E0924 01:17:19.367799    2905 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-lbm9h" podUID="fa504c09-2e16-4a5f-b4b3-a47f0733333d"
	Sep 24 01:17:24 embed-certs-650507 kubelet[2905]: E0924 01:17:24.545444    2905 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140644544787224,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:17:24 embed-certs-650507 kubelet[2905]: E0924 01:17:24.545484    2905 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140644544787224,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:17:30 embed-certs-650507 kubelet[2905]: E0924 01:17:30.369790    2905 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-lbm9h" podUID="fa504c09-2e16-4a5f-b4b3-a47f0733333d"
	Sep 24 01:17:34 embed-certs-650507 kubelet[2905]: E0924 01:17:34.547539    2905 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140654547249973,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:17:34 embed-certs-650507 kubelet[2905]: E0924 01:17:34.547696    2905 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140654547249973,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:17:41 embed-certs-650507 kubelet[2905]: E0924 01:17:41.367182    2905 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-lbm9h" podUID="fa504c09-2e16-4a5f-b4b3-a47f0733333d"
	Sep 24 01:17:44 embed-certs-650507 kubelet[2905]: E0924 01:17:44.550111    2905 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140664549726781,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:17:44 embed-certs-650507 kubelet[2905]: E0924 01:17:44.550435    2905 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140664549726781,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:17:54 embed-certs-650507 kubelet[2905]: E0924 01:17:54.381330    2905 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 24 01:17:54 embed-certs-650507 kubelet[2905]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 24 01:17:54 embed-certs-650507 kubelet[2905]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 24 01:17:54 embed-certs-650507 kubelet[2905]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 24 01:17:54 embed-certs-650507 kubelet[2905]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 24 01:17:54 embed-certs-650507 kubelet[2905]: E0924 01:17:54.551613    2905 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140674551332009,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:17:54 embed-certs-650507 kubelet[2905]: E0924 01:17:54.551651    2905 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140674551332009,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:17:56 embed-certs-650507 kubelet[2905]: E0924 01:17:56.367782    2905 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-lbm9h" podUID="fa504c09-2e16-4a5f-b4b3-a47f0733333d"
	Sep 24 01:18:04 embed-certs-650507 kubelet[2905]: E0924 01:18:04.553923    2905 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140684553451763,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:18:04 embed-certs-650507 kubelet[2905]: E0924 01:18:04.554211    2905 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140684553451763,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:18:08 embed-certs-650507 kubelet[2905]: E0924 01:18:08.367321    2905 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-lbm9h" podUID="fa504c09-2e16-4a5f-b4b3-a47f0733333d"
	
	
	==> storage-provisioner [893850a1eae8ca98a68d0ba4fe2186a6866b37671253fe43630c39f54e1f5ab1] <==
	I0924 01:09:01.668873       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0924 01:09:01.683354       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0924 01:09:01.683454       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0924 01:09:01.694317       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0924 01:09:01.694732       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-650507_86e1500b-f31a-424e-b809-06721c823370!
	I0924 01:09:01.694911       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"147b93bc-c19d-4705-8f8d-573893a60402", APIVersion:"v1", ResourceVersion:"441", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-650507_86e1500b-f31a-424e-b809-06721c823370 became leader
	I0924 01:09:01.795679       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-650507_86e1500b-f31a-424e-b809-06721c823370!
	

-- /stdout --
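The apiserver and controller-manager logs above show the v1beta1.metrics.k8s.io APIService repeatedly answering 503 and its discovery going stale, which matches the metrics-server pod stuck in ImagePullBackOff in the kubelet log. A minimal way to confirm this from the same kubeconfig context, assuming the addon's usual k8s-app=metrics-server label (the label itself is not shown in the logs):

    # Is the aggregated metrics API marked Available, and if not, why?
    kubectl --context embed-certs-650507 describe apiservice v1beta1.metrics.k8s.io
    # Which pods back it, and what state are they in?
    kubectl --context embed-certs-650507 -n kube-system get pods -l k8s-app=metrics-server -o wide

If the APIService reports something like MissingEndpoints or FailedDiscoveryCheck while the pod sits in ImagePullBackOff, the 503s in the apiserver log are the expected consequence rather than a separate failure.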
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-650507 -n embed-certs-650507
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-650507 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-lbm9h
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-650507 describe pod metrics-server-6867b74b74-lbm9h
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-650507 describe pod metrics-server-6867b74b74-lbm9h: exit status 1 (63.876888ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-lbm9h" not found

** /stderr **
helpers_test.go:279: kubectl --context embed-certs-650507 describe pod metrics-server-6867b74b74-lbm9h: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.39s)
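For context on the kubelet errors above: the ImagePullBackOff for fake.domain/registry.k8s.io/echoserver:1.4 is expected, because the Audit table further down records metrics-server being enabled on this profile with --registries=MetricsServer=fake.domain, so the image can never be pulled. A quick sketch for confirming the override on the live cluster (the deployment name metrics-server is inferred from the metrics-server-6867b74b74 ReplicaSet in the controller-manager log):

    # Print the image the metrics-server deployment is actually configured with
    kubectl --context embed-certs-650507 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'

The same override is applied to no-preload-674057 and default-k8s-diff-port-465341 in that table, so equivalent kubelet noise can be expected in those post-mortems as well.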

x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.38s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-674057 -n no-preload-674057
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-09-24 01:19:56.11718156 +0000 UTC m=+6137.548264007
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
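Before reading through the full minikube log dump below, it can be worth checking whether the dashboard addon produced any objects at all after the restart; a minimal sketch using the names taken from the test output above:

    # What does minikube think the addon state is?
    out/minikube-linux-amd64 -p no-preload-674057 addons list
    # Look for pods matching the exact selector the test waits on
    kubectl --context no-preload-674057 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide

An empty pod list here, combined with the "addons enable dashboard -p no-preload-674057" entry in the Audit table below having no End Time, would point at the addon never being applied rather than at dashboard pods failing to schedule.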
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-674057 -n no-preload-674057
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-674057 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-674057 logs -n 25: (2.202402s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p old-k8s-version-171598                              | old-k8s-version-171598       | jenkins | v1.34.0 | 24 Sep 24 00:54 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-075175                              | stopped-upgrade-075175       | jenkins | v1.34.0 | 24 Sep 24 00:54 UTC | 24 Sep 24 00:55 UTC |
	| start   | -p no-preload-674057                                   | no-preload-674057            | jenkins | v1.34.0 | 24 Sep 24 00:55 UTC | 24 Sep 24 00:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-619300                           | kubernetes-upgrade-619300    | jenkins | v1.34.0 | 24 Sep 24 00:55 UTC | 24 Sep 24 00:55 UTC |
	| start   | -p embed-certs-650507                                  | embed-certs-650507           | jenkins | v1.34.0 | 24 Sep 24 00:55 UTC | 24 Sep 24 00:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-811247                              | cert-expiration-811247       | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC | 24 Sep 24 00:56 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-674057             | no-preload-674057            | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC | 24 Sep 24 00:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-674057                                   | no-preload-674057            | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-811247                              | cert-expiration-811247       | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC | 24 Sep 24 00:56 UTC |
	| delete  | -p                                                     | disable-driver-mounts-319683 | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC | 24 Sep 24 00:56 UTC |
	|         | disable-driver-mounts-319683                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-465341 | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC | 24 Sep 24 00:57 UTC |
	|         | default-k8s-diff-port-465341                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-650507            | embed-certs-650507           | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC | 24 Sep 24 00:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-650507                                  | embed-certs-650507           | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-465341  | default-k8s-diff-port-465341 | jenkins | v1.34.0 | 24 Sep 24 00:57 UTC | 24 Sep 24 00:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-465341 | jenkins | v1.34.0 | 24 Sep 24 00:57 UTC |                     |
	|         | default-k8s-diff-port-465341                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-674057                  | no-preload-674057            | jenkins | v1.34.0 | 24 Sep 24 00:58 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-674057                                   | no-preload-674057            | jenkins | v1.34.0 | 24 Sep 24 00:58 UTC | 24 Sep 24 01:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-650507                 | embed-certs-650507           | jenkins | v1.34.0 | 24 Sep 24 00:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-171598        | old-k8s-version-171598       | jenkins | v1.34.0 | 24 Sep 24 00:59 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-650507                                  | embed-certs-650507           | jenkins | v1.34.0 | 24 Sep 24 00:59 UTC | 24 Sep 24 01:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-465341       | default-k8s-diff-port-465341 | jenkins | v1.34.0 | 24 Sep 24 00:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-465341 | jenkins | v1.34.0 | 24 Sep 24 01:00 UTC | 24 Sep 24 01:08 UTC |
	|         | default-k8s-diff-port-465341                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-171598                              | old-k8s-version-171598       | jenkins | v1.34.0 | 24 Sep 24 01:00 UTC | 24 Sep 24 01:00 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-171598             | old-k8s-version-171598       | jenkins | v1.34.0 | 24 Sep 24 01:00 UTC | 24 Sep 24 01:00 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-171598                              | old-k8s-version-171598       | jenkins | v1.34.0 | 24 Sep 24 01:00 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 01:00:40
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 01:00:40.983605   61989 out.go:345] Setting OutFile to fd 1 ...
	I0924 01:00:40.983716   61989 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 01:00:40.983722   61989 out.go:358] Setting ErrFile to fd 2...
	I0924 01:00:40.983728   61989 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 01:00:40.983918   61989 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7623/.minikube/bin
	I0924 01:00:40.984500   61989 out.go:352] Setting JSON to false
	I0924 01:00:40.985412   61989 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6185,"bootTime":1727133456,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0924 01:00:40.985513   61989 start.go:139] virtualization: kvm guest
	I0924 01:00:40.987848   61989 out.go:177] * [old-k8s-version-171598] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0924 01:00:40.989366   61989 out.go:177]   - MINIKUBE_LOCATION=19696
	I0924 01:00:40.989467   61989 notify.go:220] Checking for updates...
	I0924 01:00:40.992462   61989 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 01:00:40.994144   61989 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 01:00:40.995782   61989 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7623/.minikube
	I0924 01:00:40.997503   61989 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0924 01:00:40.999038   61989 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 01:00:41.000959   61989 config.go:182] Loaded profile config "old-k8s-version-171598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0924 01:00:41.001315   61989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:00:41.001388   61989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:00:41.017304   61989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41055
	I0924 01:00:41.017751   61989 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:00:41.018320   61989 main.go:141] libmachine: Using API Version  1
	I0924 01:00:41.018355   61989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:00:41.018708   61989 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:00:41.018964   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:00:41.021075   61989 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0924 01:00:41.022764   61989 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 01:00:41.023156   61989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:00:41.023204   61989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:00:41.038764   61989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40545
	I0924 01:00:41.039238   61989 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:00:41.039828   61989 main.go:141] libmachine: Using API Version  1
	I0924 01:00:41.039856   61989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:00:41.040272   61989 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:00:41.040569   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:00:41.078622   61989 out.go:177] * Using the kvm2 driver based on existing profile
	I0924 01:00:41.079930   61989 start.go:297] selected driver: kvm2
	I0924 01:00:41.079945   61989 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-171598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-171598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 01:00:41.080076   61989 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 01:00:41.080841   61989 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 01:00:41.080927   61989 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19696-7623/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0924 01:00:41.096851   61989 install.go:137] /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0924 01:00:41.097306   61989 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 01:00:41.097345   61989 cni.go:84] Creating CNI manager for ""
	I0924 01:00:41.097410   61989 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:00:41.097465   61989 start.go:340] cluster config:
	{Name:old-k8s-version-171598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-171598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 01:00:41.097610   61989 iso.go:125] acquiring lock: {Name:mk8a983e49e920eac32e0f79c34143c3d478115e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 01:00:41.099797   61989 out.go:177] * Starting "old-k8s-version-171598" primary control-plane node in "old-k8s-version-171598" cluster
	I0924 01:00:39.376584   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:00:41.101644   61989 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0924 01:00:41.101691   61989 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0924 01:00:41.101704   61989 cache.go:56] Caching tarball of preloaded images
	I0924 01:00:41.101801   61989 preload.go:172] Found /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0924 01:00:41.101816   61989 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0924 01:00:41.101922   61989 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/config.json ...
	I0924 01:00:41.102126   61989 start.go:360] acquireMachinesLock for old-k8s-version-171598: {Name:mkdd0eb053efc5a47fd01c8410d7f603dccb8d0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 01:00:45.456606   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:00:48.528618   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:00:54.608639   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:00:57.680645   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:03.760641   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:06.832676   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:12.912635   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:15.984629   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:22.064669   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:25.136609   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:31.216643   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:34.288667   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:40.368636   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:43.440700   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:49.520634   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:52.592658   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:58.672637   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:01.744679   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:07.824597   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:10.896693   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:16.976656   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:20.048675   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:26.128638   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:29.200595   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:35.280645   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:38.352665   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:44.432606   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:47.504721   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:53.584645   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:56.656617   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:03:02.736686   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:03:05.808671   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:03:11.888586   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:03:14.960688   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:03:21.040639   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:03:24.112705   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:03:30.192631   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:03:33.264655   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:03:36.269218   61323 start.go:364] duration metric: took 4m25.932369998s to acquireMachinesLock for "embed-certs-650507"
	I0924 01:03:36.269290   61323 start.go:96] Skipping create...Using existing machine configuration
	I0924 01:03:36.269298   61323 fix.go:54] fixHost starting: 
	I0924 01:03:36.269661   61323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:03:36.269714   61323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:03:36.285429   61323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45085
	I0924 01:03:36.285943   61323 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:03:36.286516   61323 main.go:141] libmachine: Using API Version  1
	I0924 01:03:36.286557   61323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:03:36.286885   61323 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:03:36.287078   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:03:36.287213   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetState
	I0924 01:03:36.288895   61323 fix.go:112] recreateIfNeeded on embed-certs-650507: state=Stopped err=<nil>
	I0924 01:03:36.288917   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	W0924 01:03:36.289113   61323 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 01:03:36.291435   61323 out.go:177] * Restarting existing kvm2 VM for "embed-certs-650507" ...
	I0924 01:03:36.266390   61070 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 01:03:36.266435   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetMachineName
	I0924 01:03:36.266788   61070 buildroot.go:166] provisioning hostname "no-preload-674057"
	I0924 01:03:36.266816   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetMachineName
	I0924 01:03:36.267022   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:03:36.269105   61070 machine.go:96] duration metric: took 4m37.426687547s to provisionDockerMachine
	I0924 01:03:36.269142   61070 fix.go:56] duration metric: took 4m37.448766856s for fixHost
	I0924 01:03:36.269148   61070 start.go:83] releasing machines lock for "no-preload-674057", held for 4m37.448847609s
	W0924 01:03:36.269167   61070 start.go:714] error starting host: provision: host is not running
	W0924 01:03:36.269264   61070 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0924 01:03:36.269274   61070 start.go:729] Will try again in 5 seconds ...
	I0924 01:03:36.293006   61323 main.go:141] libmachine: (embed-certs-650507) Calling .Start
	I0924 01:03:36.293199   61323 main.go:141] libmachine: (embed-certs-650507) Ensuring networks are active...
	I0924 01:03:36.294032   61323 main.go:141] libmachine: (embed-certs-650507) Ensuring network default is active
	I0924 01:03:36.294359   61323 main.go:141] libmachine: (embed-certs-650507) Ensuring network mk-embed-certs-650507 is active
	I0924 01:03:36.294718   61323 main.go:141] libmachine: (embed-certs-650507) Getting domain xml...
	I0924 01:03:36.295407   61323 main.go:141] libmachine: (embed-certs-650507) Creating domain...
	I0924 01:03:37.516049   61323 main.go:141] libmachine: (embed-certs-650507) Waiting to get IP...
	I0924 01:03:37.516959   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:37.517374   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:37.517443   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:37.517352   62594 retry.go:31] will retry after 278.072635ms: waiting for machine to come up
	I0924 01:03:37.796796   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:37.797276   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:37.797301   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:37.797242   62594 retry.go:31] will retry after 387.413297ms: waiting for machine to come up
	I0924 01:03:38.185869   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:38.186239   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:38.186258   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:38.186193   62594 retry.go:31] will retry after 363.798568ms: waiting for machine to come up
	I0924 01:03:38.551772   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:38.552181   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:38.552221   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:38.552122   62594 retry.go:31] will retry after 392.798012ms: waiting for machine to come up
	I0924 01:03:38.946523   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:38.947069   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:38.947097   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:38.947018   62594 retry.go:31] will retry after 541.413772ms: waiting for machine to come up
	I0924 01:03:39.489873   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:39.490278   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:39.490307   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:39.490226   62594 retry.go:31] will retry after 804.62107ms: waiting for machine to come up
	I0924 01:03:41.271024   61070 start.go:360] acquireMachinesLock for no-preload-674057: {Name:mkdd0eb053efc5a47fd01c8410d7f603dccb8d0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 01:03:40.296290   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:40.296775   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:40.296806   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:40.296726   62594 retry.go:31] will retry after 882.018637ms: waiting for machine to come up
	I0924 01:03:41.180799   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:41.181242   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:41.181263   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:41.181197   62594 retry.go:31] will retry after 961.194045ms: waiting for machine to come up
	I0924 01:03:42.143878   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:42.144354   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:42.144379   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:42.144270   62594 retry.go:31] will retry after 1.647837023s: waiting for machine to come up
	I0924 01:03:43.793458   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:43.793892   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:43.793933   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:43.793873   62594 retry.go:31] will retry after 1.751902059s: waiting for machine to come up
	I0924 01:03:45.547905   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:45.548356   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:45.548388   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:45.548313   62594 retry.go:31] will retry after 2.380106471s: waiting for machine to come up
	I0924 01:03:47.931021   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:47.931513   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:47.931537   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:47.931456   62594 retry.go:31] will retry after 2.395516641s: waiting for machine to come up
	I0924 01:03:50.328214   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:50.328766   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:50.328791   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:50.328729   62594 retry.go:31] will retry after 4.41219579s: waiting for machine to come up
	I0924 01:03:54.745159   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.745572   61323 main.go:141] libmachine: (embed-certs-650507) Found IP for machine: 192.168.39.104
	I0924 01:03:54.745606   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has current primary IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.745615   61323 main.go:141] libmachine: (embed-certs-650507) Reserving static IP address...
	I0924 01:03:54.746020   61323 main.go:141] libmachine: (embed-certs-650507) Reserved static IP address: 192.168.39.104
	I0924 01:03:54.746042   61323 main.go:141] libmachine: (embed-certs-650507) Waiting for SSH to be available...
	I0924 01:03:54.746067   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "embed-certs-650507", mac: "52:54:00:46:07:2d", ip: "192.168.39.104"} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:54.746134   61323 main.go:141] libmachine: (embed-certs-650507) DBG | skip adding static IP to network mk-embed-certs-650507 - found existing host DHCP lease matching {name: "embed-certs-650507", mac: "52:54:00:46:07:2d", ip: "192.168.39.104"}
	I0924 01:03:54.746159   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Getting to WaitForSSH function...
	I0924 01:03:54.748464   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.748871   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:54.748906   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.749083   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Using SSH client type: external
	I0924 01:03:54.749118   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Using SSH private key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa (-rw-------)
	I0924 01:03:54.749153   61323 main.go:141] libmachine: (embed-certs-650507) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 01:03:54.749165   61323 main.go:141] libmachine: (embed-certs-650507) DBG | About to run SSH command:
	I0924 01:03:54.749177   61323 main.go:141] libmachine: (embed-certs-650507) DBG | exit 0
	I0924 01:03:54.872532   61323 main.go:141] libmachine: (embed-certs-650507) DBG | SSH cmd err, output: <nil>: 
	I0924 01:03:54.872869   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetConfigRaw
	I0924 01:03:54.873480   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetIP
	I0924 01:03:54.876545   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.876922   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:54.876953   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.877204   61323 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/embed-certs-650507/config.json ...
	I0924 01:03:54.877443   61323 machine.go:93] provisionDockerMachine start ...
	I0924 01:03:54.877467   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:03:54.877683   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:54.879873   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.880200   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:54.880221   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.880375   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:03:54.880546   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:54.880681   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:54.880866   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:03:54.881002   61323 main.go:141] libmachine: Using SSH client type: native
	I0924 01:03:54.881194   61323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0924 01:03:54.881207   61323 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 01:03:54.984605   61323 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 01:03:54.984636   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetMachineName
	I0924 01:03:54.984922   61323 buildroot.go:166] provisioning hostname "embed-certs-650507"
	I0924 01:03:54.984948   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetMachineName
	I0924 01:03:54.985185   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:54.988284   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.988699   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:54.988725   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.988857   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:03:54.989069   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:54.989344   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:54.989529   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:03:54.989731   61323 main.go:141] libmachine: Using SSH client type: native
	I0924 01:03:54.989899   61323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0924 01:03:54.989913   61323 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-650507 && echo "embed-certs-650507" | sudo tee /etc/hostname
	I0924 01:03:55.106214   61323 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-650507
	
	I0924 01:03:55.106273   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:55.109000   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.109310   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.109334   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.109498   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:03:55.109646   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.109839   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.109989   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:03:55.110123   61323 main.go:141] libmachine: Using SSH client type: native
	I0924 01:03:55.110303   61323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0924 01:03:55.110318   61323 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-650507' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-650507/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-650507' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 01:03:55.220699   61323 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 01:03:55.220738   61323 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19696-7623/.minikube CaCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19696-7623/.minikube}
	I0924 01:03:55.220755   61323 buildroot.go:174] setting up certificates
	I0924 01:03:55.220763   61323 provision.go:84] configureAuth start
	I0924 01:03:55.220771   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetMachineName
	I0924 01:03:55.221112   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetIP
	I0924 01:03:55.224166   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.224603   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.224634   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.224839   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:55.226847   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.227167   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.227194   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.227308   61323 provision.go:143] copyHostCerts
	I0924 01:03:55.227386   61323 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem, removing ...
	I0924 01:03:55.227409   61323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0924 01:03:55.227490   61323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem (1082 bytes)
	I0924 01:03:55.227641   61323 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem, removing ...
	I0924 01:03:55.227653   61323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0924 01:03:55.227695   61323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem (1123 bytes)
	I0924 01:03:55.227781   61323 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem, removing ...
	I0924 01:03:55.227791   61323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0924 01:03:55.227826   61323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem (1679 bytes)
	I0924 01:03:55.227909   61323 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem org=jenkins.embed-certs-650507 san=[127.0.0.1 192.168.39.104 embed-certs-650507 localhost minikube]
	I0924 01:03:55.917061   61699 start.go:364] duration metric: took 3m46.693519233s to acquireMachinesLock for "default-k8s-diff-port-465341"
	I0924 01:03:55.917135   61699 start.go:96] Skipping create...Using existing machine configuration
	I0924 01:03:55.917144   61699 fix.go:54] fixHost starting: 
	I0924 01:03:55.917553   61699 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:03:55.917606   61699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:03:55.937566   61699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37613
	I0924 01:03:55.937971   61699 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:03:55.938529   61699 main.go:141] libmachine: Using API Version  1
	I0924 01:03:55.938556   61699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:03:55.938923   61699 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:03:55.939182   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:03:55.939365   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetState
	I0924 01:03:55.941155   61699 fix.go:112] recreateIfNeeded on default-k8s-diff-port-465341: state=Stopped err=<nil>
	I0924 01:03:55.941197   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	W0924 01:03:55.941417   61699 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 01:03:55.943640   61699 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-465341" ...
	I0924 01:03:55.309866   61323 provision.go:177] copyRemoteCerts
	I0924 01:03:55.309928   61323 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 01:03:55.309955   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:55.312946   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.313365   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.313388   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.313638   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:03:55.313889   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.314062   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:03:55.314206   61323 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa Username:docker}
	I0924 01:03:55.394427   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0924 01:03:55.420595   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0924 01:03:55.444377   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 01:03:55.467261   61323 provision.go:87] duration metric: took 246.485242ms to configureAuth
	I0924 01:03:55.467302   61323 buildroot.go:189] setting minikube options for container-runtime
	I0924 01:03:55.467483   61323 config.go:182] Loaded profile config "embed-certs-650507": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 01:03:55.467552   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:55.470146   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.470539   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.470572   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.470719   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:03:55.470961   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.471101   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.471299   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:03:55.471450   61323 main.go:141] libmachine: Using SSH client type: native
	I0924 01:03:55.471653   61323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0924 01:03:55.471676   61323 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 01:03:55.688189   61323 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 01:03:55.688218   61323 machine.go:96] duration metric: took 810.761675ms to provisionDockerMachine
	I0924 01:03:55.688230   61323 start.go:293] postStartSetup for "embed-certs-650507" (driver="kvm2")
	I0924 01:03:55.688244   61323 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 01:03:55.688266   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:03:55.688659   61323 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 01:03:55.688690   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:55.691375   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.691761   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.691791   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.691881   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:03:55.692105   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.692309   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:03:55.692453   61323 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa Username:docker}
	I0924 01:03:55.775412   61323 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 01:03:55.779423   61323 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 01:03:55.779448   61323 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/addons for local assets ...
	I0924 01:03:55.779536   61323 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/files for local assets ...
	I0924 01:03:55.779629   61323 filesync.go:149] local asset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> 147932.pem in /etc/ssl/certs
	I0924 01:03:55.779742   61323 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 01:03:55.788717   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /etc/ssl/certs/147932.pem (1708 bytes)
	I0924 01:03:55.811673   61323 start.go:296] duration metric: took 123.428914ms for postStartSetup
	I0924 01:03:55.811717   61323 fix.go:56] duration metric: took 19.542419045s for fixHost
	I0924 01:03:55.811743   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:55.814745   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.815034   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.815062   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.815247   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:03:55.815449   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.815634   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.815851   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:03:55.816012   61323 main.go:141] libmachine: Using SSH client type: native
	I0924 01:03:55.816168   61323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0924 01:03:55.816178   61323 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 01:03:55.916845   61323 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727139835.894204557
	
	I0924 01:03:55.916883   61323 fix.go:216] guest clock: 1727139835.894204557
	I0924 01:03:55.916896   61323 fix.go:229] Guest: 2024-09-24 01:03:55.894204557 +0000 UTC Remote: 2024-09-24 01:03:55.811721448 +0000 UTC m=+285.612741728 (delta=82.483109ms)
	I0924 01:03:55.916935   61323 fix.go:200] guest clock delta is within tolerance: 82.483109ms
	I0924 01:03:55.916945   61323 start.go:83] releasing machines lock for "embed-certs-650507", held for 19.6476761s
	I0924 01:03:55.916990   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:03:55.917314   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetIP
	I0924 01:03:55.920105   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.920550   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.920583   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.920832   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:03:55.921327   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:03:55.921510   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:03:55.921578   61323 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 01:03:55.921634   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:55.921747   61323 ssh_runner.go:195] Run: cat /version.json
	I0924 01:03:55.921771   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:55.924238   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.924430   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.924717   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.924741   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.924775   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.924792   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.924953   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:03:55.925061   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:03:55.925153   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.925277   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.925360   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:03:55.925439   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:03:55.925582   61323 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa Username:docker}
	I0924 01:03:55.925626   61323 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa Username:docker}
	I0924 01:03:56.005229   61323 ssh_runner.go:195] Run: systemctl --version
	I0924 01:03:56.046189   61323 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 01:03:56.187701   61323 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 01:03:56.193313   61323 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 01:03:56.193379   61323 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 01:03:56.209278   61323 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 01:03:56.209298   61323 start.go:495] detecting cgroup driver to use...
	I0924 01:03:56.209363   61323 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 01:03:56.226995   61323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 01:03:56.241102   61323 docker.go:217] disabling cri-docker service (if available) ...
	I0924 01:03:56.241160   61323 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 01:03:56.255002   61323 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 01:03:56.269805   61323 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 01:03:56.387382   61323 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 01:03:56.545138   61323 docker.go:233] disabling docker service ...
	I0924 01:03:56.545220   61323 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 01:03:56.559017   61323 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 01:03:56.571939   61323 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 01:03:56.694139   61323 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 01:03:56.811253   61323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 01:03:56.825480   61323 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 01:03:56.842777   61323 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 01:03:56.842830   61323 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:03:56.852387   61323 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 01:03:56.852447   61323 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:03:56.862702   61323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:03:56.872790   61323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:03:56.882864   61323 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 01:03:56.893029   61323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:03:56.903314   61323 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:03:56.923491   61323 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:03:56.933424   61323 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 01:03:56.944496   61323 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 01:03:56.944561   61323 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 01:03:56.957077   61323 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 01:03:56.968602   61323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:03:57.080955   61323 ssh_runner.go:195] Run: sudo systemctl restart crio
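For context, the sequence above checks the bridge netfilter sysctl and, when the key is missing, loads br_netfilter and enables IPv4 forwarding before restarting CRI-O. A minimal local Go sketch of that fallback pattern (a hypothetical helper, not minikube's ssh_runner-based implementation, and it needs root to have any effect):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the sequence in the log: try to read the
// bridge-nf-call-iptables sysctl, and if that fails, load br_netfilter and
// then enable IPv4 forwarding.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// Key missing: the bridge module is probably not loaded yet.
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644)
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}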
	I0924 01:03:57.179826   61323 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 01:03:57.179900   61323 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 01:03:57.184652   61323 start.go:563] Will wait 60s for crictl version
	I0924 01:03:57.184716   61323 ssh_runner.go:195] Run: which crictl
	I0924 01:03:57.190300   61323 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 01:03:57.239310   61323 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 01:03:57.239371   61323 ssh_runner.go:195] Run: crio --version
	I0924 01:03:57.266833   61323 ssh_runner.go:195] Run: crio --version
	I0924 01:03:57.301876   61323 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 01:03:55.945290   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .Start
	I0924 01:03:55.945498   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Ensuring networks are active...
	I0924 01:03:55.946346   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Ensuring network default is active
	I0924 01:03:55.946726   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Ensuring network mk-default-k8s-diff-port-465341 is active
	I0924 01:03:55.947152   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Getting domain xml...
	I0924 01:03:55.947872   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Creating domain...
	I0924 01:03:57.236194   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting to get IP...
	I0924 01:03:57.237037   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:03:57.237445   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:03:57.237497   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:03:57.237413   62713 retry.go:31] will retry after 286.244795ms: waiting for machine to come up
	I0924 01:03:57.525009   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:03:57.525595   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:03:57.525621   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:03:57.525548   62713 retry.go:31] will retry after 273.807213ms: waiting for machine to come up
	I0924 01:03:57.801217   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:03:57.801734   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:03:57.801756   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:03:57.801701   62713 retry.go:31] will retry after 371.291567ms: waiting for machine to come up
	I0924 01:03:58.174283   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:03:58.174746   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:03:58.174781   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:03:58.174692   62713 retry.go:31] will retry after 595.157579ms: waiting for machine to come up
	I0924 01:03:58.771428   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:03:58.771900   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:03:58.771925   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:03:58.771862   62713 retry.go:31] will retry after 734.305784ms: waiting for machine to come up
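The retry.go lines above poll for the VM's IP address with a growing, jittered delay between attempts. A minimal sketch of that retry-with-backoff pattern, using a made-up lookupIP stand-in rather than the real libvirt DHCP-lease query:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a stand-in for querying the hypervisor's DHCP leases; it only
// exists to make the sketch self-contained.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP retries lookupIP with a randomized, growing delay, much like the
// "will retry after ...: waiting for machine to come up" messages above.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2 // grow the base delay between attempts
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	if _, err := waitForIP(3 * time.Second); err != nil {
		fmt.Println(err)
	}
}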
	I0924 01:03:57.303135   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetIP
	I0924 01:03:57.306110   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:57.306598   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:57.306624   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:57.306783   61323 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0924 01:03:57.310829   61323 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:03:57.322605   61323 kubeadm.go:883] updating cluster {Name:embed-certs-650507 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-650507 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 01:03:57.322715   61323 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 01:03:57.322761   61323 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 01:03:57.358040   61323 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0924 01:03:57.358104   61323 ssh_runner.go:195] Run: which lz4
	I0924 01:03:57.361948   61323 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 01:03:57.365911   61323 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 01:03:57.365950   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0924 01:03:58.651636   61323 crio.go:462] duration metric: took 1.289721413s to copy over tarball
	I0924 01:03:58.651708   61323 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0924 01:03:59.507803   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:03:59.508308   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:03:59.508356   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:03:59.508237   62713 retry.go:31] will retry after 875.394603ms: waiting for machine to come up
	I0924 01:04:00.385279   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:00.385713   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:04:00.385748   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:04:00.385655   62713 retry.go:31] will retry after 885.980109ms: waiting for machine to come up
	I0924 01:04:01.273114   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:01.273545   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:04:01.273590   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:04:01.273535   62713 retry.go:31] will retry after 935.451975ms: waiting for machine to come up
	I0924 01:04:02.210920   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:02.211399   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:04:02.211423   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:04:02.211331   62713 retry.go:31] will retry after 1.254573538s: waiting for machine to come up
	I0924 01:04:03.467027   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:03.467593   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:04:03.467626   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:04:03.467488   62713 retry.go:31] will retry after 2.044247818s: waiting for machine to come up
	I0924 01:04:00.805580   61323 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.153837858s)
	I0924 01:04:00.805608   61323 crio.go:469] duration metric: took 2.153947595s to extract the tarball
	I0924 01:04:00.805617   61323 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0924 01:04:00.846074   61323 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 01:04:00.895803   61323 crio.go:514] all images are preloaded for cri-o runtime.
	I0924 01:04:00.895833   61323 cache_images.go:84] Images are preloaded, skipping loading
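The preload decision above hinges on listing the runtime's images through crictl and looking for the expected kube-apiserver tag. A rough Go sketch of that check; the JSON field names follow crictl's output as I understand it and should be treated as an assumption:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// crictlImages models just the part of `crictl images --output json` that the
// check needs: each image's repo tags.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether any listed image carries the wanted tag, e.g.
// "registry.k8s.io/kube-apiserver:v1.31.1".
func hasImage(wanted string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			if strings.EqualFold(tag, wanted) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.1")
	fmt.Println(ok, err)
}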
	I0924 01:04:00.895842   61323 kubeadm.go:934] updating node { 192.168.39.104 8443 v1.31.1 crio true true} ...
	I0924 01:04:00.895966   61323 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-650507 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-650507 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 01:04:00.896041   61323 ssh_runner.go:195] Run: crio config
	I0924 01:04:00.941958   61323 cni.go:84] Creating CNI manager for ""
	I0924 01:04:00.941985   61323 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:04:00.941998   61323 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 01:04:00.942029   61323 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.104 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-650507 NodeName:embed-certs-650507 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 01:04:00.942202   61323 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.104
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-650507"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.104
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.104"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 01:04:00.942292   61323 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 01:04:00.952748   61323 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 01:04:00.952853   61323 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 01:04:00.962984   61323 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0924 01:04:00.980030   61323 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 01:04:01.001571   61323 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
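The combined kubeadm/kubelet/kube-proxy document shown above is rendered from the cluster settings and then copied to /var/tmp/minikube/kubeadm.yaml.new. A minimal sketch, with a made-up, heavily trimmed template, of how such a document can be rendered with text/template (not the generator minikube actually uses):

package main

import (
	"os"
	"text/template"
)

// clusterParams holds just the values the toy template below needs; the real
// generator threads through many more fields.
type clusterParams struct {
	ClusterName       string
	NodeIP            string
	KubernetesVersion string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: 8443
nodeRegistration:
  name: "{{.ClusterName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
controlPlaneEndpoint: control-plane.minikube.internal:8443
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	p := clusterParams{
		ClusterName:       "embed-certs-650507",
		NodeIP:            "192.168.39.104",
		KubernetesVersion: "v1.31.1",
	}
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}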
	I0924 01:04:01.018760   61323 ssh_runner.go:195] Run: grep 192.168.39.104	control-plane.minikube.internal$ /etc/hosts
	I0924 01:04:01.022770   61323 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.104	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:04:01.034816   61323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:04:01.157888   61323 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 01:04:01.175883   61323 certs.go:68] Setting up /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/embed-certs-650507 for IP: 192.168.39.104
	I0924 01:04:01.175911   61323 certs.go:194] generating shared ca certs ...
	I0924 01:04:01.175937   61323 certs.go:226] acquiring lock for ca certs: {Name:mk91de42c259e96c17f08dd58c70c00821bd0735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:04:01.176134   61323 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key
	I0924 01:04:01.176198   61323 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key
	I0924 01:04:01.176211   61323 certs.go:256] generating profile certs ...
	I0924 01:04:01.176324   61323 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/embed-certs-650507/client.key
	I0924 01:04:01.176441   61323 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/embed-certs-650507/apiserver.key.86682f38
	I0924 01:04:01.176515   61323 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/embed-certs-650507/proxy-client.key
	I0924 01:04:01.176640   61323 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem (1338 bytes)
	W0924 01:04:01.176669   61323 certs.go:480] ignoring /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793_empty.pem, impossibly tiny 0 bytes
	I0924 01:04:01.176678   61323 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem (1675 bytes)
	I0924 01:04:01.176713   61323 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem (1082 bytes)
	I0924 01:04:01.176749   61323 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem (1123 bytes)
	I0924 01:04:01.176778   61323 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem (1679 bytes)
	I0924 01:04:01.176987   61323 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem (1708 bytes)
	I0924 01:04:01.177918   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 01:04:01.221682   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 01:04:01.266005   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 01:04:01.299467   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0924 01:04:01.324598   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/embed-certs-650507/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0924 01:04:01.349526   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/embed-certs-650507/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 01:04:01.385589   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/embed-certs-650507/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 01:04:01.409713   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/embed-certs-650507/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0924 01:04:01.433745   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem --> /usr/share/ca-certificates/14793.pem (1338 bytes)
	I0924 01:04:01.457493   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /usr/share/ca-certificates/147932.pem (1708 bytes)
	I0924 01:04:01.482197   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 01:04:01.505740   61323 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 01:04:01.524029   61323 ssh_runner.go:195] Run: openssl version
	I0924 01:04:01.530147   61323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 01:04:01.541117   61323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:01.545823   61323 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:01.545894   61323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:01.551638   61323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 01:04:01.562373   61323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14793.pem && ln -fs /usr/share/ca-certificates/14793.pem /etc/ssl/certs/14793.pem"
	I0924 01:04:01.573502   61323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14793.pem
	I0924 01:04:01.578561   61323 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 23:55 /usr/share/ca-certificates/14793.pem
	I0924 01:04:01.578634   61323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14793.pem
	I0924 01:04:01.584415   61323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14793.pem /etc/ssl/certs/51391683.0"
	I0924 01:04:01.595312   61323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147932.pem && ln -fs /usr/share/ca-certificates/147932.pem /etc/ssl/certs/147932.pem"
	I0924 01:04:01.606503   61323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147932.pem
	I0924 01:04:01.611530   61323 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 23:55 /usr/share/ca-certificates/147932.pem
	I0924 01:04:01.611602   61323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147932.pem
	I0924 01:04:01.618484   61323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147932.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 01:04:01.629332   61323 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 01:04:01.634238   61323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 01:04:01.640266   61323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 01:04:01.646306   61323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 01:04:01.652510   61323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 01:04:01.658237   61323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 01:04:01.663962   61323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
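Each `openssl x509 ... -checkend 86400` call above asks whether a certificate expires within the next 24 hours. The same check expressed as a small standalone Go sketch using the standard library (the path in main is just an example):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// mirroring `openssl x509 -noout -in path -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}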
	I0924 01:04:01.669998   61323 kubeadm.go:392] StartCluster: {Name:embed-certs-650507 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-650507 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 01:04:01.670105   61323 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 01:04:01.670162   61323 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 01:04:01.706478   61323 cri.go:89] found id: ""
	I0924 01:04:01.706555   61323 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 01:04:01.717106   61323 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 01:04:01.717127   61323 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 01:04:01.717188   61323 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 01:04:01.729966   61323 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 01:04:01.730947   61323 kubeconfig.go:125] found "embed-certs-650507" server: "https://192.168.39.104:8443"
	I0924 01:04:01.732933   61323 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 01:04:01.745538   61323 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.104
	I0924 01:04:01.745581   61323 kubeadm.go:1160] stopping kube-system containers ...
	I0924 01:04:01.745594   61323 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0924 01:04:01.745649   61323 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 01:04:01.783313   61323 cri.go:89] found id: ""
	I0924 01:04:01.783423   61323 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 01:04:01.801432   61323 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 01:04:01.811282   61323 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 01:04:01.811308   61323 kubeadm.go:157] found existing configuration files:
	
	I0924 01:04:01.811371   61323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 01:04:01.820717   61323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 01:04:01.820780   61323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 01:04:01.830289   61323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 01:04:01.839383   61323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 01:04:01.839449   61323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 01:04:01.848920   61323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 01:04:01.857986   61323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 01:04:01.858045   61323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 01:04:01.867465   61323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 01:04:01.876598   61323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 01:04:01.876680   61323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
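The cleanup above greps each existing kubeconfig for the expected control-plane endpoint and removes any file that does not reference it, so kubeadm can regenerate it. A simplified Go sketch of that step (a local stand-in for the grep-then-rm sequence run over SSH):

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleConfigs removes any of the given kubeconfig files that do not
// mention the expected control-plane endpoint, as the log does with
// `grep ... || rm -f ...` for admin.conf, kubelet.conf, and friends.
func cleanStaleConfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or stale: remove so kubeadm regenerates it.
			if rmErr := os.Remove(p); rmErr != nil && !os.IsNotExist(rmErr) {
				fmt.Fprintln(os.Stderr, rmErr)
			}
		}
	}
}

func main() {
	cleanStaleConfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}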
	I0924 01:04:01.886122   61323 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 01:04:01.896245   61323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:02.004839   61323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:03.077983   61323 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.073104284s)
	I0924 01:04:03.078020   61323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:03.295254   61323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:03.369968   61323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:03.458283   61323 api_server.go:52] waiting for apiserver process to appear ...
	I0924 01:04:03.458383   61323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:03.958648   61323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:04.459039   61323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:04.958614   61323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:04.994450   61323 api_server.go:72] duration metric: took 1.536167442s to wait for apiserver process to appear ...
	I0924 01:04:04.994485   61323 api_server.go:88] waiting for apiserver healthz status ...
	I0924 01:04:04.994530   61323 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0924 01:04:04.995139   61323 api_server.go:269] stopped: https://192.168.39.104:8443/healthz: Get "https://192.168.39.104:8443/healthz": dial tcp 192.168.39.104:8443: connect: connection refused
	I0924 01:04:05.513732   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:05.514247   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:04:05.514275   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:04:05.514201   62713 retry.go:31] will retry after 2.814717647s: waiting for machine to come up
	I0924 01:04:08.331550   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:08.331964   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:04:08.331983   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:04:08.331932   62713 retry.go:31] will retry after 2.942261445s: waiting for machine to come up
	I0924 01:04:05.495090   61323 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0924 01:04:07.946057   61323 api_server.go:279] https://192.168.39.104:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 01:04:07.946116   61323 api_server.go:103] status: https://192.168.39.104:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 01:04:07.946135   61323 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0924 01:04:08.018665   61323 api_server.go:279] https://192.168.39.104:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:08.018711   61323 api_server.go:103] status: https://192.168.39.104:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:08.018729   61323 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0924 01:04:08.027105   61323 api_server.go:279] https://192.168.39.104:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:08.027144   61323 api_server.go:103] status: https://192.168.39.104:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:08.494630   61323 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0924 01:04:08.500471   61323 api_server.go:279] https://192.168.39.104:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:08.500494   61323 api_server.go:103] status: https://192.168.39.104:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:08.995055   61323 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0924 01:04:09.017236   61323 api_server.go:279] https://192.168.39.104:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:09.017272   61323 api_server.go:103] status: https://192.168.39.104:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:09.494769   61323 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0924 01:04:09.500285   61323 api_server.go:279] https://192.168.39.104:8443/healthz returned 200:
	ok
	I0924 01:04:09.507440   61323 api_server.go:141] control plane version: v1.31.1
	I0924 01:04:09.507470   61323 api_server.go:131] duration metric: took 4.512953508s to wait for apiserver health ...
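The wait logged above is a plain poll of the apiserver's /healthz endpoint: it keeps returning 500 while poststarthook/rbac/bootstrap-roles is still pending, then flips to 200. A minimal Go sketch of that style of poll (illustrative only, not minikube's api_server.go; the address is taken from the log and TLS verification is skipped for brevity):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.39.104:8443/healthz")
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy")
                    return
                }
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for apiserver health")
    }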
	I0924 01:04:09.507478   61323 cni.go:84] Creating CNI manager for ""
	I0924 01:04:09.507485   61323 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:04:09.509661   61323 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 01:04:09.511104   61323 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 01:04:09.529080   61323 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
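The scp above drops minikube's generated bridge CNI config into /etc/cni/net.d/1-k8s.conflist. The 496-byte payload itself is not reproduced in the log; a typical bridge conflist for the 10.244.0.0/16 pod CIDR used later in this run looks roughly like this (an assumption for illustration, not the actual file):

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }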
	I0924 01:04:09.567695   61323 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 01:04:09.579425   61323 system_pods.go:59] 8 kube-system pods found
	I0924 01:04:09.579470   61323 system_pods.go:61] "coredns-7c65d6cfc9-xgs6g" [b975196f-e9e6-4e30-a49b-8d3031f73a21] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0924 01:04:09.579489   61323 system_pods.go:61] "etcd-embed-certs-650507" [c24d7e21-08a8-42bd-9def-1808d8a58e07] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0924 01:04:09.579501   61323 system_pods.go:61] "kube-apiserver-embed-certs-650507" [f1de6ed5-a87f-4d1d-8feb-d0f80851b5b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0924 01:04:09.579509   61323 system_pods.go:61] "kube-controller-manager-embed-certs-650507" [d0d454bf-b9d3-4dcb-957c-f1329e4e9e98] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0924 01:04:09.579516   61323 system_pods.go:61] "kube-proxy-qd4lg" [f06c009f-3c62-4e54-82fd-ca468fb05bbc] Running
	I0924 01:04:09.579523   61323 system_pods.go:61] "kube-scheduler-embed-certs-650507" [e4931370-821e-4289-9b2b-9b46d9f8394e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0924 01:04:09.579532   61323 system_pods.go:61] "metrics-server-6867b74b74-pc28v" [688d7bbe-9fee-450f-aecf-bbb3413a3633] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:04:09.579536   61323 system_pods.go:61] "storage-provisioner" [9e354a3c-e4f1-46e1-b5fb-de8243f41c29] Running
	I0924 01:04:09.579542   61323 system_pods.go:74] duration metric: took 11.824796ms to wait for pod list to return data ...
	I0924 01:04:09.579550   61323 node_conditions.go:102] verifying NodePressure condition ...
	I0924 01:04:09.584175   61323 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 01:04:09.584203   61323 node_conditions.go:123] node cpu capacity is 2
	I0924 01:04:09.584214   61323 node_conditions.go:105] duration metric: took 4.659859ms to run NodePressure ...
	I0924 01:04:09.584230   61323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:09.847130   61323 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0924 01:04:09.851985   61323 kubeadm.go:739] kubelet initialised
	I0924 01:04:09.852008   61323 kubeadm.go:740] duration metric: took 4.853319ms waiting for restarted kubelet to initialise ...
	I0924 01:04:09.852015   61323 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:04:09.857149   61323 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-xgs6g" in "kube-system" namespace to be "Ready" ...
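The pod_ready.go waits that follow boil down to polling each system pod's Ready condition until it reports True or the 4m0s budget expires. A rough client-go sketch of the same idea, using the coredns pod name from this run (an illustrative helper under assumed defaults, not minikube's implementation):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls the pod every 2s until its Ready condition is True.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // tolerate transient errors and keep polling
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        if err := waitPodReady(context.Background(), cs, "kube-system", "coredns-7c65d6cfc9-xgs6g", 4*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("pod is Ready")
    }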
	I0924 01:04:11.275680   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:11.276135   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:04:11.276166   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:04:11.276102   62713 retry.go:31] will retry after 3.599939746s: waiting for machine to come up
	I0924 01:04:11.865712   61323 pod_ready.go:103] pod "coredns-7c65d6cfc9-xgs6g" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:13.864779   61323 pod_ready.go:93] pod "coredns-7c65d6cfc9-xgs6g" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:13.864801   61323 pod_ready.go:82] duration metric: took 4.007625744s for pod "coredns-7c65d6cfc9-xgs6g" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:13.864809   61323 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:16.233175   61989 start.go:364] duration metric: took 3m35.131018203s to acquireMachinesLock for "old-k8s-version-171598"
	I0924 01:04:16.233254   61989 start.go:96] Skipping create...Using existing machine configuration
	I0924 01:04:16.233262   61989 fix.go:54] fixHost starting: 
	I0924 01:04:16.233733   61989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:04:16.233787   61989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:04:16.255690   61989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42181
	I0924 01:04:16.256135   61989 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:04:16.256729   61989 main.go:141] libmachine: Using API Version  1
	I0924 01:04:16.256763   61989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:04:16.257122   61989 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:04:16.257365   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:04:16.257560   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetState
	I0924 01:04:16.259055   61989 fix.go:112] recreateIfNeeded on old-k8s-version-171598: state=Stopped err=<nil>
	I0924 01:04:16.259091   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	W0924 01:04:16.259266   61989 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 01:04:16.261327   61989 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-171598" ...
	I0924 01:04:14.879977   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:14.880533   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has current primary IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:14.880563   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Found IP for machine: 192.168.61.186
	I0924 01:04:14.880596   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Reserving static IP address...
	I0924 01:04:14.881148   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-465341", mac: "52:54:00:e4:1f:79", ip: "192.168.61.186"} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:14.881171   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | skip adding static IP to network mk-default-k8s-diff-port-465341 - found existing host DHCP lease matching {name: "default-k8s-diff-port-465341", mac: "52:54:00:e4:1f:79", ip: "192.168.61.186"}
	I0924 01:04:14.881188   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Reserved static IP address: 192.168.61.186
	I0924 01:04:14.881216   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for SSH to be available...
	I0924 01:04:14.881229   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | Getting to WaitForSSH function...
	I0924 01:04:14.883679   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:14.884060   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:14.884083   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:14.884214   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | Using SSH client type: external
	I0924 01:04:14.884248   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | Using SSH private key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa (-rw-------)
	I0924 01:04:14.884276   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.186 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 01:04:14.884287   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | About to run SSH command:
	I0924 01:04:14.884298   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | exit 0
	I0924 01:04:15.012764   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | SSH cmd err, output: <nil>: 
	I0924 01:04:15.013163   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetConfigRaw
	I0924 01:04:15.013983   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetIP
	I0924 01:04:15.016664   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.017173   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:15.017207   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.017440   61699 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/config.json ...
	I0924 01:04:15.017668   61699 machine.go:93] provisionDockerMachine start ...
	I0924 01:04:15.017687   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:04:15.017915   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:15.020388   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.020816   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:15.020839   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.021074   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:15.021249   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.021513   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.021681   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:15.021850   61699 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:15.022031   61699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.186 22 <nil> <nil>}
	I0924 01:04:15.022041   61699 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 01:04:15.132672   61699 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 01:04:15.132706   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetMachineName
	I0924 01:04:15.132994   61699 buildroot.go:166] provisioning hostname "default-k8s-diff-port-465341"
	I0924 01:04:15.133025   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetMachineName
	I0924 01:04:15.133268   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:15.135929   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.136371   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:15.136399   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.136578   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:15.136850   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.137008   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.137193   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:15.137407   61699 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:15.137589   61699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.186 22 <nil> <nil>}
	I0924 01:04:15.137610   61699 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-465341 && echo "default-k8s-diff-port-465341" | sudo tee /etc/hostname
	I0924 01:04:15.262142   61699 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-465341
	
	I0924 01:04:15.262174   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:15.265359   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.265736   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:15.265761   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.265962   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:15.266176   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.266335   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.266510   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:15.266705   61699 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:15.266903   61699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.186 22 <nil> <nil>}
	I0924 01:04:15.266926   61699 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-465341' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-465341/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-465341' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 01:04:15.385085   61699 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 01:04:15.385122   61699 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19696-7623/.minikube CaCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19696-7623/.minikube}
	I0924 01:04:15.385158   61699 buildroot.go:174] setting up certificates
	I0924 01:04:15.385174   61699 provision.go:84] configureAuth start
	I0924 01:04:15.385186   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetMachineName
	I0924 01:04:15.385556   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetIP
	I0924 01:04:15.388350   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.388798   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:15.388828   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.388985   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:15.391478   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.391793   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:15.391823   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.391952   61699 provision.go:143] copyHostCerts
	I0924 01:04:15.392016   61699 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem, removing ...
	I0924 01:04:15.392045   61699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0924 01:04:15.392115   61699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem (1679 bytes)
	I0924 01:04:15.392259   61699 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem, removing ...
	I0924 01:04:15.392272   61699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0924 01:04:15.392306   61699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem (1082 bytes)
	I0924 01:04:15.392406   61699 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem, removing ...
	I0924 01:04:15.392415   61699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0924 01:04:15.392440   61699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem (1123 bytes)
	I0924 01:04:15.392503   61699 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-465341 san=[127.0.0.1 192.168.61.186 default-k8s-diff-port-465341 localhost minikube]
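The server certificate generated here is signed by the profile's CA and carries the SANs listed in the log line (127.0.0.1, 192.168.61.186, the profile name, localhost, minikube). A compact crypto/x509 sketch of issuing such a certificate, assuming local ca.pem/ca-key.pem copies and an RSA PKCS#1 CA key (file names, key type, and serial choice are assumptions, not minikube's provision code):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        caPEM, err := os.ReadFile("ca.pem") // assumed local copy of the CA cert
        if err != nil {
            log.Fatal(err)
        }
        caKeyPEM, err := os.ReadFile("ca-key.pem") // assumed local copy of the CA key
        if err != nil {
            log.Fatal(err)
        }
        caBlock, _ := pem.Decode(caPEM)
        caCert, err := x509.ParseCertificate(caBlock.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        keyBlock, _ := pem.Decode(caKeyPEM)
        caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 key
        if err != nil {
            log.Fatal(err)
        }
        serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-465341"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.186")},
            DNSNames:     []string{"default-k8s-diff-port-465341", "localhost", "minikube"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
            log.Fatal(err)
        }
    }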
	I0924 01:04:15.572588   61699 provision.go:177] copyRemoteCerts
	I0924 01:04:15.572682   61699 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 01:04:15.572718   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:15.575884   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.576356   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:15.576401   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.576627   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:15.576868   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.577099   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:15.577248   61699 sshutil.go:53] new ssh client: &{IP:192.168.61.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa Username:docker}
	I0924 01:04:15.662231   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0924 01:04:15.686800   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0924 01:04:15.709860   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 01:04:15.738063   61699 provision.go:87] duration metric: took 352.876914ms to configureAuth
	I0924 01:04:15.738105   61699 buildroot.go:189] setting minikube options for container-runtime
	I0924 01:04:15.738302   61699 config.go:182] Loaded profile config "default-k8s-diff-port-465341": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 01:04:15.738420   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:15.741231   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.741644   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:15.741693   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.741835   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:15.742036   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.742218   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.742359   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:15.742526   61699 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:15.742727   61699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.186 22 <nil> <nil>}
	I0924 01:04:15.742754   61699 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 01:04:15.986096   61699 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 01:04:15.986128   61699 machine.go:96] duration metric: took 968.446778ms to provisionDockerMachine
	I0924 01:04:15.986143   61699 start.go:293] postStartSetup for "default-k8s-diff-port-465341" (driver="kvm2")
	I0924 01:04:15.986156   61699 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 01:04:15.986183   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:04:15.986639   61699 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 01:04:15.986674   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:15.989692   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.990094   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:15.990124   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.990407   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:15.990643   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.990826   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:15.990958   61699 sshutil.go:53] new ssh client: &{IP:192.168.61.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa Username:docker}
	I0924 01:04:16.079174   61699 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 01:04:16.083139   61699 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 01:04:16.083168   61699 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/addons for local assets ...
	I0924 01:04:16.083251   61699 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/files for local assets ...
	I0924 01:04:16.083363   61699 filesync.go:149] local asset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> 147932.pem in /etc/ssl/certs
	I0924 01:04:16.083486   61699 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 01:04:16.094571   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /etc/ssl/certs/147932.pem (1708 bytes)
	I0924 01:04:16.117327   61699 start.go:296] duration metric: took 131.16913ms for postStartSetup
	I0924 01:04:16.117364   61699 fix.go:56] duration metric: took 20.200222398s for fixHost
	I0924 01:04:16.117384   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:16.120507   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:16.120857   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:16.120899   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:16.121059   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:16.121325   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:16.121511   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:16.121687   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:16.121901   61699 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:16.122100   61699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.186 22 <nil> <nil>}
	I0924 01:04:16.122113   61699 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 01:04:16.232986   61699 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727139856.205476339
	
	I0924 01:04:16.233013   61699 fix.go:216] guest clock: 1727139856.205476339
	I0924 01:04:16.233024   61699 fix.go:229] Guest: 2024-09-24 01:04:16.205476339 +0000 UTC Remote: 2024-09-24 01:04:16.117368802 +0000 UTC m=+247.038042336 (delta=88.107537ms)
	I0924 01:04:16.233086   61699 fix.go:200] guest clock delta is within tolerance: 88.107537ms
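The clock check above runs `date +%s.%N` on the guest, parses the seconds.nanoseconds value, and accepts the machine when the delta to the local clock is within tolerance (88.107537ms here). A small Go sketch of that comparison, reusing the timestamp from the log (illustrative, not minikube's fix.go):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock turns "seconds.nanoseconds" output from `date +%s.%N`
    // into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            frac := (parts[1] + "000000000")[:9] // pad the fractional part to 9 digits
            if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseGuestClock("1727139856.205476339") // value from the log
        if err != nil {
            panic(err)
        }
        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        fmt.Printf("guest clock delta: %s\n", delta) // compared against a tolerance such as 1s
    }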
	I0924 01:04:16.233094   61699 start.go:83] releasing machines lock for "default-k8s-diff-port-465341", held for 20.315992151s
	I0924 01:04:16.233133   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:04:16.233491   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetIP
	I0924 01:04:16.236719   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:16.237104   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:16.237134   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:16.237290   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:04:16.237850   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:04:16.238019   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:04:16.238116   61699 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 01:04:16.238167   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:16.238227   61699 ssh_runner.go:195] Run: cat /version.json
	I0924 01:04:16.238260   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:16.241123   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:16.241448   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:16.241598   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:16.241627   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:16.241732   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:16.241757   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:16.241916   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:16.241982   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:16.242152   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:16.242225   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:16.242351   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:16.242479   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:16.242543   61699 sshutil.go:53] new ssh client: &{IP:192.168.61.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa Username:docker}
	I0924 01:04:16.242880   61699 sshutil.go:53] new ssh client: &{IP:192.168.61.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa Username:docker}
	I0924 01:04:16.368841   61699 ssh_runner.go:195] Run: systemctl --version
	I0924 01:04:16.374990   61699 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 01:04:16.521604   61699 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 01:04:16.527198   61699 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 01:04:16.527290   61699 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 01:04:16.543251   61699 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 01:04:16.543278   61699 start.go:495] detecting cgroup driver to use...
	I0924 01:04:16.543357   61699 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 01:04:16.561775   61699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 01:04:16.576028   61699 docker.go:217] disabling cri-docker service (if available) ...
	I0924 01:04:16.576097   61699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 01:04:16.591757   61699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 01:04:16.607927   61699 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 01:04:16.753944   61699 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 01:04:16.917338   61699 docker.go:233] disabling docker service ...
	I0924 01:04:16.917401   61699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 01:04:16.935104   61699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 01:04:16.949717   61699 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 01:04:17.088275   61699 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 01:04:17.222093   61699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 01:04:17.236370   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 01:04:17.256277   61699 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 01:04:17.256360   61699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:17.266516   61699 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 01:04:17.266600   61699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:17.276647   61699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:17.288283   61699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:17.299232   61699 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 01:04:17.311336   61699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:17.329416   61699 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:17.351465   61699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
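Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with settings along these lines (an approximate reconstruction from the commands, with section headers as in a stock CRI-O drop-in, not a dump of the actual file):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]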
	I0924 01:04:17.362248   61699 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 01:04:17.372102   61699 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 01:04:17.372154   61699 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 01:04:17.392055   61699 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
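The modprobe and the echo into /proc/sys above only take effect for the running system. For reference, the usual persistent equivalents of these Kubernetes host prerequisites (not something minikube writes in this run) would be:

    # /etc/modules-load.d/k8s.conf
    br_netfilter

    # /etc/sysctl.d/k8s.conf
    net.ipv4.ip_forward = 1
    net.bridge.bridge-nf-call-iptables = 1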
	I0924 01:04:17.413641   61699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:04:17.541224   61699 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 01:04:17.655205   61699 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 01:04:17.655281   61699 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 01:04:17.660096   61699 start.go:563] Will wait 60s for crictl version
	I0924 01:04:17.660163   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:04:17.663880   61699 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 01:04:17.706878   61699 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 01:04:17.706959   61699 ssh_runner.go:195] Run: crio --version
	I0924 01:04:17.735377   61699 ssh_runner.go:195] Run: crio --version
	I0924 01:04:17.766744   61699 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 01:04:17.768253   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetIP
	I0924 01:04:17.771534   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:17.771952   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:17.771983   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:17.772230   61699 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0924 01:04:17.776486   61699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:04:17.792599   61699 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-465341 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-465341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.186 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 01:04:17.792744   61699 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 01:04:17.792813   61699 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 01:04:17.831837   61699 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0924 01:04:17.831929   61699 ssh_runner.go:195] Run: which lz4
	I0924 01:04:17.836193   61699 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 01:04:17.840562   61699 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 01:04:17.840596   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0924 01:04:15.871512   61323 pod_ready.go:93] pod "etcd-embed-certs-650507" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:15.871540   61323 pod_ready.go:82] duration metric: took 2.006723245s for pod "etcd-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:15.871552   61323 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:17.879872   61323 pod_ready.go:93] pod "kube-apiserver-embed-certs-650507" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:17.879899   61323 pod_ready.go:82] duration metric: took 2.008337801s for pod "kube-apiserver-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:17.879918   61323 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:19.888007   61323 pod_ready.go:93] pod "kube-controller-manager-embed-certs-650507" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:19.888041   61323 pod_ready.go:82] duration metric: took 2.008114424s for pod "kube-controller-manager-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:19.888056   61323 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-qd4lg" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:19.894805   61323 pod_ready.go:93] pod "kube-proxy-qd4lg" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:19.894844   61323 pod_ready.go:82] duration metric: took 6.779022ms for pod "kube-proxy-qd4lg" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:19.894862   61323 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:19.900353   61323 pod_ready.go:93] pod "kube-scheduler-embed-certs-650507" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:19.900387   61323 pod_ready.go:82] duration metric: took 5.513733ms for pod "kube-scheduler-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:19.900401   61323 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:16.262929   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .Start
	I0924 01:04:16.263123   61989 main.go:141] libmachine: (old-k8s-version-171598) Ensuring networks are active...
	I0924 01:04:16.264062   61989 main.go:141] libmachine: (old-k8s-version-171598) Ensuring network default is active
	I0924 01:04:16.264543   61989 main.go:141] libmachine: (old-k8s-version-171598) Ensuring network mk-old-k8s-version-171598 is active
	I0924 01:04:16.264954   61989 main.go:141] libmachine: (old-k8s-version-171598) Getting domain xml...
	I0924 01:04:16.265899   61989 main.go:141] libmachine: (old-k8s-version-171598) Creating domain...
	I0924 01:04:17.566157   61989 main.go:141] libmachine: (old-k8s-version-171598) Waiting to get IP...
	I0924 01:04:17.567223   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:17.567644   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:17.567724   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:17.567625   62886 retry.go:31] will retry after 301.652575ms: waiting for machine to come up
	I0924 01:04:17.871163   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:17.871700   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:17.871729   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:17.871645   62886 retry.go:31] will retry after 337.632324ms: waiting for machine to come up
	I0924 01:04:18.211081   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:18.211954   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:18.212013   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:18.211892   62886 retry.go:31] will retry after 431.70455ms: waiting for machine to come up
	I0924 01:04:18.645408   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:18.646017   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:18.646044   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:18.645958   62886 retry.go:31] will retry after 582.966569ms: waiting for machine to come up
	I0924 01:04:19.230457   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:19.230954   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:19.230980   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:19.230897   62886 retry.go:31] will retry after 720.62326ms: waiting for machine to come up
	I0924 01:04:19.953023   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:19.953570   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:19.953603   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:19.953512   62886 retry.go:31] will retry after 688.597177ms: waiting for machine to come up
	I0924 01:04:20.644150   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:20.644636   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:20.644672   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:20.644578   62886 retry.go:31] will retry after 1.084671138s: waiting for machine to come up
	I0924 01:04:19.165501   61699 crio.go:462] duration metric: took 1.329329949s to copy over tarball
	I0924 01:04:19.165575   61699 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0924 01:04:21.323478   61699 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.157877766s)
	I0924 01:04:21.323509   61699 crio.go:469] duration metric: took 2.157979404s to extract the tarball
	I0924 01:04:21.323516   61699 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0924 01:04:21.360397   61699 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 01:04:21.401282   61699 crio.go:514] all images are preloaded for cri-o runtime.
	I0924 01:04:21.401309   61699 cache_images.go:84] Images are preloaded, skipping loading
	I0924 01:04:21.401319   61699 kubeadm.go:934] updating node { 192.168.61.186 8444 v1.31.1 crio true true} ...
	I0924 01:04:21.401441   61699 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-465341 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.186
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-465341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 01:04:21.401524   61699 ssh_runner.go:195] Run: crio config
	I0924 01:04:21.447706   61699 cni.go:84] Creating CNI manager for ""
	I0924 01:04:21.447730   61699 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:04:21.447741   61699 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 01:04:21.447766   61699 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.186 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-465341 NodeName:default-k8s-diff-port-465341 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.186"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.186 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 01:04:21.447939   61699 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.186
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-465341"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.186
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.186"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 01:04:21.448022   61699 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 01:04:21.457882   61699 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 01:04:21.457967   61699 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 01:04:21.467329   61699 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0924 01:04:21.483464   61699 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 01:04:21.500880   61699 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
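
The kubeadm/kubelet/kube-proxy manifest dumped above is produced by substituting this profile's values (node name, node IP, API server port 8444, the cri-o socket, pod subnet) into a template and then copying the result to /var/tmp/minikube/kubeadm.yaml.new. The sketch below illustrates that kind of substitution with Go's text/template; the struct fields and the trimmed-down template are invented for the example and are not minikube's real template.

package main

import (
	"os"
	"text/template"
)

// params holds the per-profile values that differ between the clusters in this run.
type params struct {
	NodeName  string
	NodeIP    string
	APIPort   int
	CRISocket string
	PodSubnet string
	DNSDomain string
}

const clusterTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: control-plane.minikube.internal:{{.APIPort}}
networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(clusterTmpl))
	// Values taken from the default-k8s-diff-port-465341 profile in this log.
	_ = t.Execute(os.Stdout, params{
		NodeName:  "default-k8s-diff-port-465341",
		NodeIP:    "192.168.61.186",
		APIPort:   8444,
		CRISocket: "unix:///var/run/crio/crio.sock",
		PodSubnet: "10.244.0.0/16",
		DNSDomain: "cluster.local",
	})
}
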
	I0924 01:04:21.517179   61699 ssh_runner.go:195] Run: grep 192.168.61.186	control-plane.minikube.internal$ /etc/hosts
	I0924 01:04:21.521032   61699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.186	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:04:21.532339   61699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:04:21.655583   61699 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 01:04:21.671964   61699 certs.go:68] Setting up /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341 for IP: 192.168.61.186
	I0924 01:04:21.672019   61699 certs.go:194] generating shared ca certs ...
	I0924 01:04:21.672044   61699 certs.go:226] acquiring lock for ca certs: {Name:mk91de42c259e96c17f08dd58c70c00821bd0735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:04:21.672273   61699 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key
	I0924 01:04:21.672390   61699 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key
	I0924 01:04:21.672409   61699 certs.go:256] generating profile certs ...
	I0924 01:04:21.672536   61699 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/client.key
	I0924 01:04:21.672629   61699 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/apiserver.key.b6f5ff18
	I0924 01:04:21.672696   61699 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/proxy-client.key
	I0924 01:04:21.672940   61699 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem (1338 bytes)
	W0924 01:04:21.672987   61699 certs.go:480] ignoring /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793_empty.pem, impossibly tiny 0 bytes
	I0924 01:04:21.672999   61699 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem (1675 bytes)
	I0924 01:04:21.673029   61699 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem (1082 bytes)
	I0924 01:04:21.673060   61699 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem (1123 bytes)
	I0924 01:04:21.673091   61699 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem (1679 bytes)
	I0924 01:04:21.673133   61699 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem (1708 bytes)
	I0924 01:04:21.673884   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 01:04:21.706165   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 01:04:21.735352   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 01:04:21.763358   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0924 01:04:21.786284   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0924 01:04:21.814844   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 01:04:21.839773   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 01:04:21.866549   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0924 01:04:21.889901   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 01:04:21.914875   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem --> /usr/share/ca-certificates/14793.pem (1338 bytes)
	I0924 01:04:21.939116   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /usr/share/ca-certificates/147932.pem (1708 bytes)
	I0924 01:04:21.963264   61699 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 01:04:21.980912   61699 ssh_runner.go:195] Run: openssl version
	I0924 01:04:21.986725   61699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 01:04:21.998128   61699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:22.002832   61699 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:22.002903   61699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:22.008847   61699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 01:04:22.019274   61699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14793.pem && ln -fs /usr/share/ca-certificates/14793.pem /etc/ssl/certs/14793.pem"
	I0924 01:04:22.030110   61699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14793.pem
	I0924 01:04:22.035920   61699 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 23:55 /usr/share/ca-certificates/14793.pem
	I0924 01:04:22.035996   61699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14793.pem
	I0924 01:04:22.043505   61699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14793.pem /etc/ssl/certs/51391683.0"
	I0924 01:04:22.057224   61699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147932.pem && ln -fs /usr/share/ca-certificates/147932.pem /etc/ssl/certs/147932.pem"
	I0924 01:04:22.067596   61699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147932.pem
	I0924 01:04:22.071957   61699 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 23:55 /usr/share/ca-certificates/147932.pem
	I0924 01:04:22.072029   61699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147932.pem
	I0924 01:04:22.077495   61699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147932.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 01:04:22.087627   61699 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 01:04:22.092049   61699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 01:04:22.097908   61699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 01:04:22.103716   61699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 01:04:22.109871   61699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 01:04:22.116088   61699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 01:04:22.121760   61699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
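
Each openssl run above uses -checkend 86400 to confirm that the corresponding control-plane certificate is still valid for at least another 24 hours before it is reused. The equivalent check with Go's crypto/x509 is sketched below; it is an illustration of the same test, not minikube's code.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path remains valid for at
// least the given duration, mirroring `openssl x509 -checkend <seconds>`.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("valid for another 24h:", ok)
}
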
	I0924 01:04:22.127473   61699 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-465341 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-465341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.186 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 01:04:22.127563   61699 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 01:04:22.127613   61699 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 01:04:22.167951   61699 cri.go:89] found id: ""
	I0924 01:04:22.168054   61699 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 01:04:22.177878   61699 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 01:04:22.177898   61699 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 01:04:22.177949   61699 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 01:04:22.187116   61699 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 01:04:22.188577   61699 kubeconfig.go:125] found "default-k8s-diff-port-465341" server: "https://192.168.61.186:8444"
	I0924 01:04:22.191744   61699 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 01:04:22.200936   61699 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.186
	I0924 01:04:22.200967   61699 kubeadm.go:1160] stopping kube-system containers ...
	I0924 01:04:22.200979   61699 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0924 01:04:22.201039   61699 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 01:04:22.247804   61699 cri.go:89] found id: ""
	I0924 01:04:22.247888   61699 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 01:04:22.263853   61699 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 01:04:22.273254   61699 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 01:04:22.273271   61699 kubeadm.go:157] found existing configuration files:
	
	I0924 01:04:22.273327   61699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0924 01:04:22.281724   61699 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 01:04:22.281790   61699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 01:04:22.290823   61699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0924 01:04:22.299422   61699 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 01:04:22.299482   61699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 01:04:22.308961   61699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0924 01:04:22.317922   61699 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 01:04:22.318010   61699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 01:04:22.326980   61699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0924 01:04:22.335995   61699 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 01:04:22.336084   61699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 01:04:22.345002   61699 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 01:04:22.354302   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:22.462157   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:23.380163   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:23.610795   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:23.679134   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:23.747119   61699 api_server.go:52] waiting for apiserver process to appear ...
	I0924 01:04:23.747191   61699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:21.909834   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:24.104163   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:21.730823   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:21.731385   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:21.731411   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:21.731351   62886 retry.go:31] will retry after 1.051424847s: waiting for machine to come up
	I0924 01:04:22.784644   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:22.785194   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:22.785223   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:22.785138   62886 retry.go:31] will retry after 1.750498954s: waiting for machine to come up
	I0924 01:04:24.537680   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:24.538085   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:24.538109   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:24.538039   62886 retry.go:31] will retry after 2.015183238s: waiting for machine to come up
	I0924 01:04:24.247859   61699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:24.748076   61699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:25.248220   61699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:25.747481   61699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:25.774137   61699 api_server.go:72] duration metric: took 2.027016323s to wait for apiserver process to appear ...
	I0924 01:04:25.774167   61699 api_server.go:88] waiting for apiserver healthz status ...
	I0924 01:04:25.774194   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:25.774901   61699 api_server.go:269] stopped: https://192.168.61.186:8444/healthz: Get "https://192.168.61.186:8444/healthz": dial tcp 192.168.61.186:8444: connect: connection refused
	I0924 01:04:26.275226   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:28.290581   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 01:04:28.290621   61699 api_server.go:103] status: https://192.168.61.186:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 01:04:28.290637   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:28.321353   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 01:04:28.321386   61699 api_server.go:103] status: https://192.168.61.186:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 01:04:28.775068   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:28.779873   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:28.779896   61699 api_server.go:103] status: https://192.168.61.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:26.408349   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:28.409816   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:26.555221   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:26.555674   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:26.555695   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:26.555634   62886 retry.go:31] will retry after 2.568414115s: waiting for machine to come up
	I0924 01:04:29.127625   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:29.128130   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:29.128149   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:29.128108   62886 retry.go:31] will retry after 2.207252231s: waiting for machine to come up
	I0924 01:04:29.275326   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:29.284304   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:29.284360   61699 api_server.go:103] status: https://192.168.61.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:29.774975   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:29.779470   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:29.779503   61699 api_server.go:103] status: https://192.168.61.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:30.275137   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:30.279256   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:30.279287   61699 api_server.go:103] status: https://192.168.61.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:30.774874   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:30.779081   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:30.779110   61699 api_server.go:103] status: https://192.168.61.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:31.275163   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:31.279417   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:31.279446   61699 api_server.go:103] status: https://192.168.61.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:31.775022   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:31.780092   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 200:
	ok
	I0924 01:04:31.787643   61699 api_server.go:141] control plane version: v1.31.1
	I0924 01:04:31.787672   61699 api_server.go:131] duration metric: took 6.013498176s to wait for apiserver health ...
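
The /healthz exchange above follows the usual restart progression: connection refused while the apiserver static pod starts, 403 while the unauthenticated probe is still rejected, 500 while the remaining post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes, apiservice-discovery-controller) finish, and finally 200. A bare-bones poller for that endpoint is sketched below; it skips verification of the apiserver's self-signed serving certificate because the probe is anonymous, and the timeouts are arbitrary, not the values minikube uses.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it returns 200 OK
// or the timeout expires, printing the body of each non-200 response.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Anonymous local health probe: do not verify the self-signed serving cert.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitHealthz("https://192.168.61.186:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
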
	I0924 01:04:31.787680   61699 cni.go:84] Creating CNI manager for ""
	I0924 01:04:31.787686   61699 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:04:31.789733   61699 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 01:04:31.791140   61699 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 01:04:31.801441   61699 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0924 01:04:31.819890   61699 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 01:04:31.828128   61699 system_pods.go:59] 8 kube-system pods found
	I0924 01:04:31.828160   61699 system_pods.go:61] "coredns-7c65d6cfc9-xxdh2" [297fe292-94bf-468d-9e34-089c4a87429b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0924 01:04:31.828168   61699 system_pods.go:61] "etcd-default-k8s-diff-port-465341" [3bd68a1c-e928-40f0-927f-3cde2198cace] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0924 01:04:31.828177   61699 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-465341" [0a195b76-82ba-4d99-b5a3-ba918ab0b83d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0924 01:04:31.828186   61699 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-465341" [9d445611-60f3-4113-bc92-ea8df37ca2f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0924 01:04:31.828191   61699 system_pods.go:61] "kube-proxy-nf8mp" [cdef3aea-b1a8-438b-994f-c3212def9aea] Running
	I0924 01:04:31.828196   61699 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-465341" [4ff703b1-44cd-421a-891c-9f1e5d799026] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0924 01:04:31.828200   61699 system_pods.go:61] "metrics-server-6867b74b74-jtx6r" [d83599a7-f77d-4fbb-b76f-67d33c60b4a6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:04:31.828203   61699 system_pods.go:61] "storage-provisioner" [b09ad6ef-7517-4de2-a70c-83876efd804e] Running
	I0924 01:04:31.828209   61699 system_pods.go:74] duration metric: took 8.300337ms to wait for pod list to return data ...
	I0924 01:04:31.828215   61699 node_conditions.go:102] verifying NodePressure condition ...
	I0924 01:04:31.831528   61699 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 01:04:31.831550   61699 node_conditions.go:123] node cpu capacity is 2
	I0924 01:04:31.831561   61699 node_conditions.go:105] duration metric: took 3.341719ms to run NodePressure ...
	I0924 01:04:31.831576   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:32.101590   61699 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0924 01:04:32.105656   61699 kubeadm.go:739] kubelet initialised
	I0924 01:04:32.105679   61699 kubeadm.go:740] duration metric: took 4.062709ms waiting for restarted kubelet to initialise ...
	I0924 01:04:32.105691   61699 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:04:32.110237   61699 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-xxdh2" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:32.115057   61699 pod_ready.go:98] node "default-k8s-diff-port-465341" hosting pod "coredns-7c65d6cfc9-xxdh2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.115090   61699 pod_ready.go:82] duration metric: took 4.825694ms for pod "coredns-7c65d6cfc9-xxdh2" in "kube-system" namespace to be "Ready" ...
	E0924 01:04:32.115102   61699 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-465341" hosting pod "coredns-7c65d6cfc9-xxdh2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.115110   61699 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:32.119506   61699 pod_ready.go:98] node "default-k8s-diff-port-465341" hosting pod "etcd-default-k8s-diff-port-465341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.119534   61699 pod_ready.go:82] duration metric: took 4.415876ms for pod "etcd-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	E0924 01:04:32.119546   61699 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-465341" hosting pod "etcd-default-k8s-diff-port-465341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.119558   61699 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:32.124199   61699 pod_ready.go:98] node "default-k8s-diff-port-465341" hosting pod "kube-apiserver-default-k8s-diff-port-465341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.124248   61699 pod_ready.go:82] duration metric: took 4.660764ms for pod "kube-apiserver-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	E0924 01:04:32.124266   61699 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-465341" hosting pod "kube-apiserver-default-k8s-diff-port-465341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.124285   61699 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:32.223553   61699 pod_ready.go:98] node "default-k8s-diff-port-465341" hosting pod "kube-controller-manager-default-k8s-diff-port-465341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.223596   61699 pod_ready.go:82] duration metric: took 99.284751ms for pod "kube-controller-manager-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	E0924 01:04:32.223606   61699 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-465341" hosting pod "kube-controller-manager-default-k8s-diff-port-465341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.223613   61699 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-nf8mp" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:32.622500   61699 pod_ready.go:98] node "default-k8s-diff-port-465341" hosting pod "kube-proxy-nf8mp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.622527   61699 pod_ready.go:82] duration metric: took 398.907418ms for pod "kube-proxy-nf8mp" in "kube-system" namespace to be "Ready" ...
	E0924 01:04:32.622538   61699 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-465341" hosting pod "kube-proxy-nf8mp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.622545   61699 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:33.023370   61699 pod_ready.go:98] node "default-k8s-diff-port-465341" hosting pod "kube-scheduler-default-k8s-diff-port-465341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:33.023430   61699 pod_ready.go:82] duration metric: took 400.874003ms for pod "kube-scheduler-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	E0924 01:04:33.023458   61699 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-465341" hosting pod "kube-scheduler-default-k8s-diff-port-465341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:33.023472   61699 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:33.422810   61699 pod_ready.go:98] node "default-k8s-diff-port-465341" hosting pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:33.422841   61699 pod_ready.go:82] duration metric: took 399.35051ms for pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace to be "Ready" ...
	E0924 01:04:33.422851   61699 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-465341" hosting pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:33.422859   61699 pod_ready.go:39] duration metric: took 1.317159668s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:04:33.422874   61699 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 01:04:33.434449   61699 ops.go:34] apiserver oom_adj: -16
	I0924 01:04:33.434473   61699 kubeadm.go:597] duration metric: took 11.256568213s to restartPrimaryControlPlane
	I0924 01:04:33.434481   61699 kubeadm.go:394] duration metric: took 11.307014166s to StartCluster
	I0924 01:04:33.434501   61699 settings.go:142] acquiring lock: {Name:mk2498060bfcc1414033a486e76a223de88ed325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:04:33.434571   61699 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 01:04:33.436172   61699 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/kubeconfig: {Name:mk3b29105ccc2e600682c419fcfd45ce5d22b5fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:04:33.436515   61699 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.186 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 01:04:33.436732   61699 config.go:182] Loaded profile config "default-k8s-diff-port-465341": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 01:04:33.436686   61699 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0924 01:04:33.436809   61699 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-465341"
	I0924 01:04:33.436815   61699 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-465341"
	I0924 01:04:33.436830   61699 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-465341"
	I0924 01:04:33.436832   61699 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-465341"
	I0924 01:04:33.436864   61699 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-465341"
	W0924 01:04:33.436877   61699 addons.go:243] addon metrics-server should already be in state true
	I0924 01:04:33.436908   61699 host.go:66] Checking if "default-k8s-diff-port-465341" exists ...
	W0924 01:04:33.436842   61699 addons.go:243] addon storage-provisioner should already be in state true
	I0924 01:04:33.436935   61699 host.go:66] Checking if "default-k8s-diff-port-465341" exists ...
	I0924 01:04:33.436831   61699 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-465341"
	I0924 01:04:33.437322   61699 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:04:33.437370   61699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:04:33.437377   61699 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:04:33.437412   61699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:04:33.437458   61699 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:04:33.437483   61699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:04:33.438259   61699 out.go:177] * Verifying Kubernetes components...
	I0924 01:04:33.439923   61699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:04:33.453108   61699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37623
	I0924 01:04:33.453545   61699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38225
	I0924 01:04:33.453608   61699 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:04:33.453916   61699 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:04:33.454125   61699 main.go:141] libmachine: Using API Version  1
	I0924 01:04:33.454152   61699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:04:33.454461   61699 main.go:141] libmachine: Using API Version  1
	I0924 01:04:33.454486   61699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:04:33.454494   61699 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:04:33.454806   61699 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:04:33.455065   61699 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:04:33.455111   61699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:04:33.455360   61699 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:04:33.455404   61699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:04:33.456716   61699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41127
	I0924 01:04:33.457163   61699 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:04:33.457688   61699 main.go:141] libmachine: Using API Version  1
	I0924 01:04:33.457727   61699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:04:33.458031   61699 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:04:33.458242   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetState
	I0924 01:04:33.461814   61699 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-465341"
	W0924 01:04:33.461835   61699 addons.go:243] addon default-storageclass should already be in state true
	I0924 01:04:33.461864   61699 host.go:66] Checking if "default-k8s-diff-port-465341" exists ...
	I0924 01:04:33.462230   61699 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:04:33.462273   61699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:04:33.471783   61699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44977
	I0924 01:04:33.472043   61699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33459
	I0924 01:04:33.472300   61699 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:04:33.472550   61699 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:04:33.472858   61699 main.go:141] libmachine: Using API Version  1
	I0924 01:04:33.472875   61699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:04:33.472994   61699 main.go:141] libmachine: Using API Version  1
	I0924 01:04:33.473003   61699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:04:33.473234   61699 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:04:33.473366   61699 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:04:33.473413   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetState
	I0924 01:04:33.473503   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetState
	I0924 01:04:33.475140   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:04:33.475553   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:04:33.477287   61699 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0924 01:04:33.477293   61699 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:04:33.478708   61699 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0924 01:04:33.478720   61699 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0924 01:04:33.478737   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:33.478836   61699 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 01:04:33.478863   61699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 01:04:33.478889   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:33.478971   61699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45635
	I0924 01:04:33.479636   61699 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:04:33.480029   61699 main.go:141] libmachine: Using API Version  1
	I0924 01:04:33.480041   61699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:04:33.480396   61699 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:04:33.482306   61699 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:04:33.482343   61699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:04:33.483280   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:33.483373   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:33.483732   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:33.483769   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:33.483873   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:33.483892   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:33.483958   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:33.484111   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:33.484236   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:33.484255   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:33.484413   61699 sshutil.go:53] new ssh client: &{IP:192.168.61.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa Username:docker}
	I0924 01:04:33.484472   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:33.484738   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:33.484866   61699 sshutil.go:53] new ssh client: &{IP:192.168.61.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa Username:docker}
	I0924 01:04:33.519981   61699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37109
	I0924 01:04:33.520440   61699 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:04:33.520996   61699 main.go:141] libmachine: Using API Version  1
	I0924 01:04:33.521028   61699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:04:33.521497   61699 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:04:33.521701   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetState
	I0924 01:04:33.523331   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:04:33.523576   61699 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 01:04:33.523591   61699 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 01:04:33.523625   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:33.526668   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:33.527211   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:33.527244   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:33.527471   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:33.527702   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:33.527889   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:33.528059   61699 sshutil.go:53] new ssh client: &{IP:192.168.61.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa Username:docker}
	I0924 01:04:33.645903   61699 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 01:04:33.663805   61699 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-465341" to be "Ready" ...
	I0924 01:04:33.749720   61699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 01:04:33.751631   61699 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0924 01:04:33.751649   61699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0924 01:04:33.755330   61699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 01:04:33.812231   61699 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0924 01:04:33.812257   61699 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0924 01:04:33.847216   61699 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 01:04:33.847240   61699 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0924 01:04:33.932057   61699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 01:04:34.781871   61699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.026510893s)
	I0924 01:04:34.781939   61699 main.go:141] libmachine: Making call to close driver server
	I0924 01:04:34.781950   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .Close
	I0924 01:04:34.781887   61699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.032127769s)
	I0924 01:04:34.782009   61699 main.go:141] libmachine: Making call to close driver server
	I0924 01:04:34.782023   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .Close
	I0924 01:04:34.782293   61699 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:04:34.782309   61699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:04:34.782318   61699 main.go:141] libmachine: Making call to close driver server
	I0924 01:04:34.782326   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .Close
	I0924 01:04:34.782361   61699 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:04:34.782369   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | Closing plugin on server side
	I0924 01:04:34.782375   61699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:04:34.782389   61699 main.go:141] libmachine: Making call to close driver server
	I0924 01:04:34.782404   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .Close
	I0924 01:04:34.782629   61699 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:04:34.782643   61699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:04:34.782645   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | Closing plugin on server side
	I0924 01:04:34.782673   61699 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:04:34.782683   61699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:04:34.790740   61699 main.go:141] libmachine: Making call to close driver server
	I0924 01:04:34.790757   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .Close
	I0924 01:04:34.790990   61699 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:04:34.791010   61699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:04:34.791013   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | Closing plugin on server side
	I0924 01:04:34.871488   61699 main.go:141] libmachine: Making call to close driver server
	I0924 01:04:34.871516   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .Close
	I0924 01:04:34.871809   61699 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:04:34.871826   61699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:04:34.871834   61699 main.go:141] libmachine: Making call to close driver server
	I0924 01:04:34.871841   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .Close
	I0924 01:04:34.872103   61699 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:04:34.872125   61699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:04:34.872117   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | Closing plugin on server side
	I0924 01:04:34.872136   61699 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-465341"
	I0924 01:04:34.874133   61699 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0924 01:04:30.907606   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:33.406280   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:31.337368   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:31.338025   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:31.338128   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:31.338011   62886 retry.go:31] will retry after 4.137847727s: waiting for machine to come up
	I0924 01:04:35.478410   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.478991   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has current primary IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.479016   61989 main.go:141] libmachine: (old-k8s-version-171598) Found IP for machine: 192.168.83.3
	I0924 01:04:35.479029   61989 main.go:141] libmachine: (old-k8s-version-171598) Reserving static IP address...
	I0924 01:04:35.479586   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "old-k8s-version-171598", mac: "52:54:00:20:3c:a7", ip: "192.168.83.3"} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:35.479607   61989 main.go:141] libmachine: (old-k8s-version-171598) Reserved static IP address: 192.168.83.3
	I0924 01:04:35.479626   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | skip adding static IP to network mk-old-k8s-version-171598 - found existing host DHCP lease matching {name: "old-k8s-version-171598", mac: "52:54:00:20:3c:a7", ip: "192.168.83.3"}
	I0924 01:04:35.479643   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | Getting to WaitForSSH function...
	I0924 01:04:35.479659   61989 main.go:141] libmachine: (old-k8s-version-171598) Waiting for SSH to be available...
	I0924 01:04:35.482028   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.482377   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:35.482419   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.482499   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | Using SSH client type: external
	I0924 01:04:35.482550   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | Using SSH private key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa (-rw-------)
	I0924 01:04:35.482585   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 01:04:35.482600   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | About to run SSH command:
	I0924 01:04:35.482614   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | exit 0
	I0924 01:04:35.613364   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | SSH cmd err, output: <nil>: 
	I0924 01:04:35.613847   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetConfigRaw
	I0924 01:04:35.614543   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetIP
	I0924 01:04:35.617366   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.617742   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:35.617774   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.618068   61989 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/config.json ...
	I0924 01:04:35.618260   61989 machine.go:93] provisionDockerMachine start ...
	I0924 01:04:35.618279   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:04:35.618489   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:35.621130   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.621472   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:35.621497   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.621722   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:35.621914   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:35.622091   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:35.622354   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:35.622558   61989 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:35.622749   61989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.83.3 22 <nil> <nil>}
	I0924 01:04:35.622760   61989 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 01:04:35.736637   61989 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 01:04:35.736661   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetMachineName
	I0924 01:04:35.736943   61989 buildroot.go:166] provisioning hostname "old-k8s-version-171598"
	I0924 01:04:35.736973   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetMachineName
	I0924 01:04:35.737151   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:35.739921   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.740304   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:35.740362   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.740502   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:35.740678   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:35.740851   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:35.740994   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:35.741218   61989 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:35.741409   61989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.83.3 22 <nil> <nil>}
	I0924 01:04:35.741423   61989 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-171598 && echo "old-k8s-version-171598" | sudo tee /etc/hostname
	I0924 01:04:35.866963   61989 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-171598
	
	I0924 01:04:35.866994   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:35.870342   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.870860   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:35.870893   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.871145   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:35.871406   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:35.871638   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:35.871850   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:35.872050   61989 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:35.872253   61989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.83.3 22 <nil> <nil>}
	I0924 01:04:35.872276   61989 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-171598' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-171598/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-171598' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 01:04:36.717274   61070 start.go:364] duration metric: took 55.446152288s to acquireMachinesLock for "no-preload-674057"
	I0924 01:04:36.717335   61070 start.go:96] Skipping create...Using existing machine configuration
	I0924 01:04:36.717344   61070 fix.go:54] fixHost starting: 
	I0924 01:04:36.717781   61070 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:04:36.717821   61070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:04:36.739062   61070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46693
	I0924 01:04:36.739602   61070 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:04:36.740307   61070 main.go:141] libmachine: Using API Version  1
	I0924 01:04:36.740366   61070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:04:36.740767   61070 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:04:36.741058   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:04:36.741223   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetState
	I0924 01:04:36.743313   61070 fix.go:112] recreateIfNeeded on no-preload-674057: state=Stopped err=<nil>
	I0924 01:04:36.743339   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	W0924 01:04:36.743512   61070 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 01:04:36.745694   61070 out.go:177] * Restarting existing kvm2 VM for "no-preload-674057" ...
	I0924 01:04:35.998933   61989 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 01:04:35.998962   61989 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19696-7623/.minikube CaCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19696-7623/.minikube}
	I0924 01:04:35.998983   61989 buildroot.go:174] setting up certificates
	I0924 01:04:35.998994   61989 provision.go:84] configureAuth start
	I0924 01:04:35.999005   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetMachineName
	I0924 01:04:35.999359   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetIP
	I0924 01:04:36.002499   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.003027   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.003052   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.003167   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:36.005508   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.005773   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.005796   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.005909   61989 provision.go:143] copyHostCerts
	I0924 01:04:36.005967   61989 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem, removing ...
	I0924 01:04:36.005986   61989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0924 01:04:36.006037   61989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem (1082 bytes)
	I0924 01:04:36.006129   61989 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem, removing ...
	I0924 01:04:36.006137   61989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0924 01:04:36.006156   61989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem (1123 bytes)
	I0924 01:04:36.006209   61989 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem, removing ...
	I0924 01:04:36.006216   61989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0924 01:04:36.006237   61989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem (1679 bytes)
	I0924 01:04:36.006310   61989 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-171598 san=[127.0.0.1 192.168.83.3 localhost minikube old-k8s-version-171598]
	I0924 01:04:36.084609   61989 provision.go:177] copyRemoteCerts
	I0924 01:04:36.084671   61989 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 01:04:36.084698   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:36.087740   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.088046   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.088075   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.088278   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:36.088523   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.088716   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:36.088854   61989 sshutil.go:53] new ssh client: &{IP:192.168.83.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa Username:docker}
	I0924 01:04:36.178597   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0924 01:04:36.202768   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0924 01:04:36.225933   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0924 01:04:36.250014   61989 provision.go:87] duration metric: took 251.005829ms to configureAuth
	I0924 01:04:36.250046   61989 buildroot.go:189] setting minikube options for container-runtime
	I0924 01:04:36.250369   61989 config.go:182] Loaded profile config "old-k8s-version-171598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0924 01:04:36.250453   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:36.253290   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.253912   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.253943   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.254242   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:36.254474   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.254650   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.254764   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:36.254958   61989 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:36.255124   61989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.83.3 22 <nil> <nil>}
	I0924 01:04:36.255138   61989 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 01:04:36.472324   61989 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 01:04:36.472381   61989 machine.go:96] duration metric: took 854.106776ms to provisionDockerMachine
	I0924 01:04:36.472401   61989 start.go:293] postStartSetup for "old-k8s-version-171598" (driver="kvm2")
	I0924 01:04:36.472419   61989 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 01:04:36.472451   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:04:36.472814   61989 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 01:04:36.472849   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:36.475567   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.475941   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.475969   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.476125   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:36.476403   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.476614   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:36.476831   61989 sshutil.go:53] new ssh client: &{IP:192.168.83.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa Username:docker}
	I0924 01:04:36.562688   61989 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 01:04:36.566476   61989 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 01:04:36.566501   61989 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/addons for local assets ...
	I0924 01:04:36.566561   61989 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/files for local assets ...
	I0924 01:04:36.566635   61989 filesync.go:149] local asset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> 147932.pem in /etc/ssl/certs
	I0924 01:04:36.566724   61989 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 01:04:36.576132   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /etc/ssl/certs/147932.pem (1708 bytes)
	I0924 01:04:36.599696   61989 start.go:296] duration metric: took 127.276787ms for postStartSetup
	I0924 01:04:36.599738   61989 fix.go:56] duration metric: took 20.366477202s for fixHost
	I0924 01:04:36.599763   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:36.603462   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.603836   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.603867   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.604057   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:36.604500   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.604721   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.604878   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:36.605041   61989 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:36.605285   61989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.83.3 22 <nil> <nil>}
	I0924 01:04:36.605303   61989 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 01:04:36.717061   61989 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727139876.688490589
	
	I0924 01:04:36.717091   61989 fix.go:216] guest clock: 1727139876.688490589
	I0924 01:04:36.717102   61989 fix.go:229] Guest: 2024-09-24 01:04:36.688490589 +0000 UTC Remote: 2024-09-24 01:04:36.599742488 +0000 UTC m=+235.652611441 (delta=88.748101ms)
	I0924 01:04:36.717157   61989 fix.go:200] guest clock delta is within tolerance: 88.748101ms
	I0924 01:04:36.717165   61989 start.go:83] releasing machines lock for "old-k8s-version-171598", held for 20.483937438s
	I0924 01:04:36.717199   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:04:36.717499   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetIP
	I0924 01:04:36.720466   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.720959   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.720986   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.721189   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:04:36.721763   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:04:36.721965   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:04:36.722073   61989 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 01:04:36.722118   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:36.722187   61989 ssh_runner.go:195] Run: cat /version.json
	I0924 01:04:36.722215   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:36.725171   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.725384   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.725669   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.725694   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.725858   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:36.725970   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.726016   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.726065   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.726249   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:36.726254   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:36.726494   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.726513   61989 sshutil.go:53] new ssh client: &{IP:192.168.83.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa Username:docker}
	I0924 01:04:36.726657   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:36.727049   61989 sshutil.go:53] new ssh client: &{IP:192.168.83.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa Username:docker}
	I0924 01:04:36.845385   61989 ssh_runner.go:195] Run: systemctl --version
	I0924 01:04:36.853307   61989 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 01:04:37.001850   61989 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 01:04:37.009873   61989 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 01:04:37.009948   61989 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 01:04:37.032269   61989 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 01:04:37.032299   61989 start.go:495] detecting cgroup driver to use...
	I0924 01:04:37.032403   61989 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 01:04:37.056250   61989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 01:04:37.072827   61989 docker.go:217] disabling cri-docker service (if available) ...
	I0924 01:04:37.072903   61989 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 01:04:37.090639   61989 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 01:04:37.107525   61989 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 01:04:37.235495   61989 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 01:04:37.410971   61989 docker.go:233] disabling docker service ...
	I0924 01:04:37.411034   61989 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 01:04:37.427815   61989 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 01:04:37.444121   61989 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 01:04:37.568933   61989 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 01:04:37.700008   61989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 01:04:37.715529   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 01:04:37.736908   61989 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0924 01:04:37.736980   61989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:37.748540   61989 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 01:04:37.748590   61989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:37.759301   61989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:37.771008   61989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:37.782080   61989 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 01:04:37.793756   61989 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 01:04:37.803444   61989 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 01:04:37.803525   61989 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 01:04:37.818012   61989 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 01:04:37.829019   61989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:04:37.978885   61989 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 01:04:38.086263   61989 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 01:04:38.086353   61989 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 01:04:38.093479   61989 start.go:563] Will wait 60s for crictl version
	I0924 01:04:38.093573   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:38.097486   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 01:04:38.138781   61989 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 01:04:38.138872   61989 ssh_runner.go:195] Run: crio --version
	I0924 01:04:38.166832   61989 ssh_runner.go:195] Run: crio --version
	I0924 01:04:38.199764   61989 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0924 01:04:36.747491   61070 main.go:141] libmachine: (no-preload-674057) Calling .Start
	I0924 01:04:36.747705   61070 main.go:141] libmachine: (no-preload-674057) Ensuring networks are active...
	I0924 01:04:36.748694   61070 main.go:141] libmachine: (no-preload-674057) Ensuring network default is active
	I0924 01:04:36.749079   61070 main.go:141] libmachine: (no-preload-674057) Ensuring network mk-no-preload-674057 is active
	I0924 01:04:36.749656   61070 main.go:141] libmachine: (no-preload-674057) Getting domain xml...
	I0924 01:04:36.750535   61070 main.go:141] libmachine: (no-preload-674057) Creating domain...
	I0924 01:04:38.122450   61070 main.go:141] libmachine: (no-preload-674057) Waiting to get IP...
	I0924 01:04:38.123578   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:38.124107   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:38.124173   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:38.124079   63121 retry.go:31] will retry after 227.552582ms: waiting for machine to come up
	I0924 01:04:38.353724   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:38.354145   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:38.354169   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:38.354102   63121 retry.go:31] will retry after 322.483933ms: waiting for machine to come up
	I0924 01:04:38.678600   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:38.679091   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:38.679120   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:38.679041   63121 retry.go:31] will retry after 301.71366ms: waiting for machine to come up
	I0924 01:04:34.875511   61699 addons.go:510] duration metric: took 1.43884954s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0924 01:04:35.671396   61699 node_ready.go:53] node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:38.169131   61699 node_ready.go:53] node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:35.907681   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:38.408396   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:38.201359   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetIP
	I0924 01:04:38.204699   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:38.205122   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:38.205152   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:38.205408   61989 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0924 01:04:38.209456   61989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:04:38.222128   61989 kubeadm.go:883] updating cluster {Name:old-k8s-version-171598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-171598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 01:04:38.222254   61989 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0924 01:04:38.222300   61989 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 01:04:38.276802   61989 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0924 01:04:38.276864   61989 ssh_runner.go:195] Run: which lz4
	I0924 01:04:38.280989   61989 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 01:04:38.285108   61989 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 01:04:38.285138   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0924 01:04:39.903777   61989 crio.go:462] duration metric: took 1.62282331s to copy over tarball
	I0924 01:04:39.903900   61989 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0924 01:04:38.982586   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:38.983239   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:38.983283   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:38.983219   63121 retry.go:31] will retry after 402.217062ms: waiting for machine to come up
	I0924 01:04:39.386903   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:39.387550   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:39.387578   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:39.387483   63121 retry.go:31] will retry after 734.565994ms: waiting for machine to come up
	I0924 01:04:40.123444   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:40.123910   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:40.123940   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:40.123870   63121 retry.go:31] will retry after 704.281941ms: waiting for machine to come up
	I0924 01:04:40.829666   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:40.830217   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:40.830275   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:40.830209   63121 retry.go:31] will retry after 1.068502434s: waiting for machine to come up
	I0924 01:04:41.900192   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:41.900739   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:41.900765   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:41.900691   63121 retry.go:31] will retry after 1.087234201s: waiting for machine to come up
	I0924 01:04:42.989622   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:42.990089   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:42.990117   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:42.990036   63121 retry.go:31] will retry after 1.269273138s: waiting for machine to come up
	I0924 01:04:39.168613   61699 node_ready.go:49] node "default-k8s-diff-port-465341" has status "Ready":"True"
	I0924 01:04:39.168638   61699 node_ready.go:38] duration metric: took 5.504799687s for node "default-k8s-diff-port-465341" to be "Ready" ...
	I0924 01:04:39.168650   61699 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:04:39.175830   61699 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xxdh2" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:39.182016   61699 pod_ready.go:93] pod "coredns-7c65d6cfc9-xxdh2" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:39.182040   61699 pod_ready.go:82] duration metric: took 6.182193ms for pod "coredns-7c65d6cfc9-xxdh2" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:39.182052   61699 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:39.188162   61699 pod_ready.go:93] pod "etcd-default-k8s-diff-port-465341" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:39.188191   61699 pod_ready.go:82] duration metric: took 6.130794ms for pod "etcd-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:39.188201   61699 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:39.196197   61699 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-465341" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:39.196225   61699 pod_ready.go:82] duration metric: took 8.016123ms for pod "kube-apiserver-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:39.196238   61699 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:40.703747   61699 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-465341" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:40.703776   61699 pod_ready.go:82] duration metric: took 1.507528182s for pod "kube-controller-manager-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:40.703791   61699 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nf8mp" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:40.771262   61699 pod_ready.go:93] pod "kube-proxy-nf8mp" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:40.771293   61699 pod_ready.go:82] duration metric: took 67.494606ms for pod "kube-proxy-nf8mp" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:40.771307   61699 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:42.778933   61699 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-465341" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:40.908876   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:43.409650   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:42.944929   61989 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.040984911s)
	I0924 01:04:42.944969   61989 crio.go:469] duration metric: took 3.041152253s to extract the tarball
	I0924 01:04:42.944981   61989 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0924 01:04:42.988315   61989 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 01:04:43.036011   61989 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0924 01:04:43.036045   61989 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0924 01:04:43.036151   61989 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:04:43.036194   61989 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0924 01:04:43.036211   61989 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0924 01:04:43.036281   61989 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 01:04:43.036301   61989 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0924 01:04:43.036344   61989 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0924 01:04:43.036310   61989 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0924 01:04:43.036577   61989 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0924 01:04:43.038440   61989 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0924 01:04:43.038458   61989 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0924 01:04:43.038482   61989 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0924 01:04:43.038502   61989 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0924 01:04:43.038554   61989 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0924 01:04:43.038588   61989 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 01:04:43.038600   61989 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0924 01:04:43.038816   61989 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:04:43.306768   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0924 01:04:43.309660   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0924 01:04:43.312684   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 01:04:43.314551   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0924 01:04:43.317719   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0924 01:04:43.326063   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0924 01:04:43.378736   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0924 01:04:43.405508   61989 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0924 01:04:43.405585   61989 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0924 01:04:43.405648   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:43.452908   61989 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0924 01:04:43.452954   61989 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0924 01:04:43.453006   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:43.471293   61989 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0924 01:04:43.471341   61989 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0924 01:04:43.471347   61989 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0924 01:04:43.471370   61989 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0924 01:04:43.471297   61989 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0924 01:04:43.471406   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:43.471421   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:43.471423   61989 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 01:04:43.471462   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:43.494687   61989 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0924 01:04:43.494735   61989 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0924 01:04:43.494782   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:43.508206   61989 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0924 01:04:43.508253   61989 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0924 01:04:43.508278   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0924 01:04:43.508298   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:43.508363   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0924 01:04:43.508419   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0924 01:04:43.508451   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0924 01:04:43.508487   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 01:04:43.508547   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0924 01:04:43.645995   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0924 01:04:43.646039   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0924 01:04:43.646098   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0924 01:04:43.646152   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0924 01:04:43.646261   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0924 01:04:43.646337   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0924 01:04:43.646413   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 01:04:43.817326   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0924 01:04:43.817416   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0924 01:04:43.817381   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0924 01:04:43.817508   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 01:04:43.817449   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0924 01:04:43.817597   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0924 01:04:43.817686   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0924 01:04:43.972782   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0924 01:04:43.972792   61989 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0924 01:04:43.972869   61989 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0924 01:04:43.972838   61989 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0924 01:04:43.972928   61989 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0924 01:04:43.972944   61989 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0924 01:04:43.973027   61989 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0924 01:04:44.008191   61989 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0924 01:04:44.220628   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:04:44.364297   61989 cache_images.go:92] duration metric: took 1.328227964s to LoadCachedImages
	W0924 01:04:44.364505   61989 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0924 01:04:44.364539   61989 kubeadm.go:934] updating node { 192.168.83.3 8443 v1.20.0 crio true true} ...
	I0924 01:04:44.364681   61989 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-171598 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-171598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 01:04:44.364824   61989 ssh_runner.go:195] Run: crio config
	I0924 01:04:44.423360   61989 cni.go:84] Creating CNI manager for ""
	I0924 01:04:44.423382   61989 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:04:44.423393   61989 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 01:04:44.423412   61989 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.3 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-171598 NodeName:old-k8s-version-171598 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0924 01:04:44.423593   61989 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-171598"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 01:04:44.423671   61989 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0924 01:04:44.434069   61989 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 01:04:44.434143   61989 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 01:04:44.443807   61989 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (428 bytes)
	I0924 01:04:44.463473   61989 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 01:04:44.480449   61989 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2117 bytes)
	I0924 01:04:44.498520   61989 ssh_runner.go:195] Run: grep 192.168.83.3	control-plane.minikube.internal$ /etc/hosts
	I0924 01:04:44.503034   61989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.3	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:04:44.516699   61989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:04:44.643090   61989 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 01:04:44.660194   61989 certs.go:68] Setting up /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598 for IP: 192.168.83.3
	I0924 01:04:44.660216   61989 certs.go:194] generating shared ca certs ...
	I0924 01:04:44.660234   61989 certs.go:226] acquiring lock for ca certs: {Name:mk91de42c259e96c17f08dd58c70c00821bd0735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:04:44.660454   61989 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key
	I0924 01:04:44.660542   61989 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key
	I0924 01:04:44.660559   61989 certs.go:256] generating profile certs ...
	I0924 01:04:44.660682   61989 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/client.key
	I0924 01:04:44.660755   61989 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/apiserver.key.577554d3
	I0924 01:04:44.660816   61989 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/proxy-client.key
	I0924 01:04:44.660976   61989 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem (1338 bytes)
	W0924 01:04:44.661014   61989 certs.go:480] ignoring /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793_empty.pem, impossibly tiny 0 bytes
	I0924 01:04:44.661026   61989 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem (1675 bytes)
	I0924 01:04:44.661071   61989 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem (1082 bytes)
	I0924 01:04:44.661104   61989 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem (1123 bytes)
	I0924 01:04:44.661133   61989 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem (1679 bytes)
	I0924 01:04:44.661211   61989 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem (1708 bytes)
	I0924 01:04:44.662130   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 01:04:44.710279   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 01:04:44.736824   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 01:04:44.773120   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0924 01:04:44.801137   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0924 01:04:44.844946   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 01:04:44.880871   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 01:04:44.908630   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0924 01:04:44.947148   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 01:04:44.971925   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem --> /usr/share/ca-certificates/14793.pem (1338 bytes)
	I0924 01:04:45.000519   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /usr/share/ca-certificates/147932.pem (1708 bytes)
	I0924 01:04:45.034167   61989 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 01:04:45.054932   61989 ssh_runner.go:195] Run: openssl version
	I0924 01:04:45.062733   61989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14793.pem && ln -fs /usr/share/ca-certificates/14793.pem /etc/ssl/certs/14793.pem"
	I0924 01:04:45.076993   61989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14793.pem
	I0924 01:04:45.082104   61989 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 23:55 /usr/share/ca-certificates/14793.pem
	I0924 01:04:45.082175   61989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14793.pem
	I0924 01:04:45.088219   61989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14793.pem /etc/ssl/certs/51391683.0"
	I0924 01:04:45.099211   61989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147932.pem && ln -fs /usr/share/ca-certificates/147932.pem /etc/ssl/certs/147932.pem"
	I0924 01:04:45.111178   61989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147932.pem
	I0924 01:04:45.116551   61989 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 23:55 /usr/share/ca-certificates/147932.pem
	I0924 01:04:45.116624   61989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147932.pem
	I0924 01:04:45.122353   61989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147932.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 01:04:45.133490   61989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 01:04:45.144123   61989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:45.150437   61989 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:45.150498   61989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:45.157127   61989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 01:04:45.168217   61989 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 01:04:45.172865   61989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 01:04:45.179177   61989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 01:04:45.184987   61989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 01:04:45.190927   61989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 01:04:45.197134   61989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 01:04:45.203170   61989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0924 01:04:45.209550   61989 kubeadm.go:392] StartCluster: {Name:old-k8s-version-171598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-171598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 01:04:45.209721   61989 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 01:04:45.209778   61989 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 01:04:45.247564   61989 cri.go:89] found id: ""
	I0924 01:04:45.247635   61989 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 01:04:45.258171   61989 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 01:04:45.258195   61989 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 01:04:45.258269   61989 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 01:04:45.268247   61989 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 01:04:45.269656   61989 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-171598" does not appear in /home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 01:04:45.270486   61989 kubeconfig.go:62] /home/jenkins/minikube-integration/19696-7623/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-171598" cluster setting kubeconfig missing "old-k8s-version-171598" context setting]
	I0924 01:04:45.271918   61989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/kubeconfig: {Name:mk3b29105ccc2e600682c419fcfd45ce5d22b5fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:04:45.277260   61989 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 01:04:45.287239   61989 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.83.3
	I0924 01:04:45.287271   61989 kubeadm.go:1160] stopping kube-system containers ...
	I0924 01:04:45.287281   61989 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0924 01:04:45.287325   61989 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 01:04:45.327991   61989 cri.go:89] found id: ""
	I0924 01:04:45.328071   61989 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 01:04:45.344693   61989 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 01:04:45.354414   61989 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 01:04:45.354439   61989 kubeadm.go:157] found existing configuration files:
	
	I0924 01:04:45.354499   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 01:04:45.363765   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 01:04:45.363838   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 01:04:45.373569   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 01:04:45.382401   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 01:04:45.382464   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 01:04:45.392710   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 01:04:45.402855   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 01:04:45.402919   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 01:04:45.413651   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 01:04:45.423818   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 01:04:45.423873   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 01:04:45.434138   61989 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 01:04:45.444119   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:45.582409   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:44.261681   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:44.262330   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:44.262360   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:44.262274   63121 retry.go:31] will retry after 1.755704993s: waiting for machine to come up
	I0924 01:04:46.019761   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:46.020213   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:46.020242   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:46.020155   63121 retry.go:31] will retry after 2.038509067s: waiting for machine to come up
	I0924 01:04:48.060649   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:48.061170   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:48.061201   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:48.061122   63121 retry.go:31] will retry after 2.834284151s: waiting for machine to come up
	I0924 01:04:45.021172   61699 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-465341" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:45.021200   61699 pod_ready.go:82] duration metric: took 4.249884358s for pod "kube-scheduler-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:45.021213   61699 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:47.028860   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:45.908530   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:48.407714   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:46.245754   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:46.511218   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:46.608877   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:46.722521   61989 api_server.go:52] waiting for apiserver process to appear ...
	I0924 01:04:46.722607   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:47.222945   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:47.723437   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:48.223704   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:48.723517   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:49.223744   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:49.722691   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:50.222927   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:50.723331   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:50.897541   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:50.898047   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:50.898093   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:50.898018   63121 retry.go:31] will retry after 4.166792416s: waiting for machine to come up
	I0924 01:04:49.530215   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:52.027812   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:50.907425   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:52.907568   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:54.908623   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:51.223525   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:51.722715   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:52.223281   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:52.723378   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:53.222798   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:53.722883   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:54.223279   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:54.723155   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:55.222994   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:55.723628   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:55.068642   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.069305   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has current primary IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.069330   61070 main.go:141] libmachine: (no-preload-674057) Found IP for machine: 192.168.50.161
	I0924 01:04:55.069339   61070 main.go:141] libmachine: (no-preload-674057) Reserving static IP address...
	I0924 01:04:55.070035   61070 main.go:141] libmachine: (no-preload-674057) Reserved static IP address: 192.168.50.161
	I0924 01:04:55.070065   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "no-preload-674057", mac: "52:54:00:01:7a:1a", ip: "192.168.50.161"} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.070073   61070 main.go:141] libmachine: (no-preload-674057) Waiting for SSH to be available...
	I0924 01:04:55.070090   61070 main.go:141] libmachine: (no-preload-674057) DBG | skip adding static IP to network mk-no-preload-674057 - found existing host DHCP lease matching {name: "no-preload-674057", mac: "52:54:00:01:7a:1a", ip: "192.168.50.161"}
	I0924 01:04:55.070095   61070 main.go:141] libmachine: (no-preload-674057) DBG | Getting to WaitForSSH function...
	I0924 01:04:55.072715   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.073106   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.073140   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.073351   61070 main.go:141] libmachine: (no-preload-674057) DBG | Using SSH client type: external
	I0924 01:04:55.073379   61070 main.go:141] libmachine: (no-preload-674057) DBG | Using SSH private key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/no-preload-674057/id_rsa (-rw-------)
	I0924 01:04:55.073405   61070 main.go:141] libmachine: (no-preload-674057) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.161 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19696-7623/.minikube/machines/no-preload-674057/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 01:04:55.073444   61070 main.go:141] libmachine: (no-preload-674057) DBG | About to run SSH command:
	I0924 01:04:55.073462   61070 main.go:141] libmachine: (no-preload-674057) DBG | exit 0
	I0924 01:04:55.200585   61070 main.go:141] libmachine: (no-preload-674057) DBG | SSH cmd err, output: <nil>: 
	I0924 01:04:55.200980   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetConfigRaw
	I0924 01:04:55.201650   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetIP
	I0924 01:04:55.204919   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.205340   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.205360   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.205638   61070 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/config.json ...
	I0924 01:04:55.205881   61070 machine.go:93] provisionDockerMachine start ...
	I0924 01:04:55.205903   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:04:55.206124   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:55.208572   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.209012   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.209037   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.209218   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:04:55.209499   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:55.209693   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:55.209832   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:04:55.210010   61070 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:55.210249   61070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0924 01:04:55.210263   61070 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 01:04:55.317027   61070 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 01:04:55.317067   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetMachineName
	I0924 01:04:55.317403   61070 buildroot.go:166] provisioning hostname "no-preload-674057"
	I0924 01:04:55.317441   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetMachineName
	I0924 01:04:55.317700   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:55.320886   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.321301   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.321330   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.321443   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:04:55.321643   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:55.321853   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:55.322010   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:04:55.322169   61070 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:55.322343   61070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0924 01:04:55.322360   61070 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-674057 && echo "no-preload-674057" | sudo tee /etc/hostname
	I0924 01:04:55.439098   61070 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-674057
	
	I0924 01:04:55.439134   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:55.441909   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.442212   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.442256   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.442430   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:04:55.442667   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:55.442890   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:55.443078   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:04:55.443301   61070 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:55.443460   61070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0924 01:04:55.443474   61070 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-674057' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-674057/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-674057' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 01:04:55.558172   61070 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 01:04:55.558204   61070 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19696-7623/.minikube CaCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19696-7623/.minikube}
	I0924 01:04:55.558225   61070 buildroot.go:174] setting up certificates
	I0924 01:04:55.558236   61070 provision.go:84] configureAuth start
	I0924 01:04:55.558248   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetMachineName
	I0924 01:04:55.558574   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetIP
	I0924 01:04:55.561503   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.561891   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.561917   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.562089   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:55.564426   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.564800   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.564825   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.564958   61070 provision.go:143] copyHostCerts
	I0924 01:04:55.565009   61070 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem, removing ...
	I0924 01:04:55.565018   61070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0924 01:04:55.565074   61070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem (1082 bytes)
	I0924 01:04:55.565167   61070 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem, removing ...
	I0924 01:04:55.565175   61070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0924 01:04:55.565194   61070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem (1123 bytes)
	I0924 01:04:55.565253   61070 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem, removing ...
	I0924 01:04:55.565263   61070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0924 01:04:55.565285   61070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem (1679 bytes)
	I0924 01:04:55.565372   61070 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem org=jenkins.no-preload-674057 san=[127.0.0.1 192.168.50.161 localhost minikube no-preload-674057]
	I0924 01:04:55.649690   61070 provision.go:177] copyRemoteCerts
	I0924 01:04:55.649750   61070 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 01:04:55.649774   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:55.652790   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.653249   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.653278   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.653567   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:04:55.653772   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:55.653936   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:04:55.654059   61070 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/no-preload-674057/id_rsa Username:docker}
	I0924 01:04:55.738522   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0924 01:04:55.764045   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0924 01:04:55.788225   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0924 01:04:55.811207   61070 provision.go:87] duration metric: took 252.958643ms to configureAuth
	I0924 01:04:55.811233   61070 buildroot.go:189] setting minikube options for container-runtime
	I0924 01:04:55.811415   61070 config.go:182] Loaded profile config "no-preload-674057": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 01:04:55.811503   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:55.814921   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.815366   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.815400   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.815597   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:04:55.815826   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:55.816039   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:55.816212   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:04:55.816496   61070 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:55.816740   61070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0924 01:04:55.816756   61070 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 01:04:56.045600   61070 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 01:04:56.045632   61070 machine.go:96] duration metric: took 839.736907ms to provisionDockerMachine
	I0924 01:04:56.045646   61070 start.go:293] postStartSetup for "no-preload-674057" (driver="kvm2")
	I0924 01:04:56.045660   61070 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 01:04:56.045679   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:04:56.045997   61070 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 01:04:56.046027   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:56.049081   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.049522   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:56.049559   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.049743   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:04:56.049960   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:56.050105   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:04:56.050245   61070 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/no-preload-674057/id_rsa Username:docker}
	I0924 01:04:56.136652   61070 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 01:04:56.140894   61070 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 01:04:56.140920   61070 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/addons for local assets ...
	I0924 01:04:56.140987   61070 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/files for local assets ...
	I0924 01:04:56.141071   61070 filesync.go:149] local asset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> 147932.pem in /etc/ssl/certs
	I0924 01:04:56.141161   61070 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 01:04:56.151170   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /etc/ssl/certs/147932.pem (1708 bytes)
	I0924 01:04:56.179268   61070 start.go:296] duration metric: took 133.605527ms for postStartSetup
	I0924 01:04:56.179318   61070 fix.go:56] duration metric: took 19.461975001s for fixHost
	I0924 01:04:56.179344   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:56.182567   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.182902   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:56.182927   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.183091   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:04:56.183320   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:56.183562   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:56.183720   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:04:56.183865   61070 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:56.184036   61070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0924 01:04:56.184045   61070 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 01:04:56.289079   61070 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727139896.261476318
	
	I0924 01:04:56.289113   61070 fix.go:216] guest clock: 1727139896.261476318
	I0924 01:04:56.289121   61070 fix.go:229] Guest: 2024-09-24 01:04:56.261476318 +0000 UTC Remote: 2024-09-24 01:04:56.17932382 +0000 UTC m=+357.500342999 (delta=82.152498ms)
	I0924 01:04:56.289141   61070 fix.go:200] guest clock delta is within tolerance: 82.152498ms
	I0924 01:04:56.289156   61070 start.go:83] releasing machines lock for "no-preload-674057", held for 19.57184993s
	I0924 01:04:56.289175   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:04:56.289441   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetIP
	I0924 01:04:56.292799   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.293122   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:56.293148   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.293327   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:04:56.293832   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:04:56.293990   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:04:56.294073   61070 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 01:04:56.294108   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:56.294271   61070 ssh_runner.go:195] Run: cat /version.json
	I0924 01:04:56.294299   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:56.296962   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.297113   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.297300   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:56.297325   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.297473   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:56.297504   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.297526   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:04:56.297665   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:04:56.297737   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:56.297858   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:56.297926   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:04:56.297968   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:04:56.298044   61070 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/no-preload-674057/id_rsa Username:docker}
	I0924 01:04:56.298139   61070 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/no-preload-674057/id_rsa Username:docker}
	I0924 01:04:56.373014   61070 ssh_runner.go:195] Run: systemctl --version
	I0924 01:04:56.412487   61070 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 01:04:56.558755   61070 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 01:04:56.565187   61070 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 01:04:56.565245   61070 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 01:04:56.582073   61070 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 01:04:56.582102   61070 start.go:495] detecting cgroup driver to use...
	I0924 01:04:56.582167   61070 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 01:04:56.597553   61070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 01:04:56.612515   61070 docker.go:217] disabling cri-docker service (if available) ...
	I0924 01:04:56.612564   61070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 01:04:56.627596   61070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 01:04:56.641619   61070 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 01:04:56.762636   61070 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 01:04:56.917742   61070 docker.go:233] disabling docker service ...
	I0924 01:04:56.917821   61070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 01:04:56.934585   61070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 01:04:56.949194   61070 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 01:04:57.085465   61070 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 01:04:57.230529   61070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 01:04:57.245369   61070 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 01:04:57.265137   61070 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 01:04:57.265196   61070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:57.276878   61070 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 01:04:57.276936   61070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:57.288934   61070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:57.300690   61070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:57.312392   61070 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 01:04:57.324491   61070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:57.335619   61070 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:57.352868   61070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:57.363280   61070 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 01:04:57.372811   61070 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 01:04:57.372866   61070 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 01:04:57.385797   61070 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 01:04:57.395936   61070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:04:57.532086   61070 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 01:04:57.628275   61070 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 01:04:57.628370   61070 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 01:04:57.633679   61070 start.go:563] Will wait 60s for crictl version
	I0924 01:04:57.633761   61070 ssh_runner.go:195] Run: which crictl
	I0924 01:04:57.637574   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 01:04:57.679667   61070 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 01:04:57.679756   61070 ssh_runner.go:195] Run: crio --version
	I0924 01:04:57.707710   61070 ssh_runner.go:195] Run: crio --version
	I0924 01:04:57.738651   61070 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 01:04:57.740120   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetIP
	I0924 01:04:57.743379   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:57.743783   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:57.743814   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:57.744048   61070 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0924 01:04:57.748516   61070 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:04:57.762723   61070 kubeadm.go:883] updating cluster {Name:no-preload-674057 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-674057 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.161 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 01:04:57.762864   61070 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 01:04:57.762906   61070 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 01:04:57.798232   61070 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0924 01:04:57.798260   61070 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0924 01:04:57.798334   61070 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:04:57.798357   61070 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0924 01:04:57.798377   61070 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0924 01:04:57.798340   61070 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0924 01:04:57.798397   61070 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 01:04:57.798381   61070 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0924 01:04:57.798491   61070 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0924 01:04:57.798491   61070 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0924 01:04:57.799811   61070 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:04:57.799819   61070 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0924 01:04:57.799826   61070 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0924 01:04:57.799811   61070 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 01:04:57.799840   61070 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0924 01:04:57.799893   61070 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0924 01:04:57.799902   61070 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0924 01:04:57.799903   61070 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0924 01:04:58.027261   61070 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0924 01:04:58.028437   61070 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0924 01:04:58.051940   61070 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0924 01:04:58.082860   61070 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0924 01:04:58.088073   61070 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0924 01:04:58.088121   61070 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0924 01:04:58.088190   61070 ssh_runner.go:195] Run: which crictl
	I0924 01:04:58.095081   61070 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 01:04:58.098388   61070 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0924 01:04:58.152389   61070 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0924 01:04:58.190893   61070 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0924 01:04:58.190920   61070 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0924 01:04:58.190934   61070 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0924 01:04:58.190944   61070 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0924 01:04:58.190984   61070 ssh_runner.go:195] Run: which crictl
	I0924 01:04:58.191029   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0924 01:04:58.190988   61070 ssh_runner.go:195] Run: which crictl
	I0924 01:04:58.191080   61070 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0924 01:04:58.191109   61070 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 01:04:58.191134   61070 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0924 01:04:58.191144   61070 ssh_runner.go:195] Run: which crictl
	I0924 01:04:58.191157   61070 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0924 01:04:58.191185   61070 ssh_runner.go:195] Run: which crictl
	I0924 01:04:58.219642   61070 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0924 01:04:58.219689   61070 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0924 01:04:58.219703   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 01:04:58.219729   61070 ssh_runner.go:195] Run: which crictl
	I0924 01:04:58.219741   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0924 01:04:58.219745   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0924 01:04:58.250341   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0924 01:04:58.250394   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0924 01:04:58.320188   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0924 01:04:58.320222   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 01:04:58.320308   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0924 01:04:58.320394   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0924 01:04:58.383126   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0924 01:04:58.383327   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0924 01:04:58.453833   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 01:04:58.453918   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0924 01:04:58.453878   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0924 01:04:58.453923   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0924 01:04:58.499994   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0924 01:04:58.500027   61070 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0924 01:04:58.500119   61070 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0924 01:04:58.583372   61070 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0924 01:04:58.583491   61070 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0924 01:04:58.586213   61070 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0924 01:04:58.586281   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0924 01:04:58.586325   61070 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0924 01:04:58.586328   61070 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0924 01:04:58.586405   61070 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0924 01:04:58.616022   61070 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0924 01:04:58.616061   61070 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0924 01:04:58.616082   61070 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0924 01:04:58.616118   61070 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0924 01:04:58.616131   61070 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0924 01:04:58.616180   61070 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0924 01:04:58.616128   61070 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0924 01:04:58.647507   61070 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0924 01:04:58.647576   61070 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0924 01:04:58.647620   61070 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0924 01:04:58.647659   61070 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0924 01:04:54.527399   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:57.028355   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:57.407381   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:59.908596   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:56.222908   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:56.722701   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:57.222762   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:57.722814   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:58.222671   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:58.722746   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:59.222961   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:59.723335   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:00.223393   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:00.722739   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:59.003431   61070 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:05:00.815541   61070 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.199297236s)
	I0924 01:05:00.815566   61070 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (2.167859705s)
	I0924 01:05:00.815579   61070 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0924 01:05:00.815599   61070 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0924 01:05:00.815619   61070 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0924 01:05:00.815625   61070 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.812143064s)
	I0924 01:05:00.815674   61070 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0924 01:05:00.815687   61070 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0924 01:05:00.815710   61070 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:05:00.815750   61070 ssh_runner.go:195] Run: which crictl
	I0924 01:05:02.782328   61070 ssh_runner.go:235] Completed: which crictl: (1.966554191s)
	I0924 01:05:02.782392   61070 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.966688239s)
	I0924 01:05:02.782421   61070 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0924 01:05:02.782445   61070 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0924 01:05:02.782497   61070 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0924 01:05:02.782404   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:04:59.529167   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:01.531324   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:04.028305   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:02.407051   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:04.475255   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:01.222765   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:01.722729   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:02.223407   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:02.722799   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:03.223381   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:03.723427   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:04.223157   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:04.723069   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:05.223400   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:05.723739   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:04.773493   61070 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.990910382s)
	I0924 01:05:04.773540   61070 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.99101415s)
	I0924 01:05:04.773560   61070 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0924 01:05:04.773577   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:05:04.773584   61070 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0924 01:05:04.773615   61070 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0924 01:05:08.061466   61070 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.287832238s)
	I0924 01:05:08.061499   61070 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0924 01:05:08.061510   61070 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.287911454s)
	I0924 01:05:08.061595   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:05:08.061520   61070 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0924 01:05:08.061690   61070 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0924 01:05:06.029255   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:08.527617   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:06.907268   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:08.907464   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:06.223395   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:06.723345   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:07.222965   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:07.722795   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:08.222933   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:08.723687   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:09.223526   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:09.723684   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:10.223275   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:10.723534   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:10.041517   61070 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.979809714s)
	I0924 01:05:10.041549   61070 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0924 01:05:10.041577   61070 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.979956931s)
	I0924 01:05:10.041625   61070 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0924 01:05:10.041582   61070 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0924 01:05:10.041714   61070 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0924 01:05:10.041727   61070 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0924 01:05:12.005649   61070 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.963906504s)
	I0924 01:05:12.005689   61070 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0924 01:05:12.005696   61070 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.963951454s)
	I0924 01:05:12.005720   61070 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0924 01:05:12.005727   61070 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0924 01:05:12.005768   61070 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0924 01:05:12.960728   61070 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0924 01:05:12.960771   61070 cache_images.go:123] Successfully loaded all cached images
	I0924 01:05:12.960778   61070 cache_images.go:92] duration metric: took 15.162496206s to LoadCachedImages
	I0924 01:05:12.960791   61070 kubeadm.go:934] updating node { 192.168.50.161 8443 v1.31.1 crio true true} ...
	I0924 01:05:12.960931   61070 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-674057 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.161
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-674057 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 01:05:12.961013   61070 ssh_runner.go:195] Run: crio config
	I0924 01:05:13.006511   61070 cni.go:84] Creating CNI manager for ""
	I0924 01:05:13.006535   61070 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:05:13.006551   61070 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 01:05:13.006579   61070 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.161 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-674057 NodeName:no-preload-674057 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.161"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.161 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 01:05:13.006729   61070 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.161
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-674057"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.161
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.161"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 01:05:13.006799   61070 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 01:05:13.017598   61070 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 01:05:13.017672   61070 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 01:05:13.027414   61070 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0924 01:05:13.044688   61070 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 01:05:13.061646   61070 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
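	The kubeadm, kubelet, and kube-proxy configuration dumped above is rendered from the node's settings and copied to /var/tmp/minikube/kubeadm.yaml.new in the scp step on the previous line. The Go sketch below only illustrates how such a manifest could be templated from node parameters; the template fields, struct, and values reused from this log are assumptions for demonstration, not minikube's actual implementation.

    // Illustrative sketch only: renders an InitConfiguration fragment from
    // node parameters taken from the log above. Not minikube's real code.
    package main

    import (
    	"os"
    	"text/template"
    )

    const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: unix:///var/run/crio/crio.sock
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
    `

    // nodeParams is a hypothetical container for the per-node values.
    type nodeParams struct {
    	NodeIP        string
    	NodeName      string
    	APIServerPort int
    }

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(initCfg))
    	// Values for the no-preload-674057 node, as seen in the log above.
    	p := nodeParams{NodeIP: "192.168.50.161", NodeName: "no-preload-674057", APIServerPort: 8443}
    	if err := t.Execute(os.Stdout, p); err != nil {
    		panic(err)
    	}
    }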
	I0924 01:05:13.079552   61070 ssh_runner.go:195] Run: grep 192.168.50.161	control-plane.minikube.internal$ /etc/hosts
	I0924 01:05:13.083172   61070 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.161	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:05:13.095232   61070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:05:13.207184   61070 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 01:05:13.222851   61070 certs.go:68] Setting up /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057 for IP: 192.168.50.161
	I0924 01:05:13.222880   61070 certs.go:194] generating shared ca certs ...
	I0924 01:05:13.222901   61070 certs.go:226] acquiring lock for ca certs: {Name:mk91de42c259e96c17f08dd58c70c00821bd0735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:05:13.223084   61070 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key
	I0924 01:05:13.223184   61070 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key
	I0924 01:05:13.223195   61070 certs.go:256] generating profile certs ...
	I0924 01:05:13.223314   61070 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/client.key
	I0924 01:05:13.223394   61070 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/apiserver.key.8fa8fb95
	I0924 01:05:13.223445   61070 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/proxy-client.key
	I0924 01:05:13.223614   61070 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem (1338 bytes)
	W0924 01:05:13.223654   61070 certs.go:480] ignoring /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793_empty.pem, impossibly tiny 0 bytes
	I0924 01:05:13.223710   61070 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem (1675 bytes)
	I0924 01:05:13.223756   61070 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem (1082 bytes)
	I0924 01:05:13.223785   61070 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem (1123 bytes)
	I0924 01:05:13.223818   61070 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem (1679 bytes)
	I0924 01:05:13.223862   61070 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem (1708 bytes)
	I0924 01:05:13.224549   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 01:05:13.273224   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 01:05:13.311069   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 01:05:13.342314   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0924 01:05:13.369345   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0924 01:05:13.395466   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 01:05:13.424307   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 01:05:13.448531   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0924 01:05:13.472491   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /usr/share/ca-certificates/147932.pem (1708 bytes)
	I0924 01:05:13.496060   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 01:05:13.521182   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem --> /usr/share/ca-certificates/14793.pem (1338 bytes)
	I0924 01:05:13.548194   61070 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 01:05:13.566423   61070 ssh_runner.go:195] Run: openssl version
	I0924 01:05:13.572605   61070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 01:05:13.583991   61070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:05:13.588705   61070 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:05:13.588771   61070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:05:13.594828   61070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 01:05:13.606168   61070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14793.pem && ln -fs /usr/share/ca-certificates/14793.pem /etc/ssl/certs/14793.pem"
	I0924 01:05:13.617723   61070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14793.pem
	I0924 01:05:13.622697   61070 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 23:55 /usr/share/ca-certificates/14793.pem
	I0924 01:05:13.622762   61070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14793.pem
	I0924 01:05:13.628486   61070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14793.pem /etc/ssl/certs/51391683.0"
	I0924 01:05:13.639176   61070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147932.pem && ln -fs /usr/share/ca-certificates/147932.pem /etc/ssl/certs/147932.pem"
	I0924 01:05:13.650161   61070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147932.pem
	I0924 01:05:13.654546   61070 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 23:55 /usr/share/ca-certificates/147932.pem
	I0924 01:05:13.654625   61070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147932.pem
	I0924 01:05:13.660382   61070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147932.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 01:05:13.671487   61070 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 01:05:13.676226   61070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 01:05:13.682591   61070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 01:05:13.688492   61070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 01:05:13.694726   61070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 01:05:13.700432   61070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 01:05:13.706080   61070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0924 01:05:13.712226   61070 kubeadm.go:392] StartCluster: {Name:no-preload-674057 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-674057 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.161 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 01:05:13.712323   61070 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 01:05:13.712421   61070 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 01:05:11.028779   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:13.527996   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:10.908227   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:13.408515   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:11.223272   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:11.723442   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:12.223301   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:12.723151   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:13.223174   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:13.722780   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:14.222777   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:14.722987   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:15.223654   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:15.723449   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:13.757518   61070 cri.go:89] found id: ""
	I0924 01:05:13.757597   61070 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 01:05:13.768318   61070 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 01:05:13.768367   61070 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 01:05:13.768416   61070 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 01:05:13.778918   61070 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 01:05:13.780385   61070 kubeconfig.go:125] found "no-preload-674057" server: "https://192.168.50.161:8443"
	I0924 01:05:13.783392   61070 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 01:05:13.794016   61070 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.161
	I0924 01:05:13.794050   61070 kubeadm.go:1160] stopping kube-system containers ...
	I0924 01:05:13.794085   61070 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0924 01:05:13.794150   61070 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 01:05:13.833511   61070 cri.go:89] found id: ""
	I0924 01:05:13.833596   61070 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 01:05:13.851608   61070 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 01:05:13.861469   61070 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 01:05:13.861510   61070 kubeadm.go:157] found existing configuration files:
	
	I0924 01:05:13.861552   61070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 01:05:13.870700   61070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 01:05:13.870770   61070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 01:05:13.880613   61070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 01:05:13.890336   61070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 01:05:13.890404   61070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 01:05:13.900172   61070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 01:05:13.910408   61070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 01:05:13.910475   61070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 01:05:13.919980   61070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 01:05:13.929398   61070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 01:05:13.929495   61070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 01:05:13.938894   61070 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 01:05:13.948749   61070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:05:14.056463   61070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:05:15.345268   61070 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.288763261s)
	I0924 01:05:15.345317   61070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:05:15.555986   61070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:05:15.626986   61070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:05:15.697665   61070 api_server.go:52] waiting for apiserver process to appear ...
	I0924 01:05:15.697761   61070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:16.198410   61070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:16.698860   61070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:16.715727   61070 api_server.go:72] duration metric: took 1.018058771s to wait for apiserver process to appear ...
	I0924 01:05:16.715756   61070 api_server.go:88] waiting for apiserver healthz status ...
	I0924 01:05:16.715779   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:15.528157   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:17.528680   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:15.906930   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:17.907223   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:16.223623   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:16.723625   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:17.223541   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:17.722702   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:18.222919   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:18.722982   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:19.222978   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:19.723547   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:20.223112   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:20.723562   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:21.716809   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 01:05:21.716852   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:19.528769   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:22.028695   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:20.406693   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:22.407036   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:24.906735   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:21.223058   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:21.722680   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:22.223693   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:22.722716   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:23.223387   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:23.722910   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:24.223608   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:24.723144   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:25.223442   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:25.723025   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:26.717768   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 01:05:26.717811   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:24.527568   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:26.527806   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:29.028455   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:27.406994   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:29.906590   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:26.222782   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:26.723271   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:27.223163   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:27.723283   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:28.222782   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:28.723174   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:29.222803   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:29.723029   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:30.223679   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:30.723058   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:31.718277   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 01:05:31.718317   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:31.028690   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:33.527675   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:31.906723   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:34.406306   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:31.223465   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:31.723438   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:32.223673   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:32.722674   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:33.223289   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:33.723651   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:34.223014   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:34.723518   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:35.222860   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:35.723642   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:36.718676   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 01:05:36.718716   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:37.146737   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": read tcp 192.168.50.1:59880->192.168.50.161:8443: read: connection reset by peer
	I0924 01:05:37.215865   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:37.216506   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": dial tcp 192.168.50.161:8443: connect: connection refused
	I0924 01:05:37.716052   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:37.716731   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": dial tcp 192.168.50.161:8443: connect: connection refused
	I0924 01:05:38.216296   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
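	The retry loop above polls https://192.168.50.161:8443/healthz roughly every 500ms until the restarted apiserver answers or the wait gives up. Below is a minimal sketch of such a healthz poller; the skip-verify TLS client, retry cadence, and deadline are assumptions for illustration, not minikube's real api_server.go logic (which trusts the cluster CA).

    // Sketch of a healthz wait loop in the spirit of the log above.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func waitForHealthz(url string, deadline time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Skipping certificate verification keeps the sketch self-contained;
    		// a real client would verify against the cluster CA instead.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	stop := time.Now().Add(deadline)
    	for time.Now().Before(stop) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // apiserver reported healthy
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // roughly the retry cadence seen in the log
    	}
    	return fmt.Errorf("apiserver at %s not healthy within %s", url, deadline)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.50.161:8443/healthz", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }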
	I0924 01:05:36.028537   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:38.032544   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:36.406928   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:38.407201   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:36.222680   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:36.723015   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:37.222736   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:37.723185   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:38.223070   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:38.723237   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:39.223640   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:39.723622   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:40.222705   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:40.722909   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:43.217518   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 01:05:43.217557   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:40.527577   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:43.027715   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:40.906522   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:42.906906   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:44.907623   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:41.223105   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:41.723166   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:42.223286   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:42.723048   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:43.223278   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:43.723301   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:44.222712   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:44.723191   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:45.223720   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:45.723044   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:48.217915   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 01:05:48.217982   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:45.028780   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:47.028883   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:47.406680   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:49.907776   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:46.223270   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:46.722902   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:05:46.722980   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:05:46.781519   61989 cri.go:89] found id: ""
	I0924 01:05:46.781551   61989 logs.go:276] 0 containers: []
	W0924 01:05:46.781565   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:05:46.781574   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:05:46.781630   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:05:46.815990   61989 cri.go:89] found id: ""
	I0924 01:05:46.816021   61989 logs.go:276] 0 containers: []
	W0924 01:05:46.816030   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:05:46.816035   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:05:46.816082   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:05:46.848951   61989 cri.go:89] found id: ""
	I0924 01:05:46.848980   61989 logs.go:276] 0 containers: []
	W0924 01:05:46.848989   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:05:46.848995   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:05:46.849062   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:05:46.880731   61989 cri.go:89] found id: ""
	I0924 01:05:46.880756   61989 logs.go:276] 0 containers: []
	W0924 01:05:46.880764   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:05:46.880770   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:05:46.880832   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:05:46.915975   61989 cri.go:89] found id: ""
	I0924 01:05:46.916004   61989 logs.go:276] 0 containers: []
	W0924 01:05:46.916014   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:05:46.916036   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:05:46.916105   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:05:46.954124   61989 cri.go:89] found id: ""
	I0924 01:05:46.954154   61989 logs.go:276] 0 containers: []
	W0924 01:05:46.954162   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:05:46.954168   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:05:46.954233   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:05:46.990454   61989 cri.go:89] found id: ""
	I0924 01:05:46.990489   61989 logs.go:276] 0 containers: []
	W0924 01:05:46.990498   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:05:46.990504   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:05:46.990573   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:05:47.024099   61989 cri.go:89] found id: ""
	I0924 01:05:47.024137   61989 logs.go:276] 0 containers: []
	W0924 01:05:47.024150   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:05:47.024161   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:05:47.024176   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:05:47.153050   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:05:47.153076   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:05:47.153109   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:05:47.223472   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:05:47.223511   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:05:47.267699   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:05:47.267729   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:05:47.314741   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:05:47.314773   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:05:49.828972   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:49.842301   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:05:49.842378   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:05:49.874632   61989 cri.go:89] found id: ""
	I0924 01:05:49.874659   61989 logs.go:276] 0 containers: []
	W0924 01:05:49.874669   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:05:49.874676   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:05:49.874734   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:05:49.912500   61989 cri.go:89] found id: ""
	I0924 01:05:49.912524   61989 logs.go:276] 0 containers: []
	W0924 01:05:49.912532   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:05:49.912543   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:05:49.912592   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:05:49.947297   61989 cri.go:89] found id: ""
	I0924 01:05:49.947320   61989 logs.go:276] 0 containers: []
	W0924 01:05:49.947328   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:05:49.947334   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:05:49.947395   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:05:49.983863   61989 cri.go:89] found id: ""
	I0924 01:05:49.983892   61989 logs.go:276] 0 containers: []
	W0924 01:05:49.983905   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:05:49.983915   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:05:49.983977   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:05:50.022997   61989 cri.go:89] found id: ""
	I0924 01:05:50.023031   61989 logs.go:276] 0 containers: []
	W0924 01:05:50.023044   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:05:50.023053   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:05:50.023109   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:05:50.057829   61989 cri.go:89] found id: ""
	I0924 01:05:50.057863   61989 logs.go:276] 0 containers: []
	W0924 01:05:50.057875   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:05:50.057882   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:05:50.057929   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:05:50.114599   61989 cri.go:89] found id: ""
	I0924 01:05:50.114620   61989 logs.go:276] 0 containers: []
	W0924 01:05:50.114628   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:05:50.114633   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:05:50.114677   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:05:50.147294   61989 cri.go:89] found id: ""
	I0924 01:05:50.147326   61989 logs.go:276] 0 containers: []
	W0924 01:05:50.147334   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:05:50.147345   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:05:50.147378   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:05:50.198362   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:05:50.198402   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:05:50.212381   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:05:50.212415   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:05:50.286216   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:05:50.286261   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:05:50.286279   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:05:50.366794   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:05:50.366827   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:05:53.218617   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 01:05:53.218653   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:49.527980   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:52.027425   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:54.027780   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:51.908078   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:54.406891   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:52.908167   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:52.922279   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:05:52.922353   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:05:52.956677   61989 cri.go:89] found id: ""
	I0924 01:05:52.956708   61989 logs.go:276] 0 containers: []
	W0924 01:05:52.956720   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:05:52.956727   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:05:52.956778   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:05:52.990933   61989 cri.go:89] found id: ""
	I0924 01:05:52.990956   61989 logs.go:276] 0 containers: []
	W0924 01:05:52.990964   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:05:52.990970   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:05:52.991019   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:05:53.025729   61989 cri.go:89] found id: ""
	I0924 01:05:53.025758   61989 logs.go:276] 0 containers: []
	W0924 01:05:53.025768   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:05:53.025778   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:05:53.025838   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:05:53.060238   61989 cri.go:89] found id: ""
	I0924 01:05:53.060269   61989 logs.go:276] 0 containers: []
	W0924 01:05:53.060279   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:05:53.060287   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:05:53.060366   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:05:53.094166   61989 cri.go:89] found id: ""
	I0924 01:05:53.094200   61989 logs.go:276] 0 containers: []
	W0924 01:05:53.094212   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:05:53.094220   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:05:53.094289   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:05:53.129857   61989 cri.go:89] found id: ""
	I0924 01:05:53.129884   61989 logs.go:276] 0 containers: []
	W0924 01:05:53.129892   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:05:53.129898   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:05:53.129955   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:05:53.165857   61989 cri.go:89] found id: ""
	I0924 01:05:53.165890   61989 logs.go:276] 0 containers: []
	W0924 01:05:53.165898   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:05:53.165909   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:05:53.165970   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:05:53.203884   61989 cri.go:89] found id: ""
	I0924 01:05:53.203909   61989 logs.go:276] 0 containers: []
	W0924 01:05:53.203917   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:05:53.203926   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:05:53.203937   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:05:53.258001   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:05:53.258035   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:05:53.271584   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:05:53.271620   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:05:53.341791   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:05:53.341811   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:05:53.341824   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:05:53.424126   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:05:53.424170   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
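	Note: the cycles above from PID 61989 belong to the profile driven by the v1.20.0 kubectl binary (most likely the old-k8s-version StartStop profile). Every crictl query comes back empty and "describe nodes" is refused on localhost:8443, so the harness keeps falling back to collecting the kubelet journal, dmesg, the CRI-O journal and container status. A minimal sketch of the same checks run by hand over minikube ssh, assuming a hypothetical profile name old-k8s-version-000000 (the real profile name is not shown in this excerpt):
	
	  # list any kube-apiserver containers CRI-O knows about; empty output matches the "0 containers" lines above
	  minikube -p old-k8s-version-000000 ssh -- sudo crictl ps -a --name kube-apiserver
	  # check whether anything is listening on the apiserver port at all
	  minikube -p old-k8s-version-000000 ssh -- sudo ss -ltnp | grep 8443
	  # same kubelet journal slice the harness gathers
	  minikube -p old-k8s-version-000000 ssh -- sudo journalctl -u kubelet -n 400 --no-pager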
	I0924 01:05:55.962067   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:55.977964   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:05:55.978042   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:05:56.277329   61070 api_server.go:279] https://192.168.50.161:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 01:05:56.277366   61070 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 01:05:56.277385   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:56.302576   61070 api_server.go:279] https://192.168.50.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:05:56.302628   61070 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
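	The [+]/[-] lines are the apiserver's verbose /healthz response. While the restarted apiserver is still running its post-start hooks (RBAC bootstrap roles, default priority classes, APIService registration and the other [-] entries), the probe returns 500, and the earlier 403 for system:anonymous is consistent with those RBAC bootstrap roles not having been recreated yet; a few probes later the hooks complete and the endpoint flips to 200. A quick way to look at the same verbose output by hand, assuming working kubeconfig credentials for this cluster:
	
	  # same verbose health endpoints the harness is polling
	  kubectl get --raw '/healthz?verbose'
	  kubectl get --raw '/readyz?verbose'   # on newer apiservers, separates startup/readiness checks from liveness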
	I0924 01:05:56.715873   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:56.722458   61070 api_server.go:279] https://192.168.50.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:05:56.722487   61070 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:05:57.216714   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:57.224426   61070 api_server.go:279] https://192.168.50.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:05:57.224474   61070 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:05:57.715976   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:57.725067   61070 api_server.go:279] https://192.168.50.161:8443/healthz returned 200:
	ok
	I0924 01:05:57.734749   61070 api_server.go:141] control plane version: v1.31.1
	I0924 01:05:57.734782   61070 api_server.go:131] duration metric: took 41.019017744s to wait for apiserver health ...
	I0924 01:05:57.734793   61070 cni.go:84] Creating CNI manager for ""
	I0924 01:05:57.734801   61070 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:05:57.736798   61070 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 01:05:57.738285   61070 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 01:05:57.750654   61070 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
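	For the kvm2 + crio combination minikube selects the bridge CNI and ships a 496-byte /etc/cni/net.d/1-k8s.conflist; the file body itself is not included in this log. A sketch of a bridge conflist of the same general shape, written over ssh, with every field value an illustrative assumption rather than the bytes this run installed:
	
	  # illustrative bridge CNI config (values assumed, not copied from this run)
	  sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	  {
	    "cniVersion": "0.3.1",
	    "name": "bridge",
	    "plugins": [
	      {
	        "type": "bridge",
	        "bridge": "bridge",
	        "addIf": "true",
	        "isDefaultGateway": true,
	        "ipMasq": true,
	        "hairpinMode": true,
	        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	      },
	      { "type": "portmap", "capabilities": { "portMappings": true } }
	    ]
	  }
	  EOF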
	I0924 01:05:57.778587   61070 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 01:05:57.804858   61070 system_pods.go:59] 8 kube-system pods found
	I0924 01:05:57.804907   61070 system_pods.go:61] "coredns-7c65d6cfc9-kshwz" [4393c6ec-abd9-42ce-af67-9e8b768bd49b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0924 01:05:57.804917   61070 system_pods.go:61] "etcd-no-preload-674057" [65cf3acb-8ffa-4f83-8ab9-86ddefc5d829] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0924 01:05:57.804932   61070 system_pods.go:61] "kube-apiserver-no-preload-674057" [7d26a065-faa1-4ba2-96b7-6c9b1ccb5386] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0924 01:05:57.804940   61070 system_pods.go:61] "kube-controller-manager-no-preload-674057" [7c5c6602-1749-4f34-bb63-08161baac6db] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0924 01:05:57.804949   61070 system_pods.go:61] "kube-proxy-fgmwc" [a81419dc-54f5-4bdd-ac2d-f3f7c85b8f50] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0924 01:05:57.804955   61070 system_pods.go:61] "kube-scheduler-no-preload-674057" [d02c8d9a-1897-4506-8029-9608f11520de] Running
	I0924 01:05:57.804965   61070 system_pods.go:61] "metrics-server-6867b74b74-7gbnr" [6ffa0eb7-21d8-4741-9eae-ce7bb9604dec] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:05:57.804975   61070 system_pods.go:61] "storage-provisioner" [a7f99914-8945-4614-afef-d553ea932edf] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0924 01:05:57.804984   61070 system_pods.go:74] duration metric: took 26.369156ms to wait for pod list to return data ...
	I0924 01:05:57.804996   61070 node_conditions.go:102] verifying NodePressure condition ...
	I0924 01:05:57.809068   61070 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 01:05:57.809103   61070 node_conditions.go:123] node cpu capacity is 2
	I0924 01:05:57.809119   61070 node_conditions.go:105] duration metric: took 4.115654ms to run NodePressure ...
	I0924 01:05:57.809137   61070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:05:58.173276   61070 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0924 01:05:58.178398   61070 kubeadm.go:739] kubelet initialised
	I0924 01:05:58.178422   61070 kubeadm.go:740] duration metric: took 5.118555ms waiting for restarted kubelet to initialise ...
	I0924 01:05:58.178429   61070 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:05:58.183646   61070 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-kshwz" in "kube-system" namespace to be "Ready" ...
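	From here the harness polls each system-critical pod for the Ready condition with a 4m0s cap per pod. A rough hand-run equivalent with kubectl, assuming the same kubeconfig and context the test uses:
	
	  # approximate equivalent of the per-pod readiness polling above
	  kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m
	  kubectl -n kube-system wait --for=condition=Ready pod -l component=kube-apiserver --timeout=4m
	  kubectl -n kube-system get pods -o wide   # spot-check anything still reporting NotReady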
	I0924 01:05:56.029030   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:58.029256   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:56.407889   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:58.907744   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:56.014681   61989 cri.go:89] found id: ""
	I0924 01:05:56.014716   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.014728   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:05:56.014736   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:05:56.014799   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:05:56.062547   61989 cri.go:89] found id: ""
	I0924 01:05:56.062576   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.062587   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:05:56.062606   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:05:56.062665   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:05:56.100938   61989 cri.go:89] found id: ""
	I0924 01:05:56.100960   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.100969   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:05:56.100974   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:05:56.101039   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:05:56.137694   61989 cri.go:89] found id: ""
	I0924 01:05:56.137722   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.137737   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:05:56.137744   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:05:56.137803   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:05:56.174876   61989 cri.go:89] found id: ""
	I0924 01:05:56.174911   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.174923   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:05:56.174931   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:05:56.174990   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:05:56.208870   61989 cri.go:89] found id: ""
	I0924 01:05:56.208895   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.208905   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:05:56.208913   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:05:56.208971   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:05:56.242476   61989 cri.go:89] found id: ""
	I0924 01:05:56.242508   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.242520   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:05:56.242528   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:05:56.242590   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:05:56.276185   61989 cri.go:89] found id: ""
	I0924 01:05:56.276214   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.276255   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:05:56.276267   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:05:56.276284   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:05:56.332755   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:05:56.332792   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:05:56.346279   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:05:56.346312   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:05:56.419725   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:05:56.419751   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:05:56.419766   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:05:56.500173   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:05:56.500208   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:05:59.083761   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:59.097184   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:05:59.097247   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:05:59.131734   61989 cri.go:89] found id: ""
	I0924 01:05:59.131764   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.131775   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:05:59.131782   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:05:59.131842   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:05:59.169402   61989 cri.go:89] found id: ""
	I0924 01:05:59.169429   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.169439   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:05:59.169446   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:05:59.169521   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:05:59.208235   61989 cri.go:89] found id: ""
	I0924 01:05:59.208260   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.208290   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:05:59.208298   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:05:59.208372   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:05:59.242314   61989 cri.go:89] found id: ""
	I0924 01:05:59.242345   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.242358   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:05:59.242367   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:05:59.242433   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:05:59.281300   61989 cri.go:89] found id: ""
	I0924 01:05:59.281327   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.281337   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:05:59.281344   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:05:59.281407   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:05:59.315336   61989 cri.go:89] found id: ""
	I0924 01:05:59.315369   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.315377   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:05:59.315386   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:05:59.315445   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:05:59.347678   61989 cri.go:89] found id: ""
	I0924 01:05:59.347708   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.347718   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:05:59.347726   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:05:59.347786   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:05:59.381296   61989 cri.go:89] found id: ""
	I0924 01:05:59.381328   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.381340   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:05:59.381352   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:05:59.381369   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:05:59.462939   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:05:59.462971   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:05:59.462990   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:05:59.544967   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:05:59.545004   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:05:59.585079   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:05:59.585106   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:05:59.637897   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:05:59.637940   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:00.190924   61070 pod_ready.go:103] pod "coredns-7c65d6cfc9-kshwz" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:02.192627   61070 pod_ready.go:93] pod "coredns-7c65d6cfc9-kshwz" in "kube-system" namespace has status "Ready":"True"
	I0924 01:06:02.192648   61070 pod_ready.go:82] duration metric: took 4.008971718s for pod "coredns-7c65d6cfc9-kshwz" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:02.192658   61070 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:02.198586   61070 pod_ready.go:93] pod "etcd-no-preload-674057" in "kube-system" namespace has status "Ready":"True"
	I0924 01:06:02.198614   61070 pod_ready.go:82] duration metric: took 5.949433ms for pod "etcd-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:02.198627   61070 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:03.205306   61070 pod_ready.go:93] pod "kube-apiserver-no-preload-674057" in "kube-system" namespace has status "Ready":"True"
	I0924 01:06:03.205331   61070 pod_ready.go:82] duration metric: took 1.006696778s for pod "kube-apiserver-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:03.205342   61070 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:00.528770   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:02.529473   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:01.406620   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:03.407024   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:02.153289   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:02.170582   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:02.170679   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:02.216700   61989 cri.go:89] found id: ""
	I0924 01:06:02.216722   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.216730   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:02.216736   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:02.216793   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:02.292664   61989 cri.go:89] found id: ""
	I0924 01:06:02.292695   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.292706   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:02.292714   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:02.292780   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:02.349447   61989 cri.go:89] found id: ""
	I0924 01:06:02.349470   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.349481   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:02.349487   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:02.349557   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:02.390491   61989 cri.go:89] found id: ""
	I0924 01:06:02.390514   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.390535   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:02.390543   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:02.390597   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:02.439330   61989 cri.go:89] found id: ""
	I0924 01:06:02.439355   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.439366   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:02.439373   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:02.439432   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:02.476400   61989 cri.go:89] found id: ""
	I0924 01:06:02.476431   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.476439   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:02.476445   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:02.476501   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:02.511946   61989 cri.go:89] found id: ""
	I0924 01:06:02.511975   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.511983   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:02.511989   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:02.512036   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:02.547526   61989 cri.go:89] found id: ""
	I0924 01:06:02.547554   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.547561   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:02.547570   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:02.547580   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:02.619784   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:02.619805   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:02.619816   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:02.698597   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:02.698636   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:02.741381   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:02.741419   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:02.797965   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:02.798023   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:05.312059   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:05.326556   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:05.326614   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:05.360973   61989 cri.go:89] found id: ""
	I0924 01:06:05.360999   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.361011   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:05.361018   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:05.361101   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:05.394720   61989 cri.go:89] found id: ""
	I0924 01:06:05.394750   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.394760   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:05.394767   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:05.394831   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:05.432564   61989 cri.go:89] found id: ""
	I0924 01:06:05.432592   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.432603   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:05.432611   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:05.432673   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:05.465424   61989 cri.go:89] found id: ""
	I0924 01:06:05.465467   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.465478   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:05.465484   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:05.465555   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:05.503656   61989 cri.go:89] found id: ""
	I0924 01:06:05.503684   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.503693   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:05.503699   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:05.503752   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:05.538128   61989 cri.go:89] found id: ""
	I0924 01:06:05.538160   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.538171   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:05.538179   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:05.538248   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:05.571310   61989 cri.go:89] found id: ""
	I0924 01:06:05.571336   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.571346   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:05.571353   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:05.571416   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:05.604038   61989 cri.go:89] found id: ""
	I0924 01:06:05.604062   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.604070   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:05.604079   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:05.604094   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:05.657025   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:05.657068   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:05.671457   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:05.671483   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:05.747671   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:05.747701   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:05.747718   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:05.833248   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:05.833285   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:05.212622   61070 pod_ready.go:103] pod "kube-controller-manager-no-preload-674057" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:07.711612   61070 pod_ready.go:103] pod "kube-controller-manager-no-preload-674057" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:05.028130   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:07.527525   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:05.407057   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:07.407341   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:09.906549   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:08.372029   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:08.386497   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:08.386564   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:08.422998   61989 cri.go:89] found id: ""
	I0924 01:06:08.423029   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.423039   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:08.423047   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:08.423095   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:08.457009   61989 cri.go:89] found id: ""
	I0924 01:06:08.457037   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.457047   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:08.457052   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:08.457104   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:08.489694   61989 cri.go:89] found id: ""
	I0924 01:06:08.489728   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.489740   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:08.489750   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:08.489804   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:08.521819   61989 cri.go:89] found id: ""
	I0924 01:06:08.521845   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.521856   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:08.521864   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:08.521922   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:08.556422   61989 cri.go:89] found id: ""
	I0924 01:06:08.556453   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.556465   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:08.556472   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:08.556567   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:08.593802   61989 cri.go:89] found id: ""
	I0924 01:06:08.593828   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.593836   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:08.593842   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:08.593932   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:08.627569   61989 cri.go:89] found id: ""
	I0924 01:06:08.627592   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.627600   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:08.627605   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:08.627653   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:08.664728   61989 cri.go:89] found id: ""
	I0924 01:06:08.664758   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.664769   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:08.664780   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:08.664794   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:08.703546   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:08.703577   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:08.755612   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:08.755649   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:08.769957   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:08.769989   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:08.842732   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:08.842762   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:08.842789   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:10.211942   61070 pod_ready.go:93] pod "kube-controller-manager-no-preload-674057" in "kube-system" namespace has status "Ready":"True"
	I0924 01:06:10.211973   61070 pod_ready.go:82] duration metric: took 7.006623705s for pod "kube-controller-manager-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:10.211986   61070 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-fgmwc" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:10.217219   61070 pod_ready.go:93] pod "kube-proxy-fgmwc" in "kube-system" namespace has status "Ready":"True"
	I0924 01:06:10.217247   61070 pod_ready.go:82] duration metric: took 5.254551ms for pod "kube-proxy-fgmwc" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:10.217260   61070 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:10.221959   61070 pod_ready.go:93] pod "kube-scheduler-no-preload-674057" in "kube-system" namespace has status "Ready":"True"
	I0924 01:06:10.221983   61070 pod_ready.go:82] duration metric: took 4.71607ms for pod "kube-scheduler-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:10.221996   61070 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:12.227911   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:09.527831   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:11.527917   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:14.028599   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:11.907394   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:14.407242   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:11.427424   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:11.440709   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:11.440773   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:11.475537   61989 cri.go:89] found id: ""
	I0924 01:06:11.475564   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.475572   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:11.475577   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:11.475633   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:11.512231   61989 cri.go:89] found id: ""
	I0924 01:06:11.512276   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.512285   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:11.512292   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:11.512365   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:11.549809   61989 cri.go:89] found id: ""
	I0924 01:06:11.549840   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.549852   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:11.549858   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:11.549924   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:11.587451   61989 cri.go:89] found id: ""
	I0924 01:06:11.587481   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.587493   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:11.587500   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:11.587558   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:11.625109   61989 cri.go:89] found id: ""
	I0924 01:06:11.625135   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.625146   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:11.625154   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:11.625213   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:11.660577   61989 cri.go:89] found id: ""
	I0924 01:06:11.660604   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.660616   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:11.660624   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:11.660683   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:11.703527   61989 cri.go:89] found id: ""
	I0924 01:06:11.703557   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.703569   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:11.703577   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:11.703646   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:11.740766   61989 cri.go:89] found id: ""
	I0924 01:06:11.740798   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.740810   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:11.740820   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:11.740836   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:11.803402   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:11.803448   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:11.819144   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:11.819178   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:11.896152   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:11.896173   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:11.896187   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:11.986284   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:11.986340   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:14.523669   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:14.537923   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:14.537990   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:14.576092   61989 cri.go:89] found id: ""
	I0924 01:06:14.576128   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.576140   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:14.576148   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:14.576213   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:14.611985   61989 cri.go:89] found id: ""
	I0924 01:06:14.612020   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.612032   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:14.612039   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:14.612098   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:14.647640   61989 cri.go:89] found id: ""
	I0924 01:06:14.647667   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.647675   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:14.647682   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:14.647746   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:14.685089   61989 cri.go:89] found id: ""
	I0924 01:06:14.685128   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.685141   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:14.685150   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:14.685217   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:14.718694   61989 cri.go:89] found id: ""
	I0924 01:06:14.718729   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.718738   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:14.718745   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:14.718810   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:14.754874   61989 cri.go:89] found id: ""
	I0924 01:06:14.754916   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.754928   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:14.754936   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:14.754993   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:14.789580   61989 cri.go:89] found id: ""
	I0924 01:06:14.789608   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.789617   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:14.789625   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:14.789677   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:14.823173   61989 cri.go:89] found id: ""
	I0924 01:06:14.823201   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.823213   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:14.823224   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:14.823238   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:14.878398   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:14.878431   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:14.892466   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:14.892502   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:14.965978   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:14.966010   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:14.966065   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:15.050557   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:15.050600   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:14.231644   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:16.728219   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:16.029325   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:18.527156   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:16.907014   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:19.406893   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
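The interleaved pod_ready lines come from three other test processes (PIDs 61070, 61699, 61323), each polling its profile's metrics-server pod until the Ready condition turns True. A rough stand-alone equivalent of that probe with plain kubectl (a hypothetical one-liner, not the test's own code):

  kubectl -n kube-system get pod metrics-server-6867b74b74-7gbnr \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # prints "False" while the pod is unready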
	I0924 01:06:17.596915   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:17.609585   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:17.609643   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:17.648275   61989 cri.go:89] found id: ""
	I0924 01:06:17.648305   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.648313   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:17.648319   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:17.648447   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:17.681447   61989 cri.go:89] found id: ""
	I0924 01:06:17.681473   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.681484   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:17.681491   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:17.681552   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:17.719202   61989 cri.go:89] found id: ""
	I0924 01:06:17.719226   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.719234   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:17.719240   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:17.719296   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:17.752601   61989 cri.go:89] found id: ""
	I0924 01:06:17.752629   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.752641   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:17.752649   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:17.752700   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:17.789905   61989 cri.go:89] found id: ""
	I0924 01:06:17.789934   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.789945   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:17.789952   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:17.790015   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:17.824174   61989 cri.go:89] found id: ""
	I0924 01:06:17.824205   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.824217   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:17.824237   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:17.824296   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:17.860647   61989 cri.go:89] found id: ""
	I0924 01:06:17.860674   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.860684   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:17.860691   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:17.860750   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:17.896392   61989 cri.go:89] found id: ""
	I0924 01:06:17.896414   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.896423   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:17.896437   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:17.896450   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:17.949230   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:17.949272   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:17.963125   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:17.963183   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:18.035092   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:18.035117   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:18.035134   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:18.117973   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:18.118011   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:20.657044   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:20.669862   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:20.669936   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:20.704672   61989 cri.go:89] found id: ""
	I0924 01:06:20.704703   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.704714   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:20.704722   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:20.704785   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:20.745777   61989 cri.go:89] found id: ""
	I0924 01:06:20.745801   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.745811   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:20.745818   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:20.745879   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:20.779673   61989 cri.go:89] found id: ""
	I0924 01:06:20.779704   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.779740   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:20.779749   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:20.779809   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:20.815959   61989 cri.go:89] found id: ""
	I0924 01:06:20.815983   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.815992   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:20.815998   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:20.816055   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:20.849203   61989 cri.go:89] found id: ""
	I0924 01:06:20.849232   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.849243   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:20.849251   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:20.849319   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:20.884303   61989 cri.go:89] found id: ""
	I0924 01:06:20.884353   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.884365   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:20.884373   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:20.884436   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:20.921217   61989 cri.go:89] found id: ""
	I0924 01:06:20.921242   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.921249   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:20.921255   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:20.921302   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:20.957555   61989 cri.go:89] found id: ""
	I0924 01:06:20.957590   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.957601   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:20.957613   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:20.957628   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:20.972591   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:20.972630   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 01:06:18.728553   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:20.730046   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:23.228040   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:20.527573   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:22.527695   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:21.406963   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:23.907730   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	W0924 01:06:21.046506   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:21.046532   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:21.046547   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:21.129415   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:21.129453   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:21.168899   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:21.168924   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
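Every "describe nodes" attempt in this stream fails the same way: the kubeconfig targets localhost:8443 and nothing is listening there because kube-apiserver never came up. A quick manual check along the same lines (a sketch; assumes you are on the node itself):

  # the exact command the log runs, followed by a direct probe of the apiserver port
  sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
  curl -k https://localhost:8443/healthz   # expect "connection refused" while the apiserver is down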
	I0924 01:06:23.720925   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:23.736893   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:23.736965   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:23.771874   61989 cri.go:89] found id: ""
	I0924 01:06:23.771901   61989 logs.go:276] 0 containers: []
	W0924 01:06:23.771909   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:23.771915   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:23.771976   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:23.806892   61989 cri.go:89] found id: ""
	I0924 01:06:23.806924   61989 logs.go:276] 0 containers: []
	W0924 01:06:23.806936   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:23.806943   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:23.806999   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:23.843661   61989 cri.go:89] found id: ""
	I0924 01:06:23.843686   61989 logs.go:276] 0 containers: []
	W0924 01:06:23.843694   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:23.843700   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:23.843753   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:23.878979   61989 cri.go:89] found id: ""
	I0924 01:06:23.879007   61989 logs.go:276] 0 containers: []
	W0924 01:06:23.879019   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:23.879027   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:23.879086   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:23.913893   61989 cri.go:89] found id: ""
	I0924 01:06:23.913916   61989 logs.go:276] 0 containers: []
	W0924 01:06:23.913925   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:23.913937   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:23.913982   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:23.947932   61989 cri.go:89] found id: ""
	I0924 01:06:23.947961   61989 logs.go:276] 0 containers: []
	W0924 01:06:23.947972   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:23.947980   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:23.948045   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:23.981366   61989 cri.go:89] found id: ""
	I0924 01:06:23.981391   61989 logs.go:276] 0 containers: []
	W0924 01:06:23.981402   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:23.981409   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:23.981467   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:24.014428   61989 cri.go:89] found id: ""
	I0924 01:06:24.014455   61989 logs.go:276] 0 containers: []
	W0924 01:06:24.014463   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:24.014471   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:24.014485   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:24.029585   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:24.029621   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:24.095926   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:24.095955   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:24.095975   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:24.174594   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:24.174635   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:24.213286   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:24.213311   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
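With no containers to inspect, each gathering pass collects the same host-level sources: the kubelet and CRI-O journals, recent dmesg warnings, a (failing) describe-nodes call, and a container-status listing. Reproducing that collection by hand looks roughly like this, using the same commands the log shows:

  sudo journalctl -u kubelet -n 400
  sudo journalctl -u crio -n 400
  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
  sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a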
	I0924 01:06:25.229785   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:27.729021   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:25.027783   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:27.030450   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:26.406776   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:28.907135   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:26.764740   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:26.777184   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:26.777279   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:26.812704   61989 cri.go:89] found id: ""
	I0924 01:06:26.812735   61989 logs.go:276] 0 containers: []
	W0924 01:06:26.812746   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:26.812753   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:26.812811   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:26.849867   61989 cri.go:89] found id: ""
	I0924 01:06:26.849895   61989 logs.go:276] 0 containers: []
	W0924 01:06:26.849904   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:26.849909   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:26.849958   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:26.882856   61989 cri.go:89] found id: ""
	I0924 01:06:26.882878   61989 logs.go:276] 0 containers: []
	W0924 01:06:26.882885   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:26.882891   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:26.882936   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:26.921063   61989 cri.go:89] found id: ""
	I0924 01:06:26.921085   61989 logs.go:276] 0 containers: []
	W0924 01:06:26.921094   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:26.921100   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:26.921156   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:26.961154   61989 cri.go:89] found id: ""
	I0924 01:06:26.961182   61989 logs.go:276] 0 containers: []
	W0924 01:06:26.961194   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:26.961200   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:26.961257   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:26.994560   61989 cri.go:89] found id: ""
	I0924 01:06:26.994593   61989 logs.go:276] 0 containers: []
	W0924 01:06:26.994603   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:26.994612   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:26.994673   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:27.027967   61989 cri.go:89] found id: ""
	I0924 01:06:27.028013   61989 logs.go:276] 0 containers: []
	W0924 01:06:27.028026   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:27.028033   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:27.028096   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:27.063099   61989 cri.go:89] found id: ""
	I0924 01:06:27.063130   61989 logs.go:276] 0 containers: []
	W0924 01:06:27.063142   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:27.063153   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:27.063166   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:27.116237   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:27.116279   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:27.130785   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:27.130815   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:27.201931   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:27.201954   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:27.201970   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:27.282182   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:27.282217   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
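Each polling round begins with a pgrep for a running kube-apiserver process before falling back to crictl. The same probe can be run directly on the node (sketch):

  sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo 'no kube-apiserver process is running'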
	I0924 01:06:29.825403   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:29.838890   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:29.838989   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:29.873651   61989 cri.go:89] found id: ""
	I0924 01:06:29.873678   61989 logs.go:276] 0 containers: []
	W0924 01:06:29.873690   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:29.873698   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:29.873758   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:29.909894   61989 cri.go:89] found id: ""
	I0924 01:06:29.909916   61989 logs.go:276] 0 containers: []
	W0924 01:06:29.909923   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:29.909929   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:29.909978   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:29.944850   61989 cri.go:89] found id: ""
	I0924 01:06:29.944878   61989 logs.go:276] 0 containers: []
	W0924 01:06:29.944886   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:29.944892   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:29.944945   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:29.981486   61989 cri.go:89] found id: ""
	I0924 01:06:29.981515   61989 logs.go:276] 0 containers: []
	W0924 01:06:29.981524   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:29.981532   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:29.981592   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:30.015138   61989 cri.go:89] found id: ""
	I0924 01:06:30.015165   61989 logs.go:276] 0 containers: []
	W0924 01:06:30.015176   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:30.015184   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:30.015256   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:30.051777   61989 cri.go:89] found id: ""
	I0924 01:06:30.051814   61989 logs.go:276] 0 containers: []
	W0924 01:06:30.051825   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:30.051834   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:30.051898   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:30.085573   61989 cri.go:89] found id: ""
	I0924 01:06:30.085598   61989 logs.go:276] 0 containers: []
	W0924 01:06:30.085607   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:30.085612   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:30.085661   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:30.122518   61989 cri.go:89] found id: ""
	I0924 01:06:30.122551   61989 logs.go:276] 0 containers: []
	W0924 01:06:30.122561   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:30.122570   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:30.122585   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:30.199075   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:30.199118   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:30.238259   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:30.238293   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:30.292145   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:30.292185   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:30.306404   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:30.306431   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:30.373959   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:29.729379   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:32.228691   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:29.527089   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:31.527523   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:34.027357   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:30.907575   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:33.407615   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:32.875041   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:32.888358   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:32.888435   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:32.924466   61989 cri.go:89] found id: ""
	I0924 01:06:32.924499   61989 logs.go:276] 0 containers: []
	W0924 01:06:32.924519   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:32.924528   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:32.924584   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:32.960188   61989 cri.go:89] found id: ""
	I0924 01:06:32.960216   61989 logs.go:276] 0 containers: []
	W0924 01:06:32.960224   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:32.960231   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:32.960282   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:32.997612   61989 cri.go:89] found id: ""
	I0924 01:06:32.997641   61989 logs.go:276] 0 containers: []
	W0924 01:06:32.997649   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:32.997655   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:32.997704   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:33.034282   61989 cri.go:89] found id: ""
	I0924 01:06:33.034310   61989 logs.go:276] 0 containers: []
	W0924 01:06:33.034317   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:33.034325   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:33.034381   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:33.073832   61989 cri.go:89] found id: ""
	I0924 01:06:33.073861   61989 logs.go:276] 0 containers: []
	W0924 01:06:33.073870   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:33.073875   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:33.073959   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:33.107276   61989 cri.go:89] found id: ""
	I0924 01:06:33.107303   61989 logs.go:276] 0 containers: []
	W0924 01:06:33.107314   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:33.107323   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:33.107373   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:33.141062   61989 cri.go:89] found id: ""
	I0924 01:06:33.141091   61989 logs.go:276] 0 containers: []
	W0924 01:06:33.141104   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:33.141112   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:33.141174   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:33.177874   61989 cri.go:89] found id: ""
	I0924 01:06:33.177899   61989 logs.go:276] 0 containers: []
	W0924 01:06:33.177908   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:33.177916   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:33.177927   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:33.228324   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:33.228373   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:33.241324   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:33.241350   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:33.313115   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:33.313139   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:33.313151   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:33.392458   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:33.392512   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:35.932822   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:35.945918   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:35.945987   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:34.727948   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:36.728560   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:36.028536   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:38.527308   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:35.906501   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:37.907165   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:35.984400   61989 cri.go:89] found id: ""
	I0924 01:06:35.984438   61989 logs.go:276] 0 containers: []
	W0924 01:06:35.984448   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:35.984456   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:35.984528   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:36.022208   61989 cri.go:89] found id: ""
	I0924 01:06:36.022235   61989 logs.go:276] 0 containers: []
	W0924 01:06:36.022244   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:36.022252   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:36.022336   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:36.059153   61989 cri.go:89] found id: ""
	I0924 01:06:36.059176   61989 logs.go:276] 0 containers: []
	W0924 01:06:36.059184   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:36.059190   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:36.059247   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:36.094375   61989 cri.go:89] found id: ""
	I0924 01:06:36.094413   61989 logs.go:276] 0 containers: []
	W0924 01:06:36.094425   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:36.094434   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:36.094490   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:36.128662   61989 cri.go:89] found id: ""
	I0924 01:06:36.128691   61989 logs.go:276] 0 containers: []
	W0924 01:06:36.128702   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:36.128710   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:36.128762   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:36.160898   61989 cri.go:89] found id: ""
	I0924 01:06:36.160925   61989 logs.go:276] 0 containers: []
	W0924 01:06:36.160937   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:36.160945   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:36.161010   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:36.194421   61989 cri.go:89] found id: ""
	I0924 01:06:36.194448   61989 logs.go:276] 0 containers: []
	W0924 01:06:36.194460   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:36.194468   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:36.194537   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:36.230448   61989 cri.go:89] found id: ""
	I0924 01:06:36.230477   61989 logs.go:276] 0 containers: []
	W0924 01:06:36.230487   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:36.230498   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:36.230511   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:36.303029   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:36.303053   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:36.303067   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:36.406305   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:36.406338   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:36.444044   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:36.444084   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:36.494829   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:36.494873   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:39.009579   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:39.023867   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:39.023943   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:39.057426   61989 cri.go:89] found id: ""
	I0924 01:06:39.057458   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.057469   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:39.057477   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:39.057539   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:39.091421   61989 cri.go:89] found id: ""
	I0924 01:06:39.091444   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.091453   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:39.091459   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:39.091518   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:39.125407   61989 cri.go:89] found id: ""
	I0924 01:06:39.125437   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.125448   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:39.125455   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:39.125525   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:39.157146   61989 cri.go:89] found id: ""
	I0924 01:06:39.157170   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.157181   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:39.157189   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:39.157248   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:39.189474   61989 cri.go:89] found id: ""
	I0924 01:06:39.189501   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.189511   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:39.189518   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:39.189577   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:39.228034   61989 cri.go:89] found id: ""
	I0924 01:06:39.228063   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.228084   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:39.228099   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:39.228158   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:39.268289   61989 cri.go:89] found id: ""
	I0924 01:06:39.268317   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.268345   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:39.268354   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:39.268431   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:39.304964   61989 cri.go:89] found id: ""
	I0924 01:06:39.304988   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.304996   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:39.305005   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:39.305017   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:39.356193   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:39.356234   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:39.370782   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:39.370807   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:39.442395   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:39.442418   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:39.442429   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:39.518426   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:39.518466   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:38.729606   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:41.228528   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:40.528236   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:43.028285   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:40.407021   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:42.906884   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:44.907822   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:42.059895   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:42.092776   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:42.092837   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:42.128508   61989 cri.go:89] found id: ""
	I0924 01:06:42.128534   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.128555   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:42.128565   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:42.128623   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:42.160961   61989 cri.go:89] found id: ""
	I0924 01:06:42.160989   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.161000   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:42.161008   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:42.161072   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:42.194212   61989 cri.go:89] found id: ""
	I0924 01:06:42.194260   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.194272   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:42.194280   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:42.194342   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:42.229284   61989 cri.go:89] found id: ""
	I0924 01:06:42.229312   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.229323   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:42.229331   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:42.229378   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:42.261952   61989 cri.go:89] found id: ""
	I0924 01:06:42.261986   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.261997   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:42.262010   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:42.262059   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:42.297096   61989 cri.go:89] found id: ""
	I0924 01:06:42.297125   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.297133   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:42.297139   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:42.297185   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:42.333066   61989 cri.go:89] found id: ""
	I0924 01:06:42.333095   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.333106   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:42.333114   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:42.333176   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:42.366798   61989 cri.go:89] found id: ""
	I0924 01:06:42.366829   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.366840   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:42.366852   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:42.366865   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:42.419424   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:42.419466   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:42.433814   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:42.433846   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:42.503817   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:42.503845   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:42.503860   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:42.583249   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:42.583289   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:45.123746   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:45.136292   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:45.136377   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:45.174390   61989 cri.go:89] found id: ""
	I0924 01:06:45.174420   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.174441   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:45.174449   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:45.174539   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:45.212394   61989 cri.go:89] found id: ""
	I0924 01:06:45.212422   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.212433   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:45.212441   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:45.212503   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:45.245831   61989 cri.go:89] found id: ""
	I0924 01:06:45.245853   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.245861   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:45.245867   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:45.245922   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:45.277587   61989 cri.go:89] found id: ""
	I0924 01:06:45.277615   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.277626   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:45.277634   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:45.277692   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:45.309715   61989 cri.go:89] found id: ""
	I0924 01:06:45.309749   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.309760   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:45.309768   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:45.309827   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:45.342799   61989 cri.go:89] found id: ""
	I0924 01:06:45.342831   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.342844   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:45.342853   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:45.342921   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:45.375377   61989 cri.go:89] found id: ""
	I0924 01:06:45.375404   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.375415   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:45.375423   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:45.375484   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:45.415395   61989 cri.go:89] found id: ""
	I0924 01:06:45.415422   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.415432   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:45.415444   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:45.415459   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:45.464381   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:45.464416   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:45.478142   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:45.478168   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:45.551211   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:45.551234   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:45.551244   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:45.635255   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:45.635297   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
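The cycle above repeats the same probe for every control-plane component: run crictl with a name filter and treat empty output as "no container found". A minimal standalone sketch of that check is shown below; it is not minikube's own logs.go/cri.go code, and it assumes crictl is on PATH and runnable via sudo on the node.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers runs `sudo crictl ps -a --quiet --name=<name>` and returns
// the container IDs it prints, one per line (an empty slice if none exist).
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	// The same components the log above probes for, in the same order.
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"} {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Printf("error listing %q: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", c)
			continue
		}
		fmt.Printf("%q: %d container(s): %v\n", c, len(ids), ids)
	}
}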
	I0924 01:06:43.728645   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:46.227611   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:48.228320   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:45.028650   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:47.528968   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:47.406822   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:49.407790   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:48.173687   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:48.186635   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:48.186710   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:48.219544   61989 cri.go:89] found id: ""
	I0924 01:06:48.219566   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.219574   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:48.219583   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:48.219654   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:48.253594   61989 cri.go:89] found id: ""
	I0924 01:06:48.253618   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.253627   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:48.253634   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:48.253693   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:48.287991   61989 cri.go:89] found id: ""
	I0924 01:06:48.288019   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.288031   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:48.288041   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:48.288100   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:48.320738   61989 cri.go:89] found id: ""
	I0924 01:06:48.320767   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.320779   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:48.320787   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:48.320847   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:48.352197   61989 cri.go:89] found id: ""
	I0924 01:06:48.352225   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.352233   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:48.352243   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:48.352317   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:48.386157   61989 cri.go:89] found id: ""
	I0924 01:06:48.386187   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.386195   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:48.386202   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:48.386250   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:48.422372   61989 cri.go:89] found id: ""
	I0924 01:06:48.422398   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.422407   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:48.422413   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:48.422463   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:48.464007   61989 cri.go:89] found id: ""
	I0924 01:06:48.464032   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.464043   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:48.464054   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:48.464072   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:48.520533   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:48.520570   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:48.594453   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:48.594489   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:48.607309   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:48.607336   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:48.674078   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:48.674102   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:48.674117   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:50.740093   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:53.228567   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:50.028640   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:52.527656   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:51.906378   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:53.906887   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:51.256855   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:51.270305   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:51.270378   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:51.303450   61989 cri.go:89] found id: ""
	I0924 01:06:51.303487   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.303499   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:51.303508   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:51.303564   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:51.336959   61989 cri.go:89] found id: ""
	I0924 01:06:51.336987   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.337003   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:51.337010   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:51.337072   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:51.369210   61989 cri.go:89] found id: ""
	I0924 01:06:51.369239   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.369249   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:51.369260   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:51.369339   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:51.403595   61989 cri.go:89] found id: ""
	I0924 01:06:51.403645   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.403658   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:51.403666   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:51.403723   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:51.445459   61989 cri.go:89] found id: ""
	I0924 01:06:51.445493   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.445503   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:51.445510   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:51.445574   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:51.477615   61989 cri.go:89] found id: ""
	I0924 01:06:51.477642   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.477653   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:51.477660   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:51.477722   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:51.509737   61989 cri.go:89] found id: ""
	I0924 01:06:51.509766   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.509784   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:51.509792   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:51.509856   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:51.546451   61989 cri.go:89] found id: ""
	I0924 01:06:51.546479   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.546489   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:51.546501   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:51.546515   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:51.600277   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:51.600315   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:51.613403   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:51.613434   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:51.691645   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:51.691669   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:51.691688   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:51.772276   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:51.772312   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:54.313491   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:54.328265   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:54.328374   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:54.368091   61989 cri.go:89] found id: ""
	I0924 01:06:54.368117   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.368126   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:54.368131   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:54.368183   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:54.408272   61989 cri.go:89] found id: ""
	I0924 01:06:54.408300   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.408310   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:54.408318   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:54.408409   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:54.460467   61989 cri.go:89] found id: ""
	I0924 01:06:54.460489   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.460499   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:54.460506   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:54.460564   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:54.493310   61989 cri.go:89] found id: ""
	I0924 01:06:54.493334   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.493343   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:54.493349   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:54.493401   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:54.526772   61989 cri.go:89] found id: ""
	I0924 01:06:54.526799   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.526809   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:54.526817   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:54.526880   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:54.562235   61989 cri.go:89] found id: ""
	I0924 01:06:54.562264   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.562274   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:54.562283   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:54.562345   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:54.597755   61989 cri.go:89] found id: ""
	I0924 01:06:54.597784   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.597794   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:54.597803   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:54.597851   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:54.632225   61989 cri.go:89] found id: ""
	I0924 01:06:54.632282   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.632295   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:54.632305   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:54.632321   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:54.683849   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:54.683887   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:54.697395   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:54.697425   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:54.767577   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:54.767598   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:54.767609   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:54.842619   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:54.842655   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:55.728756   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:58.228520   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:54.528783   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:57.028039   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:59.028234   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:55.907673   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:57.907858   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:57.381394   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:57.394078   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:57.394147   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:57.431241   61989 cri.go:89] found id: ""
	I0924 01:06:57.431266   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.431278   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:57.431284   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:57.431352   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:57.468954   61989 cri.go:89] found id: ""
	I0924 01:06:57.468983   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.468994   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:57.469001   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:57.469060   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:57.503518   61989 cri.go:89] found id: ""
	I0924 01:06:57.503550   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.503562   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:57.503570   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:57.503618   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:57.540432   61989 cri.go:89] found id: ""
	I0924 01:06:57.540464   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.540475   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:57.540483   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:57.540548   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:57.574142   61989 cri.go:89] found id: ""
	I0924 01:06:57.574175   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.574187   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:57.574195   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:57.574264   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:57.608505   61989 cri.go:89] found id: ""
	I0924 01:06:57.608528   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.608537   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:57.608543   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:57.608589   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:57.644273   61989 cri.go:89] found id: ""
	I0924 01:06:57.644305   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.644317   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:57.644344   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:57.644409   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:57.682023   61989 cri.go:89] found id: ""
	I0924 01:06:57.682050   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.682060   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:57.682072   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:57.682086   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:57.732537   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:57.732570   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:57.746632   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:57.746663   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:57.813904   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:57.813927   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:57.813947   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:57.891947   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:57.891992   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:00.432035   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:00.444886   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:00.444966   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:00.482653   61989 cri.go:89] found id: ""
	I0924 01:07:00.482683   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.482694   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:00.482702   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:00.482754   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:00.516404   61989 cri.go:89] found id: ""
	I0924 01:07:00.516441   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.516452   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:00.516463   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:00.516527   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:00.552938   61989 cri.go:89] found id: ""
	I0924 01:07:00.552963   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.552971   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:00.552977   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:00.553043   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:00.589143   61989 cri.go:89] found id: ""
	I0924 01:07:00.589170   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.589178   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:00.589184   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:00.589235   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:00.625023   61989 cri.go:89] found id: ""
	I0924 01:07:00.625047   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.625059   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:00.625066   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:00.625127   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:00.662904   61989 cri.go:89] found id: ""
	I0924 01:07:00.662936   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.662948   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:00.662959   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:00.663022   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:00.702892   61989 cri.go:89] found id: ""
	I0924 01:07:00.702921   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.702932   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:00.702938   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:00.702988   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:00.737010   61989 cri.go:89] found id: ""
	I0924 01:07:00.737039   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.737050   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:00.737061   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:00.737075   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:00.788093   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:00.788132   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:00.801354   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:00.801382   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:00.866830   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:00.866862   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:00.866878   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:00.950034   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:00.950076   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:00.728279   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:03.227980   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:01.527849   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:04.027729   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:00.406445   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:02.407048   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:04.907569   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:03.492773   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:03.506158   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:03.506224   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:03.542369   61989 cri.go:89] found id: ""
	I0924 01:07:03.542397   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.542408   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:03.542416   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:03.542473   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:03.575019   61989 cri.go:89] found id: ""
	I0924 01:07:03.575046   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.575055   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:03.575060   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:03.575103   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:03.608576   61989 cri.go:89] found id: ""
	I0924 01:07:03.608603   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.608612   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:03.608619   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:03.608684   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:03.642359   61989 cri.go:89] found id: ""
	I0924 01:07:03.642389   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.642400   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:03.642407   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:03.642463   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:03.678192   61989 cri.go:89] found id: ""
	I0924 01:07:03.678216   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.678223   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:03.678229   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:03.678285   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:03.711773   61989 cri.go:89] found id: ""
	I0924 01:07:03.711795   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.711803   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:03.711809   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:03.711856   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:03.747792   61989 cri.go:89] found id: ""
	I0924 01:07:03.747819   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.747830   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:03.747838   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:03.747901   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:03.783284   61989 cri.go:89] found id: ""
	I0924 01:07:03.783312   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.783320   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:03.783331   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:03.783349   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:03.838704   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:03.838745   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:03.852650   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:03.852675   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:03.922474   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:03.922499   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:03.922511   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:03.997349   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:03.997388   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
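Every "describe nodes" attempt in this log fails with "The connection to the server localhost:8443 was refused", i.e. nothing is listening on the apiserver port. A plain TCP dial confirms that independently of kubectl; the sketch below is illustrative only and assumes it runs on the node itself.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The apiserver endpoint the failing kubectl calls are pointed at.
	addr := "localhost:8443"

	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		// A "connection refused" here matches the kubectl error in the log:
		// no process is accepting connections on the apiserver port.
		fmt.Printf("apiserver not reachable at %s: %v\n", addr, err)
		return
	}
	defer conn.Close()
	fmt.Printf("something is listening at %s\n", addr)
}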
	I0924 01:07:05.228357   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:07.228789   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:06.028604   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:08.527156   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:06.908041   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:09.406803   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
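The interleaved pod_ready lines come from three other test processes polling whether their metrics-server pod has reached the Ready condition. A rough kubectl-based equivalent of that polling loop follows, purely as an illustration; the kubeconfig path is a placeholder and the pod name is just one of the pods named in the log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady asks kubectl for the pod's Ready condition status ("True"/"False").
func podReady(kubeconfig, namespace, pod string) (bool, error) {
	out, err := exec.Command("kubectl", "--kubeconfig", kubeconfig,
		"-n", namespace, "get", "pod", pod,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	// Placeholder kubeconfig; the pod name is taken from the log above.
	kubeconfig := "/path/to/kubeconfig"
	pod := "metrics-server-6867b74b74-pc28v"

	for i := 0; i < 10; i++ {
		ready, err := podReady(kubeconfig, "kube-system", pod)
		switch {
		case err != nil:
			fmt.Printf("check failed: %v\n", err)
		case ready:
			fmt.Println("pod is Ready")
			return
		default:
			fmt.Println(`pod has status "Ready":"False"`)
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for the pod to become Ready")
}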
	I0924 01:07:06.537182   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:06.549745   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:06.549833   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:06.587879   61989 cri.go:89] found id: ""
	I0924 01:07:06.587910   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.587922   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:06.587930   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:06.587984   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:06.623419   61989 cri.go:89] found id: ""
	I0924 01:07:06.623447   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.623456   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:06.623462   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:06.623542   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:06.659228   61989 cri.go:89] found id: ""
	I0924 01:07:06.659260   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.659272   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:06.659280   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:06.659341   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:06.693300   61989 cri.go:89] found id: ""
	I0924 01:07:06.693330   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.693341   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:06.693349   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:06.693399   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:06.726237   61989 cri.go:89] found id: ""
	I0924 01:07:06.726267   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.726278   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:06.726286   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:06.726342   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:06.760627   61989 cri.go:89] found id: ""
	I0924 01:07:06.760659   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.760670   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:06.760677   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:06.760745   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:06.796029   61989 cri.go:89] found id: ""
	I0924 01:07:06.796062   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.796073   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:06.796081   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:06.796136   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:06.830197   61989 cri.go:89] found id: ""
	I0924 01:07:06.830230   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.830241   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:06.830251   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:06.830265   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:06.869055   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:06.869087   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:06.923840   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:06.923888   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:06.937510   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:06.937549   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:07.011461   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:07.011482   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:07.011496   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:09.591186   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:09.603900   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:09.603970   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:09.639003   61989 cri.go:89] found id: ""
	I0924 01:07:09.639035   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.639046   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:09.639055   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:09.639111   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:09.676494   61989 cri.go:89] found id: ""
	I0924 01:07:09.676528   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.676539   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:09.676547   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:09.676616   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:09.713080   61989 cri.go:89] found id: ""
	I0924 01:07:09.713103   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.713111   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:09.713117   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:09.713174   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:09.748425   61989 cri.go:89] found id: ""
	I0924 01:07:09.748449   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.748458   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:09.748465   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:09.748521   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:09.782526   61989 cri.go:89] found id: ""
	I0924 01:07:09.782559   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.782576   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:09.782584   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:09.782647   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:09.819137   61989 cri.go:89] found id: ""
	I0924 01:07:09.819159   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.819167   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:09.819173   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:09.819256   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:09.852953   61989 cri.go:89] found id: ""
	I0924 01:07:09.852976   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.852984   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:09.852989   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:09.853083   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:09.887254   61989 cri.go:89] found id: ""
	I0924 01:07:09.887282   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.887293   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:09.887304   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:09.887318   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:09.940029   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:09.940069   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:09.954298   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:09.954331   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:10.028926   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:10.028947   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:10.028957   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:10.116722   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:10.116761   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:09.728996   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:12.228342   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:10.527637   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:12.528324   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:11.410452   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:13.906451   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:12.654245   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:12.668635   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:12.668695   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:12.711575   61989 cri.go:89] found id: ""
	I0924 01:07:12.711601   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.711626   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:12.711632   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:12.711682   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:12.746104   61989 cri.go:89] found id: ""
	I0924 01:07:12.746131   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.746141   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:12.746149   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:12.746210   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:12.780229   61989 cri.go:89] found id: ""
	I0924 01:07:12.780260   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.780295   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:12.780303   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:12.780384   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:12.812968   61989 cri.go:89] found id: ""
	I0924 01:07:12.812998   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.813010   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:12.813024   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:12.813090   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:12.844212   61989 cri.go:89] found id: ""
	I0924 01:07:12.844241   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.844253   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:12.844260   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:12.844343   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:12.878662   61989 cri.go:89] found id: ""
	I0924 01:07:12.878690   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.878700   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:12.878707   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:12.878765   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:12.912782   61989 cri.go:89] found id: ""
	I0924 01:07:12.912805   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.912815   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:12.912822   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:12.912883   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:12.945694   61989 cri.go:89] found id: ""
	I0924 01:07:12.945726   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.945736   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:12.945747   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:12.945761   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:12.994841   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:12.994877   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:13.009582   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:13.009624   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:13.081972   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:13.081999   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:13.082017   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:13.162383   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:13.162420   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:15.704586   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:15.717608   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:15.717677   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:15.751794   61989 cri.go:89] found id: ""
	I0924 01:07:15.751829   61989 logs.go:276] 0 containers: []
	W0924 01:07:15.751840   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:15.751848   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:15.751916   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:15.791691   61989 cri.go:89] found id: ""
	I0924 01:07:15.791723   61989 logs.go:276] 0 containers: []
	W0924 01:07:15.791734   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:15.791742   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:15.791805   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:15.827934   61989 cri.go:89] found id: ""
	I0924 01:07:15.827957   61989 logs.go:276] 0 containers: []
	W0924 01:07:15.827965   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:15.827971   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:15.828017   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:15.862489   61989 cri.go:89] found id: ""
	I0924 01:07:15.862518   61989 logs.go:276] 0 containers: []
	W0924 01:07:15.862527   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:15.862532   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:15.862577   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:15.896754   61989 cri.go:89] found id: ""
	I0924 01:07:15.896786   61989 logs.go:276] 0 containers: []
	W0924 01:07:15.896798   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:15.896804   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:15.896857   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:15.934353   61989 cri.go:89] found id: ""
	I0924 01:07:15.934378   61989 logs.go:276] 0 containers: []
	W0924 01:07:15.934386   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:15.934392   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:15.934436   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:15.969204   61989 cri.go:89] found id: ""
	I0924 01:07:15.969237   61989 logs.go:276] 0 containers: []
	W0924 01:07:15.969246   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:15.969251   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:15.969309   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:14.228949   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:16.728382   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:15.027681   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:17.027847   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:15.907872   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:18.407563   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:16.008733   61989 cri.go:89] found id: ""
	I0924 01:07:16.008767   61989 logs.go:276] 0 containers: []
	W0924 01:07:16.008780   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:16.008792   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:16.008807   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:16.046993   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:16.047024   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:16.098768   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:16.098801   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:16.114429   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:16.114472   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:16.187450   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:16.187469   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:16.187489   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:18.767042   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:18.779825   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:18.779899   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:18.815410   61989 cri.go:89] found id: ""
	I0924 01:07:18.815436   61989 logs.go:276] 0 containers: []
	W0924 01:07:18.815447   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:18.815454   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:18.815523   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:18.849837   61989 cri.go:89] found id: ""
	I0924 01:07:18.849862   61989 logs.go:276] 0 containers: []
	W0924 01:07:18.849872   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:18.849880   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:18.849952   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:18.885183   61989 cri.go:89] found id: ""
	I0924 01:07:18.885215   61989 logs.go:276] 0 containers: []
	W0924 01:07:18.885227   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:18.885235   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:18.885314   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:18.922263   61989 cri.go:89] found id: ""
	I0924 01:07:18.922293   61989 logs.go:276] 0 containers: []
	W0924 01:07:18.922305   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:18.922312   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:18.922378   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:18.957235   61989 cri.go:89] found id: ""
	I0924 01:07:18.957263   61989 logs.go:276] 0 containers: []
	W0924 01:07:18.957272   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:18.957278   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:18.957331   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:18.989846   61989 cri.go:89] found id: ""
	I0924 01:07:18.989870   61989 logs.go:276] 0 containers: []
	W0924 01:07:18.989878   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:18.989884   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:18.989931   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:19.027264   61989 cri.go:89] found id: ""
	I0924 01:07:19.027298   61989 logs.go:276] 0 containers: []
	W0924 01:07:19.027308   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:19.027315   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:19.027373   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:19.065902   61989 cri.go:89] found id: ""
	I0924 01:07:19.065925   61989 logs.go:276] 0 containers: []
	W0924 01:07:19.065934   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:19.065944   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:19.065959   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:19.115515   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:19.115550   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:19.129761   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:19.129787   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:19.200299   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:19.200319   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:19.200351   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:19.282308   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:19.282360   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:18.732314   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:21.227773   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:23.228957   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:19.528117   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:22.028965   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:20.906860   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:23.407404   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:21.819442   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:21.834106   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:21.834165   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:21.866953   61989 cri.go:89] found id: ""
	I0924 01:07:21.866988   61989 logs.go:276] 0 containers: []
	W0924 01:07:21.866999   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:21.867008   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:21.867085   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:21.902561   61989 cri.go:89] found id: ""
	I0924 01:07:21.902637   61989 logs.go:276] 0 containers: []
	W0924 01:07:21.902654   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:21.902663   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:21.902729   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:21.936883   61989 cri.go:89] found id: ""
	I0924 01:07:21.936926   61989 logs.go:276] 0 containers: []
	W0924 01:07:21.936937   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:21.936943   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:21.936995   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:21.975375   61989 cri.go:89] found id: ""
	I0924 01:07:21.975402   61989 logs.go:276] 0 containers: []
	W0924 01:07:21.975411   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:21.975417   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:21.975465   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:22.012782   61989 cri.go:89] found id: ""
	I0924 01:07:22.012811   61989 logs.go:276] 0 containers: []
	W0924 01:07:22.012822   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:22.012830   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:22.012890   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:22.049344   61989 cri.go:89] found id: ""
	I0924 01:07:22.049370   61989 logs.go:276] 0 containers: []
	W0924 01:07:22.049379   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:22.049385   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:22.049442   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:22.088187   61989 cri.go:89] found id: ""
	I0924 01:07:22.088219   61989 logs.go:276] 0 containers: []
	W0924 01:07:22.088230   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:22.088239   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:22.088324   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:22.123357   61989 cri.go:89] found id: ""
	I0924 01:07:22.123386   61989 logs.go:276] 0 containers: []
	W0924 01:07:22.123397   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:22.123408   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:22.123423   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:22.176794   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:22.176828   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:22.192550   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:22.192591   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:22.263854   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:22.263881   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:22.263898   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:22.341735   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:22.341778   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:24.879834   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:24.892429   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:24.892504   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:24.926600   61989 cri.go:89] found id: ""
	I0924 01:07:24.926629   61989 logs.go:276] 0 containers: []
	W0924 01:07:24.926636   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:24.926642   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:24.926689   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:24.960370   61989 cri.go:89] found id: ""
	I0924 01:07:24.960399   61989 logs.go:276] 0 containers: []
	W0924 01:07:24.960408   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:24.960415   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:24.960471   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:24.993503   61989 cri.go:89] found id: ""
	I0924 01:07:24.993532   61989 logs.go:276] 0 containers: []
	W0924 01:07:24.993542   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:24.993549   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:24.993611   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:25.028027   61989 cri.go:89] found id: ""
	I0924 01:07:25.028055   61989 logs.go:276] 0 containers: []
	W0924 01:07:25.028065   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:25.028073   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:25.028129   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:25.062947   61989 cri.go:89] found id: ""
	I0924 01:07:25.062981   61989 logs.go:276] 0 containers: []
	W0924 01:07:25.062999   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:25.063009   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:25.063077   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:25.098895   61989 cri.go:89] found id: ""
	I0924 01:07:25.098927   61989 logs.go:276] 0 containers: []
	W0924 01:07:25.098939   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:25.098946   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:25.098996   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:25.132786   61989 cri.go:89] found id: ""
	I0924 01:07:25.132814   61989 logs.go:276] 0 containers: []
	W0924 01:07:25.132824   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:25.132830   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:25.132882   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:25.167603   61989 cri.go:89] found id: ""
	I0924 01:07:25.167634   61989 logs.go:276] 0 containers: []
	W0924 01:07:25.167644   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:25.167656   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:25.167671   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:25.220265   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:25.220303   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:25.234840   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:25.234884   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:25.307459   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:25.307485   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:25.307499   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:25.386496   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:25.386537   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:25.229188   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:27.728978   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:24.531829   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:27.027182   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:29.029000   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:25.907018   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:28.406555   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:27.926064   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:27.939398   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:27.939480   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:27.976184   61989 cri.go:89] found id: ""
	I0924 01:07:27.976215   61989 logs.go:276] 0 containers: []
	W0924 01:07:27.976256   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:27.976265   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:27.976348   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:28.009389   61989 cri.go:89] found id: ""
	I0924 01:07:28.009419   61989 logs.go:276] 0 containers: []
	W0924 01:07:28.009431   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:28.009438   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:28.009501   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:28.045562   61989 cri.go:89] found id: ""
	I0924 01:07:28.045594   61989 logs.go:276] 0 containers: []
	W0924 01:07:28.045605   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:28.045613   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:28.045677   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:28.085318   61989 cri.go:89] found id: ""
	I0924 01:07:28.085345   61989 logs.go:276] 0 containers: []
	W0924 01:07:28.085357   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:28.085364   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:28.085419   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:28.119582   61989 cri.go:89] found id: ""
	I0924 01:07:28.119607   61989 logs.go:276] 0 containers: []
	W0924 01:07:28.119617   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:28.119626   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:28.119690   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:28.151445   61989 cri.go:89] found id: ""
	I0924 01:07:28.151493   61989 logs.go:276] 0 containers: []
	W0924 01:07:28.151505   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:28.151513   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:28.151578   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:28.185966   61989 cri.go:89] found id: ""
	I0924 01:07:28.185997   61989 logs.go:276] 0 containers: []
	W0924 01:07:28.186009   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:28.186016   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:28.186078   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:28.219012   61989 cri.go:89] found id: ""
	I0924 01:07:28.219037   61989 logs.go:276] 0 containers: []
	W0924 01:07:28.219044   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:28.219052   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:28.219089   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:28.272186   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:28.272222   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:28.286346   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:28.286383   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:28.370949   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:28.370975   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:28.370985   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:28.453740   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:28.453775   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:30.229141   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:32.728919   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:31.527080   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:34.028315   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:30.407040   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:32.407075   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:34.407711   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:30.993536   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:31.006297   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:31.006369   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:31.042081   61989 cri.go:89] found id: ""
	I0924 01:07:31.042114   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.042123   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:31.042129   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:31.042185   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:31.077119   61989 cri.go:89] found id: ""
	I0924 01:07:31.077144   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.077153   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:31.077159   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:31.077208   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:31.110148   61989 cri.go:89] found id: ""
	I0924 01:07:31.110179   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.110187   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:31.110193   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:31.110246   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:31.143551   61989 cri.go:89] found id: ""
	I0924 01:07:31.143578   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.143585   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:31.143591   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:31.143638   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:31.177212   61989 cri.go:89] found id: ""
	I0924 01:07:31.177262   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.177272   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:31.177279   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:31.177329   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:31.209290   61989 cri.go:89] found id: ""
	I0924 01:07:31.209321   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.209332   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:31.209340   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:31.209398   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:31.247299   61989 cri.go:89] found id: ""
	I0924 01:07:31.247334   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.247346   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:31.247355   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:31.247419   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:31.285010   61989 cri.go:89] found id: ""
	I0924 01:07:31.285047   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.285060   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:31.285072   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:31.285087   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:31.323819   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:31.323855   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:31.378348   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:31.378388   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:31.393944   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:31.393983   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:31.464940   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:31.464966   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:31.464978   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:34.042144   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:34.055183   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:34.055268   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:34.103044   61989 cri.go:89] found id: ""
	I0924 01:07:34.103075   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.103086   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:34.103094   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:34.103162   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:34.141379   61989 cri.go:89] found id: ""
	I0924 01:07:34.141412   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.141424   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:34.141432   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:34.141493   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:34.179545   61989 cri.go:89] found id: ""
	I0924 01:07:34.179574   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.179582   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:34.179588   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:34.179655   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:34.217683   61989 cri.go:89] found id: ""
	I0924 01:07:34.217719   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.217739   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:34.217748   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:34.217806   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:34.257597   61989 cri.go:89] found id: ""
	I0924 01:07:34.257630   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.257642   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:34.257651   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:34.257723   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:34.295410   61989 cri.go:89] found id: ""
	I0924 01:07:34.295440   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.295452   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:34.295460   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:34.295523   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:34.331309   61989 cri.go:89] found id: ""
	I0924 01:07:34.331340   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.331350   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:34.331358   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:34.331460   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:34.367549   61989 cri.go:89] found id: ""
	I0924 01:07:34.367580   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.367590   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:34.367601   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:34.367615   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:34.421785   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:34.421823   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:34.435162   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:34.435198   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:34.504051   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:34.504073   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:34.504090   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:34.582343   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:34.582384   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:35.229391   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:37.229522   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:36.527047   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:38.527472   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:36.906974   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:38.907529   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:37.124727   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:37.139374   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:37.139431   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:37.176474   61989 cri.go:89] found id: ""
	I0924 01:07:37.176500   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.176510   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:37.176515   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:37.176560   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:37.209944   61989 cri.go:89] found id: ""
	I0924 01:07:37.209971   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.209983   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:37.209990   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:37.210055   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:37.242894   61989 cri.go:89] found id: ""
	I0924 01:07:37.242923   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.242933   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:37.242941   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:37.242996   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:37.276517   61989 cri.go:89] found id: ""
	I0924 01:07:37.276547   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.276558   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:37.276566   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:37.276626   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:37.310169   61989 cri.go:89] found id: ""
	I0924 01:07:37.310196   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.310207   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:37.310214   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:37.310282   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:37.342992   61989 cri.go:89] found id: ""
	I0924 01:07:37.343019   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.343027   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:37.343035   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:37.343088   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:37.375024   61989 cri.go:89] found id: ""
	I0924 01:07:37.375051   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.375062   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:37.375069   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:37.375137   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:37.409736   61989 cri.go:89] found id: ""
	I0924 01:07:37.409761   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.409768   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:37.409776   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:37.409787   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:37.474744   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:37.474767   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:37.474783   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:37.551479   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:37.551515   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:37.590597   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:37.590632   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:37.642781   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:37.642820   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:40.156480   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:40.171002   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:40.171079   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:40.207383   61989 cri.go:89] found id: ""
	I0924 01:07:40.207410   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.207418   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:40.207424   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:40.207474   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:40.245535   61989 cri.go:89] found id: ""
	I0924 01:07:40.245560   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.245568   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:40.245574   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:40.245620   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:40.283858   61989 cri.go:89] found id: ""
	I0924 01:07:40.283888   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.283900   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:40.283909   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:40.283982   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:40.320527   61989 cri.go:89] found id: ""
	I0924 01:07:40.320555   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.320566   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:40.320575   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:40.320633   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:40.354364   61989 cri.go:89] found id: ""
	I0924 01:07:40.354390   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.354397   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:40.354403   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:40.354473   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:40.388407   61989 cri.go:89] found id: ""
	I0924 01:07:40.388431   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.388439   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:40.388444   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:40.388512   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:40.423809   61989 cri.go:89] found id: ""
	I0924 01:07:40.423838   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.423847   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:40.423853   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:40.423908   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:40.459160   61989 cri.go:89] found id: ""
	I0924 01:07:40.459188   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.459199   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:40.459210   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:40.459223   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:40.530418   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:40.530456   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:40.551644   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:40.551683   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:40.634564   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:40.634587   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:40.634599   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:40.717897   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:40.717934   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:39.728642   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:41.728725   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:40.528294   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:43.028364   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:41.406835   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:43.907015   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:43.257992   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:43.272134   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:43.272204   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:43.306747   61989 cri.go:89] found id: ""
	I0924 01:07:43.306775   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.306797   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:43.306806   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:43.306923   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:43.342922   61989 cri.go:89] found id: ""
	I0924 01:07:43.342954   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.342963   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:43.342974   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:43.343028   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:43.378666   61989 cri.go:89] found id: ""
	I0924 01:07:43.378694   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.378703   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:43.378709   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:43.378760   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:43.414348   61989 cri.go:89] found id: ""
	I0924 01:07:43.414376   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.414387   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:43.414395   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:43.414457   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:43.447687   61989 cri.go:89] found id: ""
	I0924 01:07:43.447718   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.447728   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:43.447735   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:43.447804   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:43.482166   61989 cri.go:89] found id: ""
	I0924 01:07:43.482195   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.482205   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:43.482211   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:43.482275   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:43.518112   61989 cri.go:89] found id: ""
	I0924 01:07:43.518146   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.518159   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:43.518167   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:43.518231   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:43.553853   61989 cri.go:89] found id: ""
	I0924 01:07:43.553875   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.553883   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:43.553891   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:43.553902   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:43.603410   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:43.603445   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:43.616413   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:43.616438   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:43.685077   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:43.685101   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:43.685113   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:43.760758   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:43.760803   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:43.729237   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:46.228084   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:48.228503   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:45.527095   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:47.529540   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:46.407150   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:48.407253   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:46.300532   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:46.315982   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:46.316050   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:46.356523   61989 cri.go:89] found id: ""
	I0924 01:07:46.356554   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.356565   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:46.356573   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:46.356633   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:46.405399   61989 cri.go:89] found id: ""
	I0924 01:07:46.405429   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.405439   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:46.405447   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:46.405512   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:46.454819   61989 cri.go:89] found id: ""
	I0924 01:07:46.454844   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.454853   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:46.454858   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:46.454918   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:46.499094   61989 cri.go:89] found id: ""
	I0924 01:07:46.499123   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.499134   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:46.499142   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:46.499196   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:46.532976   61989 cri.go:89] found id: ""
	I0924 01:07:46.533006   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.533017   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:46.533025   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:46.533083   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:46.565488   61989 cri.go:89] found id: ""
	I0924 01:07:46.565523   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.565534   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:46.565546   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:46.565610   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:46.598457   61989 cri.go:89] found id: ""
	I0924 01:07:46.598486   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.598496   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:46.598503   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:46.598551   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:46.631892   61989 cri.go:89] found id: ""
	I0924 01:07:46.631920   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.631931   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:46.631941   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:46.631956   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:46.709966   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:46.710013   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:46.749154   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:46.749184   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:46.798192   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:46.798228   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:46.811902   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:46.811951   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:46.885878   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:49.386775   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:49.399324   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:49.399383   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:49.437061   61989 cri.go:89] found id: ""
	I0924 01:07:49.437092   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.437104   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:49.437111   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:49.437160   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:49.470882   61989 cri.go:89] found id: ""
	I0924 01:07:49.470908   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.470919   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:49.470927   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:49.470989   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:49.506894   61989 cri.go:89] found id: ""
	I0924 01:07:49.506926   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.506938   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:49.506947   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:49.507018   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:49.540768   61989 cri.go:89] found id: ""
	I0924 01:07:49.540800   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.540813   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:49.540822   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:49.540888   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:49.576486   61989 cri.go:89] found id: ""
	I0924 01:07:49.576515   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.576523   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:49.576530   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:49.576579   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:49.612456   61989 cri.go:89] found id: ""
	I0924 01:07:49.612479   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.612487   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:49.612495   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:49.612542   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:49.646085   61989 cri.go:89] found id: ""
	I0924 01:07:49.646118   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.646127   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:49.646132   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:49.646178   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:49.682538   61989 cri.go:89] found id: ""
	I0924 01:07:49.682565   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.682574   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:49.682583   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:49.682594   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:49.721791   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:49.721817   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:49.774842   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:49.774889   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:49.789082   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:49.789129   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:49.866437   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:49.866464   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:49.866478   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:50.727581   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:52.729391   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:50.027396   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:52.028176   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:50.407654   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:52.908118   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:52.445166   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:52.459060   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:52.459126   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:52.496521   61989 cri.go:89] found id: ""
	I0924 01:07:52.496550   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.496562   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:52.496571   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:52.496652   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:52.533575   61989 cri.go:89] found id: ""
	I0924 01:07:52.533600   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.533608   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:52.533615   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:52.533693   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:52.571666   61989 cri.go:89] found id: ""
	I0924 01:07:52.571693   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.571703   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:52.571710   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:52.571758   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:52.603929   61989 cri.go:89] found id: ""
	I0924 01:07:52.603957   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.603968   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:52.603976   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:52.604034   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:52.635581   61989 cri.go:89] found id: ""
	I0924 01:07:52.635607   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.635614   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:52.635620   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:52.635669   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:52.673865   61989 cri.go:89] found id: ""
	I0924 01:07:52.673889   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.673897   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:52.673903   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:52.673953   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:52.709885   61989 cri.go:89] found id: ""
	I0924 01:07:52.709910   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.709918   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:52.709925   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:52.709986   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:52.746409   61989 cri.go:89] found id: ""
	I0924 01:07:52.746439   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.746450   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:52.746461   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:52.746475   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:52.798020   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:52.798054   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:52.811940   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:52.811967   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:52.888091   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:52.888114   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:52.888129   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:52.968955   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:52.969000   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:55.507204   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:55.520581   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:55.520657   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:55.555772   61989 cri.go:89] found id: ""
	I0924 01:07:55.555809   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.555821   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:55.555828   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:55.555880   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:55.593765   61989 cri.go:89] found id: ""
	I0924 01:07:55.593791   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.593802   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:55.593808   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:55.593866   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:55.630292   61989 cri.go:89] found id: ""
	I0924 01:07:55.630325   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.630337   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:55.630344   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:55.630408   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:55.665703   61989 cri.go:89] found id: ""
	I0924 01:07:55.665730   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.665741   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:55.665748   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:55.665807   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:55.701911   61989 cri.go:89] found id: ""
	I0924 01:07:55.701938   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.701949   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:55.701957   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:55.702020   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:55.734343   61989 cri.go:89] found id: ""
	I0924 01:07:55.734373   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.734385   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:55.734394   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:55.734460   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:55.768606   61989 cri.go:89] found id: ""
	I0924 01:07:55.768633   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.768645   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:55.768653   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:55.768716   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:55.800720   61989 cri.go:89] found id: ""
	I0924 01:07:55.800747   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.800757   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:55.800768   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:55.800782   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:55.851702   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:55.851737   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:55.865657   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:55.865687   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:55.940175   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:55.940197   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:55.940207   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:55.227954   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:57.228969   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:54.528417   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:56.529326   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:59.027653   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:55.407038   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:57.906886   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:56.015832   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:56.015870   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:58.557571   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:58.572208   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:58.572274   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:58.605081   61989 cri.go:89] found id: ""
	I0924 01:07:58.605109   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.605121   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:58.605128   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:58.605185   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:58.641518   61989 cri.go:89] found id: ""
	I0924 01:07:58.641548   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.641559   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:58.641566   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:58.641617   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:58.680623   61989 cri.go:89] found id: ""
	I0924 01:07:58.680653   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.680664   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:58.680675   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:58.680735   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:58.713658   61989 cri.go:89] found id: ""
	I0924 01:07:58.713684   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.713693   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:58.713700   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:58.713754   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:58.746264   61989 cri.go:89] found id: ""
	I0924 01:07:58.746298   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.746307   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:58.746313   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:58.746358   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:58.779812   61989 cri.go:89] found id: ""
	I0924 01:07:58.779846   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.779912   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:58.779924   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:58.779984   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:58.813203   61989 cri.go:89] found id: ""
	I0924 01:07:58.813236   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.813245   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:58.813252   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:58.813303   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:58.845872   61989 cri.go:89] found id: ""
	I0924 01:07:58.845898   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.845906   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:58.845915   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:58.845925   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:58.897480   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:58.897515   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:58.912904   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:58.912936   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:58.982882   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:58.982908   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:58.982921   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:59.058495   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:59.058535   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:59.729215   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:02.228358   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:01.028678   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:03.527682   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:00.407897   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:02.907608   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:04.907717   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:01.596672   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:01.609550   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:01.609625   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:01.648819   61989 cri.go:89] found id: ""
	I0924 01:08:01.648847   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.648857   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:01.648864   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:01.649000   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:01.685419   61989 cri.go:89] found id: ""
	I0924 01:08:01.685450   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.685458   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:01.685464   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:01.685533   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:01.720426   61989 cri.go:89] found id: ""
	I0924 01:08:01.720455   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.720464   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:01.720473   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:01.720537   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:01.755292   61989 cri.go:89] found id: ""
	I0924 01:08:01.755316   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.755324   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:01.755331   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:01.755398   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:01.788673   61989 cri.go:89] found id: ""
	I0924 01:08:01.788703   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.788713   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:01.788721   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:01.788789   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:01.824724   61989 cri.go:89] found id: ""
	I0924 01:08:01.824761   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.824773   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:01.824781   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:01.824838   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:01.858492   61989 cri.go:89] found id: ""
	I0924 01:08:01.858531   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.858542   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:01.858556   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:01.858623   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:01.892135   61989 cri.go:89] found id: ""
	I0924 01:08:01.892167   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.892177   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:01.892192   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:01.892205   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:01.905820   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:01.905849   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:01.977998   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:01.978026   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:01.978039   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:02.060441   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:02.060480   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:02.100029   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:02.100057   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:04.653124   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:04.665726   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:04.665784   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:04.700755   61989 cri.go:89] found id: ""
	I0924 01:08:04.700785   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.700796   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:04.700804   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:04.700858   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:04.736955   61989 cri.go:89] found id: ""
	I0924 01:08:04.736983   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.736992   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:04.736998   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:04.737051   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:04.770940   61989 cri.go:89] found id: ""
	I0924 01:08:04.770969   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.770977   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:04.770983   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:04.771051   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:04.805376   61989 cri.go:89] found id: ""
	I0924 01:08:04.805403   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.805411   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:04.805417   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:04.805471   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:04.840995   61989 cri.go:89] found id: ""
	I0924 01:08:04.841016   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.841024   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:04.841030   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:04.841077   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:04.875418   61989 cri.go:89] found id: ""
	I0924 01:08:04.875449   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.875460   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:04.875468   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:04.875546   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:04.910675   61989 cri.go:89] found id: ""
	I0924 01:08:04.910696   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.910704   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:04.910710   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:04.910764   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:04.945531   61989 cri.go:89] found id: ""
	I0924 01:08:04.945562   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.945570   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:04.945578   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:04.945589   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:04.997696   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:04.997734   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:05.011296   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:05.011329   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:05.087878   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:05.087905   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:05.087919   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:05.164073   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:05.164111   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:04.228985   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:06.734525   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:06.031377   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:08.528160   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:06.908017   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:09.407255   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:07.713496   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:07.726590   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:07.726649   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:07.760050   61989 cri.go:89] found id: ""
	I0924 01:08:07.760081   61989 logs.go:276] 0 containers: []
	W0924 01:08:07.760092   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:07.760100   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:07.760152   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:07.797709   61989 cri.go:89] found id: ""
	I0924 01:08:07.797736   61989 logs.go:276] 0 containers: []
	W0924 01:08:07.797744   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:07.797749   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:07.797803   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:07.836351   61989 cri.go:89] found id: ""
	I0924 01:08:07.836380   61989 logs.go:276] 0 containers: []
	W0924 01:08:07.836391   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:07.836399   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:07.836471   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:07.871133   61989 cri.go:89] found id: ""
	I0924 01:08:07.871159   61989 logs.go:276] 0 containers: []
	W0924 01:08:07.871170   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:07.871178   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:07.871229   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:07.906640   61989 cri.go:89] found id: ""
	I0924 01:08:07.906663   61989 logs.go:276] 0 containers: []
	W0924 01:08:07.906673   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:07.906682   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:07.906741   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:07.940919   61989 cri.go:89] found id: ""
	I0924 01:08:07.940945   61989 logs.go:276] 0 containers: []
	W0924 01:08:07.940953   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:07.940959   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:07.941018   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:07.975533   61989 cri.go:89] found id: ""
	I0924 01:08:07.975562   61989 logs.go:276] 0 containers: []
	W0924 01:08:07.975570   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:07.975576   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:07.975627   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:08.009137   61989 cri.go:89] found id: ""
	I0924 01:08:08.009163   61989 logs.go:276] 0 containers: []
	W0924 01:08:08.009173   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:08.009183   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:08.009196   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:08.065199   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:08.065252   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:08.080159   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:08.080188   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:08.154003   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:08.154025   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:08.154039   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:08.235522   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:08.235561   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:10.774666   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:10.787704   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:10.787775   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:10.822721   61989 cri.go:89] found id: ""
	I0924 01:08:10.822759   61989 logs.go:276] 0 containers: []
	W0924 01:08:10.822770   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:10.822781   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:10.822852   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:10.857113   61989 cri.go:89] found id: ""
	I0924 01:08:10.857138   61989 logs.go:276] 0 containers: []
	W0924 01:08:10.857146   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:10.857152   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:10.857201   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:10.890974   61989 cri.go:89] found id: ""
	I0924 01:08:10.891001   61989 logs.go:276] 0 containers: []
	W0924 01:08:10.891012   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:10.891020   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:10.891086   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:10.929771   61989 cri.go:89] found id: ""
	I0924 01:08:10.929793   61989 logs.go:276] 0 containers: []
	W0924 01:08:10.929800   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:10.929806   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:10.929851   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:10.961988   61989 cri.go:89] found id: ""
	I0924 01:08:10.962015   61989 logs.go:276] 0 containers: []
	W0924 01:08:10.962027   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:10.962035   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:10.962100   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:09.228600   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:11.729142   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:10.528626   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:13.027656   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:11.906981   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:13.907232   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:10.993591   61989 cri.go:89] found id: ""
	I0924 01:08:10.993622   61989 logs.go:276] 0 containers: []
	W0924 01:08:10.993633   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:10.993639   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:10.993691   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:11.032468   61989 cri.go:89] found id: ""
	I0924 01:08:11.032496   61989 logs.go:276] 0 containers: []
	W0924 01:08:11.032506   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:11.032514   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:11.032576   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:11.066900   61989 cri.go:89] found id: ""
	I0924 01:08:11.066924   61989 logs.go:276] 0 containers: []
	W0924 01:08:11.066931   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:11.066939   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:11.066950   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:11.136412   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:11.136443   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:11.136459   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:11.218326   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:11.218361   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:11.260695   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:11.260728   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:11.310102   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:11.310133   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:13.825540   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:13.838208   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:13.838283   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:13.874539   61989 cri.go:89] found id: ""
	I0924 01:08:13.874567   61989 logs.go:276] 0 containers: []
	W0924 01:08:13.874576   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:13.874581   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:13.874628   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:13.911818   61989 cri.go:89] found id: ""
	I0924 01:08:13.911839   61989 logs.go:276] 0 containers: []
	W0924 01:08:13.911846   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:13.911852   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:13.911897   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:13.944766   61989 cri.go:89] found id: ""
	I0924 01:08:13.944789   61989 logs.go:276] 0 containers: []
	W0924 01:08:13.944797   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:13.944802   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:13.944847   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:13.980712   61989 cri.go:89] found id: ""
	I0924 01:08:13.980742   61989 logs.go:276] 0 containers: []
	W0924 01:08:13.980752   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:13.980758   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:13.980817   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:14.016103   61989 cri.go:89] found id: ""
	I0924 01:08:14.016130   61989 logs.go:276] 0 containers: []
	W0924 01:08:14.016138   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:14.016143   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:14.016192   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:14.051884   61989 cri.go:89] found id: ""
	I0924 01:08:14.051929   61989 logs.go:276] 0 containers: []
	W0924 01:08:14.051943   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:14.051954   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:14.052046   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:14.088928   61989 cri.go:89] found id: ""
	I0924 01:08:14.088954   61989 logs.go:276] 0 containers: []
	W0924 01:08:14.088964   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:14.088970   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:14.089020   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:14.123057   61989 cri.go:89] found id: ""
	I0924 01:08:14.123083   61989 logs.go:276] 0 containers: []
	W0924 01:08:14.123091   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:14.123099   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:14.123112   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:14.174249   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:14.174287   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:14.188409   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:14.188442   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:14.258906   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:14.258932   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:14.258942   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:14.340891   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:14.340928   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
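	The cycle above is minikube's diagnostics pass when no control-plane containers are found: it probes each expected container by name, then gathers kubelet, dmesg, node, CRI-O, and container-status output. A condensed sketch of the same pass, using only commands that appear verbatim in the log (intended to be run inside the minikube guest; the loop form is illustrative):
	
	    # Probe each control-plane container by name (all listings come back empty here).
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      echo "== ${name} =="
	      sudo crictl ps -a --quiet --name="${name}"
	    done
	
	    # Gather the same logs minikube collects when nothing is found.
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig   # fails while the apiserver is down
	    sudo journalctl -u crio -n 400
	    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
	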
	I0924 01:08:14.229459   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:16.728316   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:15.028158   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:17.527615   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:15.907490   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:17.907845   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:19.901512   61323 pod_ready.go:82] duration metric: took 4m0.001092501s for pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace to be "Ready" ...
	E0924 01:08:19.901552   61323 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace to be "Ready" (will not retry!)
	I0924 01:08:19.901576   61323 pod_ready.go:39] duration metric: took 4m10.04955154s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:08:19.901606   61323 kubeadm.go:597] duration metric: took 4m18.184472182s to restartPrimaryControlPlane
	W0924 01:08:19.901701   61323 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0924 01:08:19.901736   61323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
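	Two of the interleaved clusters are still polling metrics-server readiness, while this one has hit the 4m0s limit and falls back to a full control-plane reset. For reference, a hedged manual spot-check of the same readiness condition, plus the reset command exactly as logged (the pod name is copied from the log above and will differ on a live cluster):
	
	    # Manual check of the Ready condition the test keeps polling (illustrative).
	    kubectl --namespace kube-system get pod metrics-server-6867b74b74-pc28v \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	
	    # Fallback taken in the log: wipe the control plane before re-initialising
	    # (command verbatim from the log, run on the guest).
	    sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" \
	      kubeadm reset --cri-socket /var/run/crio/crio.sock --force
	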
	I0924 01:08:16.877728   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:16.890548   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:16.890617   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:16.924414   61989 cri.go:89] found id: ""
	I0924 01:08:16.924439   61989 logs.go:276] 0 containers: []
	W0924 01:08:16.924451   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:16.924458   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:16.924510   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:16.960295   61989 cri.go:89] found id: ""
	I0924 01:08:16.960323   61989 logs.go:276] 0 containers: []
	W0924 01:08:16.960344   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:16.960352   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:16.960405   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:16.993171   61989 cri.go:89] found id: ""
	I0924 01:08:16.993204   61989 logs.go:276] 0 containers: []
	W0924 01:08:16.993216   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:16.993224   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:16.993287   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:17.028122   61989 cri.go:89] found id: ""
	I0924 01:08:17.028150   61989 logs.go:276] 0 containers: []
	W0924 01:08:17.028160   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:17.028169   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:17.028261   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:17.068401   61989 cri.go:89] found id: ""
	I0924 01:08:17.068440   61989 logs.go:276] 0 containers: []
	W0924 01:08:17.068451   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:17.068458   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:17.068530   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:17.104250   61989 cri.go:89] found id: ""
	I0924 01:08:17.104275   61989 logs.go:276] 0 containers: []
	W0924 01:08:17.104283   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:17.104299   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:17.104370   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:17.139178   61989 cri.go:89] found id: ""
	I0924 01:08:17.139201   61989 logs.go:276] 0 containers: []
	W0924 01:08:17.139209   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:17.139215   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:17.139288   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:17.172677   61989 cri.go:89] found id: ""
	I0924 01:08:17.172703   61989 logs.go:276] 0 containers: []
	W0924 01:08:17.172712   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:17.172727   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:17.172742   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:17.222039   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:17.222082   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:17.235342   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:17.235371   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:17.300313   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:17.300350   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:17.300366   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:17.382465   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:17.382517   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:19.924928   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:19.941406   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:19.941496   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:19.976196   61989 cri.go:89] found id: ""
	I0924 01:08:19.976224   61989 logs.go:276] 0 containers: []
	W0924 01:08:19.976238   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:19.976247   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:19.976314   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:20.019652   61989 cri.go:89] found id: ""
	I0924 01:08:20.019680   61989 logs.go:276] 0 containers: []
	W0924 01:08:20.019692   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:20.019699   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:20.019757   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:20.055098   61989 cri.go:89] found id: ""
	I0924 01:08:20.055123   61989 logs.go:276] 0 containers: []
	W0924 01:08:20.055130   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:20.055135   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:20.055183   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:20.091428   61989 cri.go:89] found id: ""
	I0924 01:08:20.091458   61989 logs.go:276] 0 containers: []
	W0924 01:08:20.091469   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:20.091476   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:20.091532   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:20.123608   61989 cri.go:89] found id: ""
	I0924 01:08:20.123642   61989 logs.go:276] 0 containers: []
	W0924 01:08:20.123653   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:20.123678   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:20.123745   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:20.165885   61989 cri.go:89] found id: ""
	I0924 01:08:20.165913   61989 logs.go:276] 0 containers: []
	W0924 01:08:20.165926   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:20.165934   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:20.165985   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:20.199300   61989 cri.go:89] found id: ""
	I0924 01:08:20.199329   61989 logs.go:276] 0 containers: []
	W0924 01:08:20.199341   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:20.199348   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:20.199415   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:20.237201   61989 cri.go:89] found id: ""
	I0924 01:08:20.237253   61989 logs.go:276] 0 containers: []
	W0924 01:08:20.237262   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:20.237271   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:20.237284   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:20.285008   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:20.285049   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:20.298974   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:20.299014   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:20.385765   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:20.385793   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:20.385807   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:20.460715   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:20.460752   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:19.227947   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:21.228448   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:23.229022   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:19.527785   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:21.528095   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:23.528420   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:23.000163   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:23.014755   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:23.014828   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:23.048877   61989 cri.go:89] found id: ""
	I0924 01:08:23.048909   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.048921   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:23.048979   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:23.049049   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:23.085614   61989 cri.go:89] found id: ""
	I0924 01:08:23.085643   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.085650   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:23.085658   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:23.085718   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:23.122027   61989 cri.go:89] found id: ""
	I0924 01:08:23.122060   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.122071   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:23.122078   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:23.122136   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:23.156838   61989 cri.go:89] found id: ""
	I0924 01:08:23.156868   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.156879   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:23.156887   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:23.156947   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:23.191528   61989 cri.go:89] found id: ""
	I0924 01:08:23.191569   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.191579   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:23.191586   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:23.191651   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:23.227627   61989 cri.go:89] found id: ""
	I0924 01:08:23.227651   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.227659   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:23.227665   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:23.227709   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:23.261937   61989 cri.go:89] found id: ""
	I0924 01:08:23.261968   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.261980   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:23.261988   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:23.262039   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:23.297947   61989 cri.go:89] found id: ""
	I0924 01:08:23.297973   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.297986   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:23.297997   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:23.298009   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:23.337783   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:23.337811   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:23.390767   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:23.390808   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:23.404787   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:23.404814   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:23.478768   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:23.478788   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:23.478801   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:25.728154   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:28.227795   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:25.529710   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:28.028153   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:26.060593   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:26.085071   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:26.085137   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:26.121785   61989 cri.go:89] found id: ""
	I0924 01:08:26.121814   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.121826   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:26.121834   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:26.121900   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:26.167942   61989 cri.go:89] found id: ""
	I0924 01:08:26.167971   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.167980   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:26.167991   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:26.168054   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:26.206461   61989 cri.go:89] found id: ""
	I0924 01:08:26.206496   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.206506   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:26.206513   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:26.206582   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:26.243094   61989 cri.go:89] found id: ""
	I0924 01:08:26.243125   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.243136   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:26.243144   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:26.243206   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:26.279303   61989 cri.go:89] found id: ""
	I0924 01:08:26.279331   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.279341   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:26.279348   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:26.279407   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:26.311840   61989 cri.go:89] found id: ""
	I0924 01:08:26.311869   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.311880   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:26.311888   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:26.311954   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:26.345994   61989 cri.go:89] found id: ""
	I0924 01:08:26.346019   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.346027   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:26.346033   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:26.346082   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:26.380570   61989 cri.go:89] found id: ""
	I0924 01:08:26.380601   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.380610   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:26.380619   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:26.380630   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:26.429958   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:26.429993   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:26.443278   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:26.443312   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:26.516353   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:26.516375   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:26.516390   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:26.603310   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:26.603345   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:29.142531   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:29.156548   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:29.156634   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:29.191351   61989 cri.go:89] found id: ""
	I0924 01:08:29.191378   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.191389   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:29.191396   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:29.191451   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:29.232112   61989 cri.go:89] found id: ""
	I0924 01:08:29.232141   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.232152   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:29.232159   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:29.232214   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:29.266082   61989 cri.go:89] found id: ""
	I0924 01:08:29.266104   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.266112   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:29.266118   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:29.266178   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:29.299777   61989 cri.go:89] found id: ""
	I0924 01:08:29.299802   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.299812   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:29.299817   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:29.299883   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:29.342709   61989 cri.go:89] found id: ""
	I0924 01:08:29.342740   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.342749   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:29.342756   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:29.342816   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:29.381255   61989 cri.go:89] found id: ""
	I0924 01:08:29.381303   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.381312   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:29.381318   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:29.381375   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:29.414998   61989 cri.go:89] found id: ""
	I0924 01:08:29.415028   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.415036   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:29.415043   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:29.415101   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:29.448553   61989 cri.go:89] found id: ""
	I0924 01:08:29.448580   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.448589   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:29.448598   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:29.448608   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:29.534936   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:29.535001   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:29.573554   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:29.573584   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:29.623590   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:29.623626   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:29.636141   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:29.636167   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:29.700591   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:30.228993   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:32.229458   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:30.528150   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:33.029011   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:32.201184   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:32.215034   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:32.215102   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:32.250990   61989 cri.go:89] found id: ""
	I0924 01:08:32.251016   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.251026   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:32.251033   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:32.251104   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:32.284448   61989 cri.go:89] found id: ""
	I0924 01:08:32.284483   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.284494   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:32.284504   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:32.284570   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:32.317979   61989 cri.go:89] found id: ""
	I0924 01:08:32.318004   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.318015   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:32.318022   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:32.318078   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:32.352057   61989 cri.go:89] found id: ""
	I0924 01:08:32.352082   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.352093   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:32.352101   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:32.352163   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:32.385459   61989 cri.go:89] found id: ""
	I0924 01:08:32.385482   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.385490   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:32.385496   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:32.385544   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:32.421189   61989 cri.go:89] found id: ""
	I0924 01:08:32.421217   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.421227   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:32.421235   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:32.421307   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:32.464375   61989 cri.go:89] found id: ""
	I0924 01:08:32.464399   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.464406   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:32.464412   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:32.464457   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:32.512716   61989 cri.go:89] found id: ""
	I0924 01:08:32.512742   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.512753   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:32.512763   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:32.512788   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:32.598271   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:32.598293   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:32.598305   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:32.674197   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:32.674233   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:32.715065   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:32.715092   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:32.767522   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:32.767565   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:35.281678   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:35.296302   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:35.296390   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:35.336341   61989 cri.go:89] found id: ""
	I0924 01:08:35.336370   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.336381   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:35.336397   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:35.336454   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:35.373090   61989 cri.go:89] found id: ""
	I0924 01:08:35.373118   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.373127   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:35.373135   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:35.373201   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:35.413628   61989 cri.go:89] found id: ""
	I0924 01:08:35.413660   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.413668   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:35.413674   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:35.413720   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:35.446564   61989 cri.go:89] found id: ""
	I0924 01:08:35.446592   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.446603   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:35.446610   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:35.446669   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:35.478389   61989 cri.go:89] found id: ""
	I0924 01:08:35.478424   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.478435   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:35.478444   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:35.478515   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:35.513992   61989 cri.go:89] found id: ""
	I0924 01:08:35.514015   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.514023   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:35.514029   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:35.514085   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:35.556442   61989 cri.go:89] found id: ""
	I0924 01:08:35.556471   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.556481   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:35.556489   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:35.556571   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:35.594205   61989 cri.go:89] found id: ""
	I0924 01:08:35.594228   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.594236   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:35.594244   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:35.594254   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:35.637601   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:35.637640   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:35.691674   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:35.691711   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:35.705223   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:35.705261   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:35.784000   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:35.784021   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:35.784036   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:34.729064   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:37.227314   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:35.528382   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:38.028508   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:38.370232   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:38.383287   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:38.383358   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:38.417528   61989 cri.go:89] found id: ""
	I0924 01:08:38.417556   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.417564   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:38.417571   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:38.417619   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:38.459788   61989 cri.go:89] found id: ""
	I0924 01:08:38.459814   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.459821   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:38.459828   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:38.459883   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:38.494017   61989 cri.go:89] found id: ""
	I0924 01:08:38.494050   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.494059   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:38.494065   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:38.494135   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:38.526894   61989 cri.go:89] found id: ""
	I0924 01:08:38.526924   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.526935   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:38.526942   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:38.527000   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:38.563831   61989 cri.go:89] found id: ""
	I0924 01:08:38.563859   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.563876   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:38.563884   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:38.563950   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:38.596066   61989 cri.go:89] found id: ""
	I0924 01:08:38.596095   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.596106   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:38.596114   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:38.596172   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:38.630123   61989 cri.go:89] found id: ""
	I0924 01:08:38.630147   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.630157   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:38.630165   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:38.630223   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:38.664714   61989 cri.go:89] found id: ""
	I0924 01:08:38.664743   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.664754   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:38.664765   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:38.664782   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:38.718770   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:38.718802   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:38.732878   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:38.732906   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:38.806441   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:38.806469   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:38.806485   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:38.884416   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:38.884456   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:39.228048   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:41.228574   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:40.527354   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:42.528592   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:41.423782   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:41.436827   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:41.436899   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:41.468283   61989 cri.go:89] found id: ""
	I0924 01:08:41.468316   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.468342   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:41.468353   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:41.468412   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:41.504348   61989 cri.go:89] found id: ""
	I0924 01:08:41.504380   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.504402   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:41.504410   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:41.504470   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:41.544785   61989 cri.go:89] found id: ""
	I0924 01:08:41.544809   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.544818   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:41.544825   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:41.544883   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:41.582924   61989 cri.go:89] found id: ""
	I0924 01:08:41.582954   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.582965   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:41.582973   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:41.583037   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:41.618220   61989 cri.go:89] found id: ""
	I0924 01:08:41.618243   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.618253   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:41.618260   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:41.618329   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:41.653369   61989 cri.go:89] found id: ""
	I0924 01:08:41.653392   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.653400   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:41.653416   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:41.653477   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:41.687036   61989 cri.go:89] found id: ""
	I0924 01:08:41.687058   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.687069   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:41.687077   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:41.687133   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:41.720701   61989 cri.go:89] found id: ""
	I0924 01:08:41.720732   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.720744   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:41.720756   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:41.720776   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:41.798436   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:41.798486   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:41.842639   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:41.842674   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:41.893053   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:41.893086   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:41.907757   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:41.907784   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:41.973466   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
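	Every "describe nodes" attempt in this section fails the same way: nothing is serving localhost:8443 because the kube-apiserver container never started, as the empty crictl listings show. An illustrative check of that failure mode (not part of the test run itself):
	
	    # Confirm no apiserver container exists and nothing answers on 8443.
	    sudo crictl ps -a --name=kube-apiserver
	    curl -sk https://localhost:8443/healthz || echo "apiserver not reachable"
	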
	I0924 01:08:44.474071   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:44.487057   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:44.487119   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:44.521772   61989 cri.go:89] found id: ""
	I0924 01:08:44.521813   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.521835   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:44.521843   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:44.521905   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:44.554928   61989 cri.go:89] found id: ""
	I0924 01:08:44.554956   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.554966   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:44.554977   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:44.555042   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:44.594246   61989 cri.go:89] found id: ""
	I0924 01:08:44.594279   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.594292   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:44.594298   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:44.594344   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:44.629779   61989 cri.go:89] found id: ""
	I0924 01:08:44.629807   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.629819   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:44.629827   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:44.629884   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:44.671671   61989 cri.go:89] found id: ""
	I0924 01:08:44.671694   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.671701   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:44.671707   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:44.671772   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:44.710875   61989 cri.go:89] found id: ""
	I0924 01:08:44.710910   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.710922   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:44.710931   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:44.711000   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:44.744345   61989 cri.go:89] found id: ""
	I0924 01:08:44.744381   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.744389   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:44.744395   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:44.744442   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:44.780771   61989 cri.go:89] found id: ""
	I0924 01:08:44.780797   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.780804   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:44.780812   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:44.780824   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:44.834902   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:44.834958   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:44.848503   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:44.848540   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:44.923117   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:44.923138   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:44.923150   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:45.003806   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:45.003840   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:46.184585   61323 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.282824063s)
	I0924 01:08:46.184659   61323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 01:08:46.201715   61323 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 01:08:46.215637   61323 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 01:08:46.228701   61323 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 01:08:46.228726   61323 kubeadm.go:157] found existing configuration files:
	
	I0924 01:08:46.228769   61323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 01:08:46.239005   61323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 01:08:46.239065   61323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 01:08:46.250336   61323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 01:08:46.259889   61323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 01:08:46.259961   61323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 01:08:46.271773   61323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 01:08:46.283106   61323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 01:08:46.283175   61323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 01:08:46.293325   61323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 01:08:46.306026   61323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 01:08:46.306111   61323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 01:08:46.318859   61323 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 01:08:46.373819   61323 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0924 01:08:46.373973   61323 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 01:08:46.487006   61323 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 01:08:46.487146   61323 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 01:08:46.487299   61323 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0924 01:08:46.495557   61323 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 01:08:46.497537   61323 out.go:235]   - Generating certificates and keys ...
	I0924 01:08:46.497645   61323 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 01:08:46.497732   61323 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 01:08:46.497853   61323 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 01:08:46.497946   61323 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 01:08:46.498041   61323 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 01:08:46.498116   61323 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 01:08:46.498197   61323 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 01:08:46.498280   61323 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 01:08:46.498389   61323 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 01:08:46.498490   61323 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 01:08:46.498547   61323 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 01:08:46.498623   61323 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 01:08:46.714556   61323 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 01:08:46.815030   61323 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0924 01:08:47.011082   61323 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 01:08:47.227052   61323 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 01:08:47.488776   61323 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 01:08:47.489403   61323 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 01:08:47.491864   61323 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 01:08:43.728646   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:46.234412   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:45.029064   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:45.029109   61699 pod_ready.go:82] duration metric: took 4m0.007887151s for pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace to be "Ready" ...
	E0924 01:08:45.029124   61699 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0924 01:08:45.029133   61699 pod_ready.go:39] duration metric: took 4m5.860472412s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:08:45.029153   61699 api_server.go:52] waiting for apiserver process to appear ...
	I0924 01:08:45.029189   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:45.029267   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:45.084875   61699 cri.go:89] found id: "306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7"
	I0924 01:08:45.084899   61699 cri.go:89] found id: ""
	I0924 01:08:45.084907   61699 logs.go:276] 1 containers: [306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7]
	I0924 01:08:45.084955   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:45.089534   61699 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:45.089601   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:45.133457   61699 cri.go:89] found id: "2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2"
	I0924 01:08:45.133479   61699 cri.go:89] found id: ""
	I0924 01:08:45.133486   61699 logs.go:276] 1 containers: [2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2]
	I0924 01:08:45.133544   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:45.137523   61699 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:45.137586   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:45.173989   61699 cri.go:89] found id: "ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f"
	I0924 01:08:45.174014   61699 cri.go:89] found id: ""
	I0924 01:08:45.174023   61699 logs.go:276] 1 containers: [ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f]
	I0924 01:08:45.174083   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:45.178084   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:45.178168   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:45.215763   61699 cri.go:89] found id: "58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f"
	I0924 01:08:45.215790   61699 cri.go:89] found id: ""
	I0924 01:08:45.215799   61699 logs.go:276] 1 containers: [58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f]
	I0924 01:08:45.215851   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:45.220052   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:45.220137   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:45.258186   61699 cri.go:89] found id: "f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc"
	I0924 01:08:45.258206   61699 cri.go:89] found id: ""
	I0924 01:08:45.258213   61699 logs.go:276] 1 containers: [f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc]
	I0924 01:08:45.258272   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:45.262402   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:45.262481   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:45.299355   61699 cri.go:89] found id: "55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba"
	I0924 01:08:45.299385   61699 cri.go:89] found id: ""
	I0924 01:08:45.299397   61699 logs.go:276] 1 containers: [55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba]
	I0924 01:08:45.299452   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:45.303404   61699 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:45.303505   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:45.341412   61699 cri.go:89] found id: ""
	I0924 01:08:45.341438   61699 logs.go:276] 0 containers: []
	W0924 01:08:45.341446   61699 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:45.341452   61699 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0924 01:08:45.341508   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0924 01:08:45.377419   61699 cri.go:89] found id: "7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47"
	I0924 01:08:45.377450   61699 cri.go:89] found id: "e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559"
	I0924 01:08:45.377457   61699 cri.go:89] found id: ""
	I0924 01:08:45.377471   61699 logs.go:276] 2 containers: [7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47 e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559]
	I0924 01:08:45.377539   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:45.381497   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:45.385182   61699 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:45.385201   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:45.455618   61699 logs.go:123] Gathering logs for coredns [ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f] ...
	I0924 01:08:45.455661   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f"
	I0924 01:08:45.495007   61699 logs.go:123] Gathering logs for kube-proxy [f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc] ...
	I0924 01:08:45.495037   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc"
	I0924 01:08:45.530196   61699 logs.go:123] Gathering logs for kube-controller-manager [55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba] ...
	I0924 01:08:45.530230   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba"
	I0924 01:08:45.581671   61699 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:45.581709   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:46.122674   61699 logs.go:123] Gathering logs for container status ...
	I0924 01:08:46.122717   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:46.169928   61699 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:46.169965   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:46.184617   61699 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:46.184645   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 01:08:46.330482   61699 logs.go:123] Gathering logs for kube-apiserver [306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7] ...
	I0924 01:08:46.330512   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7"
	I0924 01:08:46.382927   61699 logs.go:123] Gathering logs for etcd [2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2] ...
	I0924 01:08:46.382965   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2"
	I0924 01:08:46.441408   61699 logs.go:123] Gathering logs for kube-scheduler [58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f] ...
	I0924 01:08:46.441442   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f"
	I0924 01:08:46.484956   61699 logs.go:123] Gathering logs for storage-provisioner [7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47] ...
	I0924 01:08:46.484985   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47"
	I0924 01:08:46.522559   61699 logs.go:123] Gathering logs for storage-provisioner [e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559] ...
	I0924 01:08:46.522595   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559"
	I0924 01:08:49.064954   61699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:49.086621   61699 api_server.go:72] duration metric: took 4m15.650065328s to wait for apiserver process to appear ...
	I0924 01:08:49.086648   61699 api_server.go:88] waiting for apiserver healthz status ...
	I0924 01:08:49.086695   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:49.086760   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:47.541843   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:47.555428   61989 kubeadm.go:597] duration metric: took 4m2.297219084s to restartPrimaryControlPlane
	W0924 01:08:47.555528   61989 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0924 01:08:47.555560   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0924 01:08:49.123410   61989 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.567825503s)
	I0924 01:08:49.123501   61989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 01:08:49.142686   61989 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 01:08:49.154484   61989 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 01:08:49.166734   61989 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 01:08:49.166759   61989 kubeadm.go:157] found existing configuration files:
	
	I0924 01:08:49.166813   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 01:08:49.178374   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 01:08:49.178517   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 01:08:49.188871   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 01:08:49.200190   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 01:08:49.200258   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 01:08:49.212895   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 01:08:49.225205   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 01:08:49.225276   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 01:08:49.237828   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 01:08:49.249686   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 01:08:49.249751   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 01:08:49.262505   61989 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 01:08:49.338624   61989 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0924 01:08:49.338712   61989 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 01:08:49.509271   61989 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 01:08:49.509489   61989 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 01:08:49.509636   61989 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0924 01:08:49.724434   61989 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 01:08:47.494323   61323 out.go:235]   - Booting up control plane ...
	I0924 01:08:47.494449   61323 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 01:08:47.494527   61323 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 01:08:47.494904   61323 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 01:08:47.511889   61323 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 01:08:47.518272   61323 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 01:08:47.518343   61323 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 01:08:47.654121   61323 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0924 01:08:47.654273   61323 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0924 01:08:48.156008   61323 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.075879ms
	I0924 01:08:48.156089   61323 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0924 01:08:49.726458   61989 out.go:235]   - Generating certificates and keys ...
	I0924 01:08:49.726563   61989 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 01:08:49.726639   61989 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 01:08:49.726737   61989 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 01:08:49.726812   61989 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 01:08:49.727078   61989 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 01:08:49.727375   61989 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 01:08:49.728123   61989 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 01:08:49.729254   61989 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 01:08:49.730178   61989 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 01:08:49.732548   61989 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 01:08:49.732604   61989 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 01:08:49.732676   61989 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 01:08:49.938623   61989 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 01:08:50.774207   61989 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 01:08:51.022535   61989 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 01:08:51.148690   61989 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 01:08:51.168786   61989 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 01:08:51.170070   61989 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 01:08:51.170150   61989 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 01:08:51.342671   61989 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 01:08:48.729168   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:50.729197   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:52.729615   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:53.660805   61323 kubeadm.go:310] [api-check] The API server is healthy after 5.502700892s
	I0924 01:08:53.678006   61323 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0924 01:08:53.693676   61323 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0924 01:08:53.736910   61323 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0924 01:08:53.737186   61323 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-650507 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0924 01:08:53.750738   61323 kubeadm.go:310] [bootstrap-token] Using token: 62empn.zvptxpa69xtioeo1
	I0924 01:08:49.139835   61699 cri.go:89] found id: "306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7"
	I0924 01:08:49.139859   61699 cri.go:89] found id: ""
	I0924 01:08:49.139869   61699 logs.go:276] 1 containers: [306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7]
	I0924 01:08:49.139920   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:49.144770   61699 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:49.144896   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:49.193710   61699 cri.go:89] found id: "2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2"
	I0924 01:08:49.193733   61699 cri.go:89] found id: ""
	I0924 01:08:49.193743   61699 logs.go:276] 1 containers: [2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2]
	I0924 01:08:49.193798   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:49.198090   61699 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:49.198178   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:49.240236   61699 cri.go:89] found id: "ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f"
	I0924 01:08:49.240309   61699 cri.go:89] found id: ""
	I0924 01:08:49.240344   61699 logs.go:276] 1 containers: [ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f]
	I0924 01:08:49.240401   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:49.244573   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:49.244646   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:49.290954   61699 cri.go:89] found id: "58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f"
	I0924 01:08:49.290998   61699 cri.go:89] found id: ""
	I0924 01:08:49.291008   61699 logs.go:276] 1 containers: [58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f]
	I0924 01:08:49.291083   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:49.295602   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:49.295664   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:49.340871   61699 cri.go:89] found id: "f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc"
	I0924 01:08:49.340894   61699 cri.go:89] found id: ""
	I0924 01:08:49.340904   61699 logs.go:276] 1 containers: [f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc]
	I0924 01:08:49.340964   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:49.345362   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:49.345433   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:49.387382   61699 cri.go:89] found id: "55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba"
	I0924 01:08:49.387408   61699 cri.go:89] found id: ""
	I0924 01:08:49.387418   61699 logs.go:276] 1 containers: [55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba]
	I0924 01:08:49.387472   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:49.393388   61699 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:49.393468   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:49.436082   61699 cri.go:89] found id: ""
	I0924 01:08:49.436107   61699 logs.go:276] 0 containers: []
	W0924 01:08:49.436119   61699 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:49.436126   61699 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0924 01:08:49.436187   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0924 01:08:49.490172   61699 cri.go:89] found id: "7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47"
	I0924 01:08:49.490197   61699 cri.go:89] found id: "e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559"
	I0924 01:08:49.490203   61699 cri.go:89] found id: ""
	I0924 01:08:49.490213   61699 logs.go:276] 2 containers: [7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47 e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559]
	I0924 01:08:49.490273   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:49.495438   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:49.500506   61699 logs.go:123] Gathering logs for kube-apiserver [306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7] ...
	I0924 01:08:49.500537   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7"
	I0924 01:08:49.561240   61699 logs.go:123] Gathering logs for kube-proxy [f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc] ...
	I0924 01:08:49.561277   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc"
	I0924 01:08:49.611765   61699 logs.go:123] Gathering logs for kube-controller-manager [55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba] ...
	I0924 01:08:49.611807   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba"
	I0924 01:08:49.689366   61699 logs.go:123] Gathering logs for container status ...
	I0924 01:08:49.689413   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:49.747233   61699 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:49.747271   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:49.852723   61699 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:49.852771   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 01:08:50.006274   61699 logs.go:123] Gathering logs for etcd [2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2] ...
	I0924 01:08:50.006322   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2"
	I0924 01:08:50.064786   61699 logs.go:123] Gathering logs for coredns [ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f] ...
	I0924 01:08:50.064828   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f"
	I0924 01:08:50.104831   61699 logs.go:123] Gathering logs for kube-scheduler [58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f] ...
	I0924 01:08:50.104865   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f"
	I0924 01:08:50.144962   61699 logs.go:123] Gathering logs for storage-provisioner [7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47] ...
	I0924 01:08:50.144990   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47"
	I0924 01:08:50.183923   61699 logs.go:123] Gathering logs for storage-provisioner [e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559] ...
	I0924 01:08:50.183956   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559"
	I0924 01:08:50.222382   61699 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:50.222414   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:50.671849   61699 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:50.671890   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:53.187450   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:08:53.193094   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 200:
	ok
	I0924 01:08:53.194414   61699 api_server.go:141] control plane version: v1.31.1
	I0924 01:08:53.194439   61699 api_server.go:131] duration metric: took 4.107783011s to wait for apiserver health ...
	I0924 01:08:53.194449   61699 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 01:08:53.194479   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:53.194546   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:53.232560   61699 cri.go:89] found id: "306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7"
	I0924 01:08:53.232584   61699 cri.go:89] found id: ""
	I0924 01:08:53.232594   61699 logs.go:276] 1 containers: [306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7]
	I0924 01:08:53.232649   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:53.236611   61699 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:53.236671   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:53.278194   61699 cri.go:89] found id: "2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2"
	I0924 01:08:53.278223   61699 cri.go:89] found id: ""
	I0924 01:08:53.278233   61699 logs.go:276] 1 containers: [2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2]
	I0924 01:08:53.278291   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:53.283330   61699 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:53.283391   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:53.322371   61699 cri.go:89] found id: "ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f"
	I0924 01:08:53.322399   61699 cri.go:89] found id: ""
	I0924 01:08:53.322408   61699 logs.go:276] 1 containers: [ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f]
	I0924 01:08:53.322459   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:53.326794   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:53.326869   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:53.364035   61699 cri.go:89] found id: "58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f"
	I0924 01:08:53.364064   61699 cri.go:89] found id: ""
	I0924 01:08:53.364075   61699 logs.go:276] 1 containers: [58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f]
	I0924 01:08:53.364140   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:53.368065   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:53.368127   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:53.405651   61699 cri.go:89] found id: "f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc"
	I0924 01:08:53.405679   61699 cri.go:89] found id: ""
	I0924 01:08:53.405687   61699 logs.go:276] 1 containers: [f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc]
	I0924 01:08:53.405754   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:53.410451   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:53.410537   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:53.451079   61699 cri.go:89] found id: "55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba"
	I0924 01:08:53.451111   61699 cri.go:89] found id: ""
	I0924 01:08:53.451121   61699 logs.go:276] 1 containers: [55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba]
	I0924 01:08:53.451183   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:53.456272   61699 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:53.456367   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:53.497323   61699 cri.go:89] found id: ""
	I0924 01:08:53.497360   61699 logs.go:276] 0 containers: []
	W0924 01:08:53.497373   61699 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:53.497387   61699 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0924 01:08:53.497461   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0924 01:08:53.536017   61699 cri.go:89] found id: "7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47"
	I0924 01:08:53.536040   61699 cri.go:89] found id: "e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559"
	I0924 01:08:53.536046   61699 cri.go:89] found id: ""
	I0924 01:08:53.536055   61699 logs.go:276] 2 containers: [7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47 e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559]
	I0924 01:08:53.536114   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:53.542413   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:53.546559   61699 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:53.546592   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:53.560292   61699 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:53.560323   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 01:08:53.684820   61699 logs.go:123] Gathering logs for etcd [2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2] ...
	I0924 01:08:53.684849   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2"
	I0924 01:08:53.734483   61699 logs.go:123] Gathering logs for coredns [ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f] ...
	I0924 01:08:53.734519   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f"
	I0924 01:08:53.780676   61699 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:53.780705   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:53.855917   61699 logs.go:123] Gathering logs for kube-scheduler [58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f] ...
	I0924 01:08:53.855960   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f"
	I0924 01:08:53.906926   61699 logs.go:123] Gathering logs for kube-proxy [f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc] ...
	I0924 01:08:53.906962   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc"
	I0924 01:08:53.953992   61699 logs.go:123] Gathering logs for kube-controller-manager [55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba] ...
	I0924 01:08:53.954019   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba"
	I0924 01:08:54.031302   61699 logs.go:123] Gathering logs for storage-provisioner [7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47] ...
	I0924 01:08:54.031350   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47"
	I0924 01:08:54.073918   61699 logs.go:123] Gathering logs for storage-provisioner [e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559] ...
	I0924 01:08:54.073958   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559"
	I0924 01:08:54.108724   61699 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:54.108765   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:53.752460   61323 out.go:235]   - Configuring RBAC rules ...
	I0924 01:08:53.752626   61323 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0924 01:08:53.758889   61323 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0924 01:08:53.767101   61323 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0924 01:08:53.770943   61323 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0924 01:08:53.775335   61323 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0924 01:08:53.792963   61323 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0924 01:08:54.070193   61323 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0924 01:08:54.526226   61323 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0924 01:08:55.069668   61323 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0924 01:08:55.070678   61323 kubeadm.go:310] 
	I0924 01:08:55.070751   61323 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0924 01:08:55.070761   61323 kubeadm.go:310] 
	I0924 01:08:55.070844   61323 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0924 01:08:55.070860   61323 kubeadm.go:310] 
	I0924 01:08:55.070910   61323 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0924 01:08:55.070998   61323 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0924 01:08:55.071064   61323 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0924 01:08:55.071074   61323 kubeadm.go:310] 
	I0924 01:08:55.071138   61323 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0924 01:08:55.071159   61323 kubeadm.go:310] 
	I0924 01:08:55.071210   61323 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0924 01:08:55.071217   61323 kubeadm.go:310] 
	I0924 01:08:55.071298   61323 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0924 01:08:55.071428   61323 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0924 01:08:55.071525   61323 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0924 01:08:55.071535   61323 kubeadm.go:310] 
	I0924 01:08:55.071640   61323 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0924 01:08:55.071721   61323 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0924 01:08:55.071738   61323 kubeadm.go:310] 
	I0924 01:08:55.071815   61323 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 62empn.zvptxpa69xtioeo1 \
	I0924 01:08:55.071941   61323 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2d4dd9f37d73db6513005d0e16c5a35989d3b102eed8cac0bf67b360d3396aa8 \
	I0924 01:08:55.071971   61323 kubeadm.go:310] 	--control-plane 
	I0924 01:08:55.071984   61323 kubeadm.go:310] 
	I0924 01:08:55.072089   61323 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0924 01:08:55.072098   61323 kubeadm.go:310] 
	I0924 01:08:55.072198   61323 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 62empn.zvptxpa69xtioeo1 \
	I0924 01:08:55.072324   61323 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2d4dd9f37d73db6513005d0e16c5a35989d3b102eed8cac0bf67b360d3396aa8 
	I0924 01:08:55.073807   61323 kubeadm.go:310] W0924 01:08:46.350959    2551 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 01:08:55.074118   61323 kubeadm.go:310] W0924 01:08:46.352529    2551 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 01:08:55.074256   61323 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 01:08:55.074295   61323 cni.go:84] Creating CNI manager for ""
	I0924 01:08:55.074312   61323 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:08:55.076241   61323 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 01:08:55.077630   61323 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 01:08:55.088658   61323 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0924 01:08:55.106396   61323 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 01:08:55.106491   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:55.106579   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-650507 minikube.k8s.io/updated_at=2024_09_24T01_08_55_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c minikube.k8s.io/name=embed-certs-650507 minikube.k8s.io/primary=true
	I0924 01:08:55.138376   61323 ops.go:34] apiserver oom_adj: -16
	I0924 01:08:51.344458   61989 out.go:235]   - Booting up control plane ...
	I0924 01:08:51.344607   61989 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 01:08:51.353468   61989 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 01:08:51.356949   61989 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 01:08:51.358082   61989 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 01:08:51.364468   61989 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0924 01:08:54.501805   61699 logs.go:123] Gathering logs for container status ...
	I0924 01:08:54.501847   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:54.548768   61699 logs.go:123] Gathering logs for kube-apiserver [306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7] ...
	I0924 01:08:54.548800   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7"
	I0924 01:08:57.105661   61699 system_pods.go:59] 8 kube-system pods found
	I0924 01:08:57.105688   61699 system_pods.go:61] "coredns-7c65d6cfc9-xxdh2" [297fe292-94bf-468d-9e34-089c4a87429b] Running
	I0924 01:08:57.105693   61699 system_pods.go:61] "etcd-default-k8s-diff-port-465341" [3bd68a1c-e928-40f0-927f-3cde2198cace] Running
	I0924 01:08:57.105697   61699 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-465341" [0a195b76-82ba-4d99-b5a3-ba918ab0b83d] Running
	I0924 01:08:57.105703   61699 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-465341" [9d445611-60f3-4113-bc92-ea8df37ca2f5] Running
	I0924 01:08:57.105706   61699 system_pods.go:61] "kube-proxy-nf8mp" [cdef3aea-b1a8-438b-994f-c3212def9aea] Running
	I0924 01:08:57.105709   61699 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-465341" [4ff703b1-44cd-421a-891c-9f1e5d799026] Running
	I0924 01:08:57.105715   61699 system_pods.go:61] "metrics-server-6867b74b74-jtx6r" [d83599a7-f77d-4fbb-b76f-67d33c60b4a6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:08:57.105722   61699 system_pods.go:61] "storage-provisioner" [b09ad6ef-7517-4de2-a70c-83876efd804e] Running
	I0924 01:08:57.105729   61699 system_pods.go:74] duration metric: took 3.911274774s to wait for pod list to return data ...
	I0924 01:08:57.105736   61699 default_sa.go:34] waiting for default service account to be created ...
	I0924 01:08:57.108031   61699 default_sa.go:45] found service account: "default"
	I0924 01:08:57.108051   61699 default_sa.go:55] duration metric: took 2.307712ms for default service account to be created ...
	I0924 01:08:57.108059   61699 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 01:08:57.112551   61699 system_pods.go:86] 8 kube-system pods found
	I0924 01:08:57.112578   61699 system_pods.go:89] "coredns-7c65d6cfc9-xxdh2" [297fe292-94bf-468d-9e34-089c4a87429b] Running
	I0924 01:08:57.112584   61699 system_pods.go:89] "etcd-default-k8s-diff-port-465341" [3bd68a1c-e928-40f0-927f-3cde2198cace] Running
	I0924 01:08:57.112589   61699 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-465341" [0a195b76-82ba-4d99-b5a3-ba918ab0b83d] Running
	I0924 01:08:57.112593   61699 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-465341" [9d445611-60f3-4113-bc92-ea8df37ca2f5] Running
	I0924 01:08:57.112597   61699 system_pods.go:89] "kube-proxy-nf8mp" [cdef3aea-b1a8-438b-994f-c3212def9aea] Running
	I0924 01:08:57.112600   61699 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-465341" [4ff703b1-44cd-421a-891c-9f1e5d799026] Running
	I0924 01:08:57.112608   61699 system_pods.go:89] "metrics-server-6867b74b74-jtx6r" [d83599a7-f77d-4fbb-b76f-67d33c60b4a6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:08:57.112613   61699 system_pods.go:89] "storage-provisioner" [b09ad6ef-7517-4de2-a70c-83876efd804e] Running
	I0924 01:08:57.112619   61699 system_pods.go:126] duration metric: took 4.555185ms to wait for k8s-apps to be running ...
	I0924 01:08:57.112625   61699 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 01:08:57.112665   61699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 01:08:57.127805   61699 system_svc.go:56] duration metric: took 15.170368ms WaitForService to wait for kubelet
	I0924 01:08:57.127839   61699 kubeadm.go:582] duration metric: took 4m23.691287248s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 01:08:57.127865   61699 node_conditions.go:102] verifying NodePressure condition ...
	I0924 01:08:57.130964   61699 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 01:08:57.130994   61699 node_conditions.go:123] node cpu capacity is 2
	I0924 01:08:57.131008   61699 node_conditions.go:105] duration metric: took 3.13793ms to run NodePressure ...
	I0924 01:08:57.131021   61699 start.go:241] waiting for startup goroutines ...
	I0924 01:08:57.131029   61699 start.go:246] waiting for cluster config update ...
	I0924 01:08:57.131043   61699 start.go:255] writing updated cluster config ...
	I0924 01:08:57.131388   61699 ssh_runner.go:195] Run: rm -f paused
	I0924 01:08:57.182238   61699 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0924 01:08:57.185023   61699 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-465341" cluster and "default" namespace by default
	I0924 01:08:55.229370   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:57.729448   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:55.285390   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:55.785813   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:56.285570   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:56.785779   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:57.285599   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:57.786401   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:58.285583   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:58.786037   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:59.286404   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:59.447075   61323 kubeadm.go:1113] duration metric: took 4.340646509s to wait for elevateKubeSystemPrivileges
	I0924 01:08:59.447119   61323 kubeadm.go:394] duration metric: took 4m57.777127509s to StartCluster
	I0924 01:08:59.447141   61323 settings.go:142] acquiring lock: {Name:mk2498060bfcc1414033a486e76a223de88ed325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:08:59.447229   61323 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 01:08:59.449766   61323 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/kubeconfig: {Name:mk3b29105ccc2e600682c419fcfd45ce5d22b5fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:08:59.450091   61323 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 01:08:59.450191   61323 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0924 01:08:59.450308   61323 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-650507"
	I0924 01:08:59.450330   61323 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-650507"
	W0924 01:08:59.450343   61323 addons.go:243] addon storage-provisioner should already be in state true
	I0924 01:08:59.450346   61323 addons.go:69] Setting metrics-server=true in profile "embed-certs-650507"
	I0924 01:08:59.450349   61323 addons.go:69] Setting default-storageclass=true in profile "embed-certs-650507"
	I0924 01:08:59.450366   61323 addons.go:234] Setting addon metrics-server=true in "embed-certs-650507"
	W0924 01:08:59.450374   61323 addons.go:243] addon metrics-server should already be in state true
	I0924 01:08:59.450328   61323 config.go:182] Loaded profile config "embed-certs-650507": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 01:08:59.450381   61323 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-650507"
	I0924 01:08:59.450404   61323 host.go:66] Checking if "embed-certs-650507" exists ...
	I0924 01:08:59.450375   61323 host.go:66] Checking if "embed-certs-650507" exists ...
	I0924 01:08:59.450718   61323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:08:59.450770   61323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:08:59.450805   61323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:08:59.450808   61323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:08:59.450895   61323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:08:59.450842   61323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:08:59.451862   61323 out.go:177] * Verifying Kubernetes components...
	I0924 01:08:59.453214   61323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:08:59.471878   61323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34433
	I0924 01:08:59.472083   61323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46551
	I0924 01:08:59.472239   61323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38089
	I0924 01:08:59.472586   61323 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:08:59.472704   61323 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:08:59.472988   61323 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:08:59.473187   61323 main.go:141] libmachine: Using API Version  1
	I0924 01:08:59.473205   61323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:08:59.473226   61323 main.go:141] libmachine: Using API Version  1
	I0924 01:08:59.473242   61323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:08:59.473418   61323 main.go:141] libmachine: Using API Version  1
	I0924 01:08:59.473433   61323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:08:59.473784   61323 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:08:59.473784   61323 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:08:59.474003   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetState
	I0924 01:08:59.474116   61323 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:08:59.474383   61323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:08:59.474422   61323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:08:59.474591   61323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:08:59.474628   61323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:08:59.478726   61323 addons.go:234] Setting addon default-storageclass=true in "embed-certs-650507"
	W0924 01:08:59.478753   61323 addons.go:243] addon default-storageclass should already be in state true
	I0924 01:08:59.478784   61323 host.go:66] Checking if "embed-certs-650507" exists ...
	I0924 01:08:59.479137   61323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:08:59.479186   61323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:08:59.495021   61323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43089
	I0924 01:08:59.495527   61323 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:08:59.496068   61323 main.go:141] libmachine: Using API Version  1
	I0924 01:08:59.496090   61323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:08:59.496519   61323 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:08:59.496734   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetState
	I0924 01:08:59.498472   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:08:59.498528   61323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39135
	I0924 01:08:59.498971   61323 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:08:59.499485   61323 main.go:141] libmachine: Using API Version  1
	I0924 01:08:59.499498   61323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:08:59.499794   61323 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:08:59.499918   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetState
	I0924 01:08:59.500899   61323 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0924 01:08:59.501731   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:08:59.502154   61323 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0924 01:08:59.502172   61323 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0924 01:08:59.502186   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:08:59.503238   61323 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:08:59.504765   61323 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 01:08:59.504783   61323 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 01:08:59.504801   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:08:59.505483   61323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34577
	I0924 01:08:59.505882   61323 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:08:59.506386   61323 main.go:141] libmachine: Using API Version  1
	I0924 01:08:59.506408   61323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:08:59.506841   61323 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:08:59.507463   61323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:08:59.507505   61323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:08:59.511098   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:08:59.511611   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:08:59.511645   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:08:59.511944   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:08:59.512127   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:08:59.512296   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:08:59.512493   61323 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa Username:docker}
	I0924 01:08:59.514595   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:08:59.515144   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:08:59.515173   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:08:59.515481   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:08:59.515749   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:08:59.515963   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:08:59.516100   61323 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa Username:docker}
	I0924 01:08:59.529920   61323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36769
	I0924 01:08:59.530565   61323 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:08:59.531102   61323 main.go:141] libmachine: Using API Version  1
	I0924 01:08:59.531125   61323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:08:59.531629   61323 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:08:59.531918   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetState
	I0924 01:08:59.533741   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:08:59.533992   61323 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 01:08:59.534007   61323 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 01:08:59.534026   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:08:59.537032   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:08:59.537488   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:08:59.537515   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:08:59.537713   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:08:59.537919   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:08:59.538074   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:08:59.538198   61323 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa Username:docker}
	I0924 01:08:59.680683   61323 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 01:08:59.711414   61323 node_ready.go:35] waiting up to 6m0s for node "embed-certs-650507" to be "Ready" ...
	I0924 01:08:59.721234   61323 node_ready.go:49] node "embed-certs-650507" has status "Ready":"True"
	I0924 01:08:59.721264   61323 node_ready.go:38] duration metric: took 9.820004ms for node "embed-certs-650507" to be "Ready" ...
	I0924 01:08:59.721275   61323 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:08:59.736353   61323 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7295k" in "kube-system" namespace to be "Ready" ...
	I0924 01:08:59.831004   61323 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0924 01:08:59.831041   61323 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0924 01:08:59.871681   61323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 01:08:59.873844   61323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 01:08:59.902662   61323 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0924 01:08:59.902691   61323 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0924 01:08:59.956425   61323 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 01:08:59.956454   61323 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0924 01:08:59.997902   61323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 01:09:01.146340   61323 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.27245536s)
	I0924 01:09:01.146470   61323 main.go:141] libmachine: Making call to close driver server
	I0924 01:09:01.146505   61323 main.go:141] libmachine: (embed-certs-650507) Calling .Close
	I0924 01:09:01.146403   61323 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.274685832s)
	I0924 01:09:01.146582   61323 main.go:141] libmachine: Making call to close driver server
	I0924 01:09:01.146602   61323 main.go:141] libmachine: (embed-certs-650507) Calling .Close
	I0924 01:09:01.146819   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Closing plugin on server side
	I0924 01:09:01.146848   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Closing plugin on server side
	I0924 01:09:01.146868   61323 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:09:01.146875   61323 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:09:01.146882   61323 main.go:141] libmachine: Making call to close driver server
	I0924 01:09:01.146892   61323 main.go:141] libmachine: (embed-certs-650507) Calling .Close
	I0924 01:09:01.146967   61323 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:09:01.146990   61323 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:09:01.147007   61323 main.go:141] libmachine: Making call to close driver server
	I0924 01:09:01.147023   61323 main.go:141] libmachine: (embed-certs-650507) Calling .Close
	I0924 01:09:01.147084   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Closing plugin on server side
	I0924 01:09:01.147117   61323 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:09:01.147133   61323 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:09:01.147370   61323 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:09:01.147392   61323 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:09:01.147378   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Closing plugin on server side
	I0924 01:09:01.180574   61323 main.go:141] libmachine: Making call to close driver server
	I0924 01:09:01.180604   61323 main.go:141] libmachine: (embed-certs-650507) Calling .Close
	I0924 01:09:01.180929   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Closing plugin on server side
	I0924 01:09:01.180977   61323 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:09:01.180986   61323 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:09:01.207538   61323 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.209569759s)
	I0924 01:09:01.207600   61323 main.go:141] libmachine: Making call to close driver server
	I0924 01:09:01.207616   61323 main.go:141] libmachine: (embed-certs-650507) Calling .Close
	I0924 01:09:01.207959   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Closing plugin on server side
	I0924 01:09:01.208002   61323 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:09:01.208011   61323 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:09:01.208019   61323 main.go:141] libmachine: Making call to close driver server
	I0924 01:09:01.208028   61323 main.go:141] libmachine: (embed-certs-650507) Calling .Close
	I0924 01:09:01.208377   61323 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:09:01.208392   61323 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:09:01.208402   61323 addons.go:475] Verifying addon metrics-server=true in "embed-certs-650507"
	I0924 01:09:01.208411   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Closing plugin on server side
	I0924 01:09:01.210500   61323 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0924 01:08:59.731184   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:02.229737   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:01.211900   61323 addons.go:510] duration metric: took 1.761718139s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
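For the addon step above, the test scp's each manifest into /etc/kubernetes/addons/ on the embed-certs node and then applies them in a single kubectl invocation with KUBECONFIG pointed at the in-VM kubeconfig. The sketch below reproduces just that final apply as a standalone Go program; the kubectl binary path, kubeconfig path, and manifest list are copied from the log lines above and are specific to this run.

    // apply_addons.go - sketch: apply a set of addon manifests with an explicit KUBECONFIG,
    // mirroring the `kubectl apply -f ... -f ...` call logged above. Paths are assumptions
    // taken from this particular run.
    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    func main() {
    	kubectl := "/var/lib/minikube/binaries/v1.31.1/kubectl" // path used by this run
    	manifests := []string{
    		"/etc/kubernetes/addons/metrics-apiservice.yaml",
    		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
    		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
    		"/etc/kubernetes/addons/metrics-server-service.yaml",
    	}

    	args := []string{"apply"}
    	for _, m := range manifests {
    		args = append(args, "-f", m)
    	}

    	cmd := exec.Command(kubectl, args...)
    	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	if err := cmd.Run(); err != nil {
    		log.Fatalf("kubectl apply failed: %v", err)
    	}
    }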
	I0924 01:09:01.751463   61323 pod_ready.go:103] pod "coredns-7c65d6cfc9-7295k" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:04.242260   61323 pod_ready.go:103] pod "coredns-7c65d6cfc9-7295k" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:04.728708   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:06.728878   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:06.243002   61323 pod_ready.go:93] pod "coredns-7c65d6cfc9-7295k" in "kube-system" namespace has status "Ready":"True"
	I0924 01:09:06.243030   61323 pod_ready.go:82] duration metric: took 6.506649267s for pod "coredns-7c65d6cfc9-7295k" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:06.243039   61323 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-r6tcj" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:08.249949   61323 pod_ready.go:103] pod "coredns-7c65d6cfc9-r6tcj" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:09.750009   61323 pod_ready.go:93] pod "coredns-7c65d6cfc9-r6tcj" in "kube-system" namespace has status "Ready":"True"
	I0924 01:09:09.750037   61323 pod_ready.go:82] duration metric: took 3.506990291s for pod "coredns-7c65d6cfc9-r6tcj" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:09.750049   61323 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:09.756600   61323 pod_ready.go:93] pod "etcd-embed-certs-650507" in "kube-system" namespace has status "Ready":"True"
	I0924 01:09:09.756626   61323 pod_ready.go:82] duration metric: took 6.570047ms for pod "etcd-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:09.756635   61323 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:09.762209   61323 pod_ready.go:93] pod "kube-apiserver-embed-certs-650507" in "kube-system" namespace has status "Ready":"True"
	I0924 01:09:09.762235   61323 pod_ready.go:82] duration metric: took 5.593257ms for pod "kube-apiserver-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:09.762248   61323 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:09.772052   61323 pod_ready.go:93] pod "kube-controller-manager-embed-certs-650507" in "kube-system" namespace has status "Ready":"True"
	I0924 01:09:09.772075   61323 pod_ready.go:82] duration metric: took 9.818627ms for pod "kube-controller-manager-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:09.772088   61323 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mwtkg" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:09.777733   61323 pod_ready.go:93] pod "kube-proxy-mwtkg" in "kube-system" namespace has status "Ready":"True"
	I0924 01:09:09.777765   61323 pod_ready.go:82] duration metric: took 5.669609ms for pod "kube-proxy-mwtkg" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:09.777778   61323 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:10.146804   61323 pod_ready.go:93] pod "kube-scheduler-embed-certs-650507" in "kube-system" namespace has status "Ready":"True"
	I0924 01:09:10.146833   61323 pod_ready.go:82] duration metric: took 369.046066ms for pod "kube-scheduler-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:10.146844   61323 pod_ready.go:39] duration metric: took 10.425557831s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:09:10.146861   61323 api_server.go:52] waiting for apiserver process to appear ...
	I0924 01:09:10.146918   61323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:09:10.162335   61323 api_server.go:72] duration metric: took 10.712204486s to wait for apiserver process to appear ...
	I0924 01:09:10.162360   61323 api_server.go:88] waiting for apiserver healthz status ...
	I0924 01:09:10.162381   61323 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0924 01:09:10.166693   61323 api_server.go:279] https://192.168.39.104:8443/healthz returned 200:
	ok
	I0924 01:09:10.167700   61323 api_server.go:141] control plane version: v1.31.1
	I0924 01:09:10.167723   61323 api_server.go:131] duration metric: took 5.355716ms to wait for apiserver health ...
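The healthz check logged just above is a plain HTTPS GET against 192.168.39.104:8443/healthz that expects a 200 response with body "ok". A minimal sketch of the same probe, using the endpoint from this run; it skips certificate verification for brevity, whereas minikube verifies against the cluster CA it generated.

    // apiserver_healthz.go - sketch: probe the apiserver /healthz endpoint.
    // Address is from this run; InsecureSkipVerify is a simplification for the demo.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
    		},
    	}
    	resp, err := client.Get("https://192.168.39.104:8443/healthz")
    	if err != nil {
    		fmt.Println("healthz request failed:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // expect 200: ok
    }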
	I0924 01:09:10.167734   61323 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 01:09:10.351584   61323 system_pods.go:59] 9 kube-system pods found
	I0924 01:09:10.351621   61323 system_pods.go:61] "coredns-7c65d6cfc9-7295k" [3261d435-8cb5-4712-8459-26ba766e88e0] Running
	I0924 01:09:10.351629   61323 system_pods.go:61] "coredns-7c65d6cfc9-r6tcj" [df80e9b5-4b43-4b8f-992e-8813ceca39fe] Running
	I0924 01:09:10.351634   61323 system_pods.go:61] "etcd-embed-certs-650507" [1d21c395-ebec-4895-a1b6-11e35c799698] Running
	I0924 01:09:10.351640   61323 system_pods.go:61] "kube-apiserver-embed-certs-650507" [f7f13b75-3ed1-4e04-857f-27e71258ffd7] Running
	I0924 01:09:10.351645   61323 system_pods.go:61] "kube-controller-manager-embed-certs-650507" [4e68c892-06b6-49f1-adab-25c569f95a9a] Running
	I0924 01:09:10.351650   61323 system_pods.go:61] "kube-proxy-mwtkg" [6a893121-8161-4fbc-bb59-1e08483e82b8] Running
	I0924 01:09:10.351655   61323 system_pods.go:61] "kube-scheduler-embed-certs-650507" [bacd126d-7f4f-460b-85c5-17721247d5a5] Running
	I0924 01:09:10.351669   61323 system_pods.go:61] "metrics-server-6867b74b74-lbm9h" [fa504c09-2e16-4a5f-b4b3-a47f0733333d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:09:10.351678   61323 system_pods.go:61] "storage-provisioner" [364a4d4a-7316-48d0-a3e1-1dedff564dfb] Running
	I0924 01:09:10.351692   61323 system_pods.go:74] duration metric: took 183.950994ms to wait for pod list to return data ...
	I0924 01:09:10.351704   61323 default_sa.go:34] waiting for default service account to be created ...
	I0924 01:09:10.547564   61323 default_sa.go:45] found service account: "default"
	I0924 01:09:10.547595   61323 default_sa.go:55] duration metric: took 195.882659ms for default service account to be created ...
	I0924 01:09:10.547605   61323 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 01:09:10.750290   61323 system_pods.go:86] 9 kube-system pods found
	I0924 01:09:10.750327   61323 system_pods.go:89] "coredns-7c65d6cfc9-7295k" [3261d435-8cb5-4712-8459-26ba766e88e0] Running
	I0924 01:09:10.750336   61323 system_pods.go:89] "coredns-7c65d6cfc9-r6tcj" [df80e9b5-4b43-4b8f-992e-8813ceca39fe] Running
	I0924 01:09:10.750344   61323 system_pods.go:89] "etcd-embed-certs-650507" [1d21c395-ebec-4895-a1b6-11e35c799698] Running
	I0924 01:09:10.750352   61323 system_pods.go:89] "kube-apiserver-embed-certs-650507" [f7f13b75-3ed1-4e04-857f-27e71258ffd7] Running
	I0924 01:09:10.750359   61323 system_pods.go:89] "kube-controller-manager-embed-certs-650507" [4e68c892-06b6-49f1-adab-25c569f95a9a] Running
	I0924 01:09:10.750366   61323 system_pods.go:89] "kube-proxy-mwtkg" [6a893121-8161-4fbc-bb59-1e08483e82b8] Running
	I0924 01:09:10.750372   61323 system_pods.go:89] "kube-scheduler-embed-certs-650507" [bacd126d-7f4f-460b-85c5-17721247d5a5] Running
	I0924 01:09:10.750382   61323 system_pods.go:89] "metrics-server-6867b74b74-lbm9h" [fa504c09-2e16-4a5f-b4b3-a47f0733333d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:09:10.750391   61323 system_pods.go:89] "storage-provisioner" [364a4d4a-7316-48d0-a3e1-1dedff564dfb] Running
	I0924 01:09:10.750407   61323 system_pods.go:126] duration metric: took 202.795975ms to wait for k8s-apps to be running ...
	I0924 01:09:10.750416   61323 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 01:09:10.750476   61323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 01:09:10.765539   61323 system_svc.go:56] duration metric: took 15.112281ms WaitForService to wait for kubelet
	I0924 01:09:10.765569   61323 kubeadm.go:582] duration metric: took 11.31544538s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 01:09:10.765588   61323 node_conditions.go:102] verifying NodePressure condition ...
	I0924 01:09:10.947628   61323 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 01:09:10.947654   61323 node_conditions.go:123] node cpu capacity is 2
	I0924 01:09:10.947664   61323 node_conditions.go:105] duration metric: took 182.072269ms to run NodePressure ...
	I0924 01:09:10.947674   61323 start.go:241] waiting for startup goroutines ...
	I0924 01:09:10.947681   61323 start.go:246] waiting for cluster config update ...
	I0924 01:09:10.947691   61323 start.go:255] writing updated cluster config ...
	I0924 01:09:10.947955   61323 ssh_runner.go:195] Run: rm -f paused
	I0924 01:09:10.999208   61323 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0924 01:09:11.001392   61323 out.go:177] * Done! kubectl is now configured to use "embed-certs-650507" cluster and "default" namespace by default
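The pod_ready waits above repeatedly read each control-plane pod's Ready condition until it reports "True" or the 6m budget runs out. A rough equivalent from outside the test harness, using kubectl's JSONPath output; the context, namespace, and pod name are taken from this run and would otherwise be assumptions.

    // pod_ready.go - sketch: poll a pod's Ready condition via kubectl, similar in spirit
    // to the pod_ready.go waits logged above. Context/namespace/pod are from this run.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func podReady(context, namespace, pod string) (bool, error) {
    	out, err := exec.Command("kubectl", "--context", context, "-n", namespace,
    		"get", "pod", pod,
    		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
    	if err != nil {
    		return false, err
    	}
    	return strings.TrimSpace(string(out)) == "True", nil
    }

    func main() {
    	deadline := time.Now().Add(6 * time.Minute)
    	for time.Now().Before(deadline) {
    		ready, err := podReady("embed-certs-650507", "kube-system", "etcd-embed-certs-650507")
    		if err == nil && ready {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for pod to be Ready")
    }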
	I0924 01:09:08.729391   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:11.229036   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:13.727852   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:16.229362   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:18.727643   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:20.729183   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:22.731323   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:25.228514   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:27.728747   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:29.729150   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:32.228197   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:31.365725   61989 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0924 01:09:31.366444   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:09:31.366704   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:09:34.729441   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:37.228766   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:36.367209   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:09:36.367654   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:09:39.728035   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:41.729148   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:43.729240   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:46.228006   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:48.228134   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:46.367945   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:09:46.368128   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
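The [kubelet-check] lines interleaved above come from kubeadm probing the kubelet's local healthz endpoint on 127.0.0.1:10248 and getting "connection refused" because the kubelet on that node never came up. The same probe, done by hand, is just an HTTP GET; a minimal sketch:

    // kubelet_healthz.go - sketch of the probe behind the [kubelet-check] lines:
    // GET http://localhost:10248/healthz and report the result.
    package main

    import (
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{Timeout: 2 * time.Second}
    	resp, err := client.Get("http://localhost:10248/healthz")
    	if err != nil {
    		// "connection refused" here matches the failures in the log: the kubelet
    		// is not listening on its healthz port.
    		fmt.Println("kubelet healthz unreachable:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("kubelet healthz: %d %s\n", resp.StatusCode, body)
    }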
	I0924 01:09:50.228455   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:52.228646   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:54.229158   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:56.727712   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:58.728522   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:10:00.728964   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:10:02.729909   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:10:05.227781   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:10:07.228729   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
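The long run of "Ready":"False" lines above is the no-preload test polling its metrics-server pod, which never leaves Pending here; in these StartStop jobs the addon appears to be pointed at a fake.domain image (see the "Using image fake.domain/registry.k8s.io/echoserver:1.4" line earlier in this log), so it is not expected to become Ready. When reproducing such a hang by hand, the usual first step is to read the pod's events; a small sketch shelling out to kubectl describe, with the namespace from the log, the k8s-app=metrics-server label assumed from the standard metrics-server manifests, and the context name taken from the node name printed later in this run.

    // pod_events.go - sketch: describe pods matching a label to see why one stays Pending
    // (image pull failures show up in the Events section). Label and context are assumptions.
    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("kubectl", "--context", "no-preload-674057",
    		"-n", "kube-system", "describe", "pod",
    		"-l", "k8s-app=metrics-server")
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	if err := cmd.Run(); err != nil {
    		log.Fatalf("kubectl describe failed: %v", err)
    	}
    }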
	I0924 01:10:06.368912   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:10:06.369182   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:10:09.728977   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:10:10.222284   61070 pod_ready.go:82] duration metric: took 4m0.000274516s for pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace to be "Ready" ...
	E0924 01:10:10.222354   61070 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace to be "Ready" (will not retry!)
	I0924 01:10:10.222381   61070 pod_ready.go:39] duration metric: took 4m12.043944079s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:10:10.222412   61070 kubeadm.go:597] duration metric: took 4m56.454037737s to restartPrimaryControlPlane
	W0924 01:10:10.222488   61070 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0924 01:10:10.222536   61070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0924 01:10:36.533302   61070 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.310734731s)
	I0924 01:10:36.533377   61070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 01:10:36.556961   61070 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 01:10:36.568298   61070 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 01:10:36.584098   61070 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 01:10:36.584121   61070 kubeadm.go:157] found existing configuration files:
	
	I0924 01:10:36.584178   61070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 01:10:36.594153   61070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 01:10:36.594218   61070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 01:10:36.612646   61070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 01:10:36.626433   61070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 01:10:36.626506   61070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 01:10:36.636161   61070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 01:10:36.654017   61070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 01:10:36.654075   61070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 01:10:36.663760   61070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 01:10:36.673737   61070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 01:10:36.673799   61070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
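The block above is minikube's stale-config cleanup: for each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf it greps for the expected https://control-plane.minikube.internal:8443 endpoint and removes the file when the grep fails (here the files simply no longer exist after the kubeadm reset). A compact sketch of the same loop, run locally rather than over SSH with sudo as the test does:

    // stale_kubeconfigs.go - sketch: remove kubeconfig files that do not reference the
    // expected control-plane endpoint, mirroring the grep/rm sequence logged above.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8443"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		// grep -q exits non-zero when the pattern is absent or the file is missing.
    		if err := exec.Command("grep", "-q", endpoint, f).Run(); err != nil {
    			fmt.Printf("%s does not reference %s, removing\n", f, endpoint)
    			_ = os.Remove(f) // ignore "no such file", as `rm -f` does
    		}
    	}
    }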
	I0924 01:10:36.684005   61070 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 01:10:36.731568   61070 kubeadm.go:310] W0924 01:10:36.713557    3094 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 01:10:36.733592   61070 kubeadm.go:310] W0924 01:10:36.715675    3094 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 01:10:36.850767   61070 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 01:10:45.349295   61070 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0924 01:10:45.349386   61070 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 01:10:45.349486   61070 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 01:10:45.349600   61070 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 01:10:45.349733   61070 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0924 01:10:45.349836   61070 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 01:10:45.351746   61070 out.go:235]   - Generating certificates and keys ...
	I0924 01:10:45.351843   61070 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 01:10:45.351939   61070 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 01:10:45.352055   61070 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 01:10:45.352160   61070 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 01:10:45.352228   61070 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 01:10:45.352297   61070 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 01:10:45.352392   61070 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 01:10:45.352477   61070 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 01:10:45.352551   61070 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 01:10:45.352664   61070 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 01:10:45.352734   61070 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 01:10:45.352904   61070 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 01:10:45.352956   61070 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 01:10:45.353038   61070 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0924 01:10:45.353127   61070 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 01:10:45.353209   61070 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 01:10:45.353300   61070 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 01:10:45.353372   61070 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 01:10:45.353446   61070 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 01:10:45.354948   61070 out.go:235]   - Booting up control plane ...
	I0924 01:10:45.355023   61070 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 01:10:45.355090   61070 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 01:10:45.355144   61070 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 01:10:45.355226   61070 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 01:10:45.355310   61070 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 01:10:45.355356   61070 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 01:10:45.355476   61070 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0924 01:10:45.355585   61070 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0924 01:10:45.355658   61070 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001537437s
	I0924 01:10:45.355728   61070 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0924 01:10:45.355807   61070 kubeadm.go:310] [api-check] The API server is healthy after 5.002387582s
	I0924 01:10:45.355955   61070 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0924 01:10:45.356129   61070 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0924 01:10:45.356230   61070 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0924 01:10:45.356516   61070 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-674057 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0924 01:10:45.356571   61070 kubeadm.go:310] [bootstrap-token] Using token: g2v97n.iz49hjb4wh5cfkiq
	I0924 01:10:45.358203   61070 out.go:235]   - Configuring RBAC rules ...
	I0924 01:10:45.358333   61070 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0924 01:10:45.358439   61070 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0924 01:10:45.358562   61070 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0924 01:10:45.358667   61070 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0924 01:10:45.358773   61070 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0924 01:10:45.358851   61070 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0924 01:10:45.358997   61070 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0924 01:10:45.359059   61070 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0924 01:10:45.359101   61070 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0924 01:10:45.359111   61070 kubeadm.go:310] 
	I0924 01:10:45.359164   61070 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0924 01:10:45.359171   61070 kubeadm.go:310] 
	I0924 01:10:45.359263   61070 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0924 01:10:45.359280   61070 kubeadm.go:310] 
	I0924 01:10:45.359309   61070 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0924 01:10:45.359387   61070 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0924 01:10:45.359458   61070 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0924 01:10:45.359471   61070 kubeadm.go:310] 
	I0924 01:10:45.359559   61070 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0924 01:10:45.359568   61070 kubeadm.go:310] 
	I0924 01:10:45.359613   61070 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0924 01:10:45.359619   61070 kubeadm.go:310] 
	I0924 01:10:45.359665   61070 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0924 01:10:45.359728   61070 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0924 01:10:45.359800   61070 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0924 01:10:45.359813   61070 kubeadm.go:310] 
	I0924 01:10:45.359879   61070 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0924 01:10:45.359978   61070 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0924 01:10:45.359996   61070 kubeadm.go:310] 
	I0924 01:10:45.360101   61070 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token g2v97n.iz49hjb4wh5cfkiq \
	I0924 01:10:45.360218   61070 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2d4dd9f37d73db6513005d0e16c5a35989d3b102eed8cac0bf67b360d3396aa8 \
	I0924 01:10:45.360251   61070 kubeadm.go:310] 	--control-plane 
	I0924 01:10:45.360258   61070 kubeadm.go:310] 
	I0924 01:10:45.360453   61070 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0924 01:10:45.360481   61070 kubeadm.go:310] 
	I0924 01:10:45.360595   61070 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token g2v97n.iz49hjb4wh5cfkiq \
	I0924 01:10:45.360693   61070 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2d4dd9f37d73db6513005d0e16c5a35989d3b102eed8cac0bf67b360d3396aa8 
	I0924 01:10:45.360706   61070 cni.go:84] Creating CNI manager for ""
	I0924 01:10:45.360713   61070 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:10:45.362153   61070 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
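With kubeadm init finished for no-preload-674057, the instructions printed above are the standard post-init steps: copy admin.conf into ~/.kube/config, deploy a pod network (minikube selects the bridge CNI here), and optionally join further nodes with the printed token. Bootstrap tokens are short-lived, so when joining later one normally regenerates the join command; a small sketch of that plus a node listing to confirm the control plane registered. These are standard kubeadm/kubectl calls, not part of the test itself, and assume admin credentials are available on the machine where they run.

    // post_init_check.go - sketch: print a fresh join command and list nodes after a
    // successful `kubeadm init`, illustrating the instructions kubeadm prints above.
    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    func run(name string, args ...string) {
    	cmd := exec.Command(name, args...)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	if err := cmd.Run(); err != nil {
    		log.Fatalf("%s %v failed: %v", name, args, err)
    	}
    }

    func main() {
    	// Bootstrap tokens expire; this prints a complete, current join command.
    	run("kubeadm", "token", "create", "--print-join-command")
    	// Confirm the control-plane node registered and is (or becomes) Ready.
    	run("kubectl", "get", "nodes", "-o", "wide")
    }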
	I0924 01:10:46.371109   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:10:46.371309   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:10:46.371318   61989 kubeadm.go:310] 
	I0924 01:10:46.371352   61989 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0924 01:10:46.371455   61989 kubeadm.go:310] 		timed out waiting for the condition
	I0924 01:10:46.371490   61989 kubeadm.go:310] 
	I0924 01:10:46.371546   61989 kubeadm.go:310] 	This error is likely caused by:
	I0924 01:10:46.371592   61989 kubeadm.go:310] 		- The kubelet is not running
	I0924 01:10:46.371734   61989 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0924 01:10:46.371751   61989 kubeadm.go:310] 
	I0924 01:10:46.371888   61989 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0924 01:10:46.371936   61989 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0924 01:10:46.371978   61989 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0924 01:10:46.371988   61989 kubeadm.go:310] 
	I0924 01:10:46.372124   61989 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0924 01:10:46.372253   61989 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0924 01:10:46.372262   61989 kubeadm.go:310] 
	I0924 01:10:46.372442   61989 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0924 01:10:46.372578   61989 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0924 01:10:46.372680   61989 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0924 01:10:46.372756   61989 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0924 01:10:46.372765   61989 kubeadm.go:310] 
	I0924 01:10:46.373578   61989 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 01:10:46.373675   61989 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0924 01:10:46.373790   61989 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
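For the old-k8s-version cluster (v1.20.0) the same wait fails and kubeadm prints its standard troubleshooting hints: check the kubelet unit, read its journal, and list control-plane containers under the runtime. Wrapped into one script-like sketch for convenience; the crictl socket path matches the CRI-O endpoint used throughout this log, everything else is just the generic advice from the output above.

    // kubelet_triage.go - sketch: run the troubleshooting commands kubeadm suggests above
    // when the kubelet never becomes healthy. Socket path matches this job's CRI-O setup.
    package main

    import (
    	"os"
    	"os/exec"
    )

    func run(name string, args ...string) {
    	cmd := exec.Command(name, args...)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	_ = cmd.Run() // keep going even if one probe fails; this is diagnostic only
    }

    func main() {
    	// Is the kubelet unit running at all, and what did it last log?
    	run("systemctl", "status", "kubelet", "--no-pager")
    	run("journalctl", "-xeu", "kubelet", "--no-pager")
    	// Did any control-plane container start and then crash under CRI-O?
    	run("crictl", "--runtime-endpoint", "/var/run/crio/crio.sock", "ps", "-a")
    }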
	W0924 01:10:46.373938   61989 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0924 01:10:46.373987   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0924 01:10:46.834432   61989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 01:10:46.851214   61989 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 01:10:46.862648   61989 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 01:10:46.862675   61989 kubeadm.go:157] found existing configuration files:
	
	I0924 01:10:46.862733   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 01:10:46.873005   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 01:10:46.873073   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 01:10:46.884007   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 01:10:46.893944   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 01:10:46.894016   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 01:10:46.905036   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 01:10:46.914953   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 01:10:46.915024   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 01:10:46.924881   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 01:10:46.935132   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 01:10:46.935192   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 01:10:46.945837   61989 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 01:10:47.018713   61989 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0924 01:10:47.018861   61989 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 01:10:47.159920   61989 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 01:10:47.160042   61989 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 01:10:47.160168   61989 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0924 01:10:47.349360   61989 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 01:10:47.351645   61989 out.go:235]   - Generating certificates and keys ...
	I0924 01:10:47.351763   61989 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 01:10:47.351918   61989 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 01:10:47.352040   61989 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 01:10:47.352118   61989 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 01:10:47.352205   61989 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 01:10:47.352298   61989 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 01:10:47.352401   61989 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 01:10:47.352481   61989 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 01:10:47.352574   61989 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 01:10:47.352662   61989 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 01:10:47.352705   61989 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 01:10:47.352767   61989 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 01:10:47.467301   61989 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 01:10:47.622085   61989 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 01:10:47.726807   61989 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 01:10:47.951249   61989 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 01:10:47.973392   61989 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 01:10:47.974396   61989 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 01:10:47.974440   61989 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 01:10:48.127629   61989 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 01:10:45.363348   61070 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 01:10:45.374505   61070 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0924 01:10:45.391838   61070 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 01:10:45.391947   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:45.391999   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-674057 minikube.k8s.io/updated_at=2024_09_24T01_10_45_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c minikube.k8s.io/name=no-preload-674057 minikube.k8s.io/primary=true
	I0924 01:10:45.583482   61070 ops.go:34] apiserver oom_adj: -16
	I0924 01:10:45.583498   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:46.083831   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:46.583990   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:47.084184   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:47.583925   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:48.083775   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:48.583654   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:49.084305   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:49.584636   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:50.084620   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:50.226320   61070 kubeadm.go:1113] duration metric: took 4.834429832s to wait for elevateKubeSystemPrivileges
	I0924 01:10:50.226363   61070 kubeadm.go:394] duration metric: took 5m36.514145334s to StartCluster
	I0924 01:10:50.226386   61070 settings.go:142] acquiring lock: {Name:mk2498060bfcc1414033a486e76a223de88ed325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:10:50.226480   61070 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 01:10:50.229196   61070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/kubeconfig: {Name:mk3b29105ccc2e600682c419fcfd45ce5d22b5fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:10:50.229530   61070 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.161 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 01:10:50.229600   61070 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0924 01:10:50.229703   61070 addons.go:69] Setting storage-provisioner=true in profile "no-preload-674057"
	I0924 01:10:50.229725   61070 addons.go:234] Setting addon storage-provisioner=true in "no-preload-674057"
	W0924 01:10:50.229733   61070 addons.go:243] addon storage-provisioner should already be in state true
	I0924 01:10:50.229735   61070 addons.go:69] Setting default-storageclass=true in profile "no-preload-674057"
	I0924 01:10:50.229756   61070 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-674057"
	I0924 01:10:50.229764   61070 host.go:66] Checking if "no-preload-674057" exists ...
	I0924 01:10:50.229789   61070 config.go:182] Loaded profile config "no-preload-674057": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 01:10:50.229781   61070 addons.go:69] Setting metrics-server=true in profile "no-preload-674057"
	I0924 01:10:50.229847   61070 addons.go:234] Setting addon metrics-server=true in "no-preload-674057"
	W0924 01:10:50.229855   61070 addons.go:243] addon metrics-server should already be in state true
	I0924 01:10:50.229871   61070 host.go:66] Checking if "no-preload-674057" exists ...
	I0924 01:10:50.230228   61070 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:10:50.230268   61070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:10:50.230320   61070 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:10:50.230351   61070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:10:50.230355   61070 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:10:50.230389   61070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:10:50.231531   61070 out.go:177] * Verifying Kubernetes components...
	I0924 01:10:50.233506   61070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:10:50.252485   61070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36253
	I0924 01:10:50.252844   61070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34399
	I0924 01:10:50.253068   61070 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:10:50.253217   61070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45253
	I0924 01:10:50.253653   61070 main.go:141] libmachine: Using API Version  1
	I0924 01:10:50.253676   61070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:10:50.253705   61070 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:10:50.254050   61070 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:10:50.254203   61070 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:10:50.254236   61070 main.go:141] libmachine: Using API Version  1
	I0924 01:10:50.254250   61070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:10:50.254591   61070 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:10:50.254814   61070 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:10:50.254829   61070 main.go:141] libmachine: Using API Version  1
	I0924 01:10:50.254851   61070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:10:50.254864   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetState
	I0924 01:10:50.254984   61070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:10:50.255389   61070 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:10:50.255983   61070 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:10:50.256028   61070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:10:50.258757   61070 addons.go:234] Setting addon default-storageclass=true in "no-preload-674057"
	W0924 01:10:50.258781   61070 addons.go:243] addon default-storageclass should already be in state true
	I0924 01:10:50.258861   61070 host.go:66] Checking if "no-preload-674057" exists ...
	I0924 01:10:50.259186   61070 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:10:50.259237   61070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:10:50.276636   61070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44681
	I0924 01:10:50.276806   61070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45851
	I0924 01:10:50.277196   61070 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:10:50.277312   61070 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:10:50.277771   61070 main.go:141] libmachine: Using API Version  1
	I0924 01:10:50.277795   61070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:10:50.278022   61070 main.go:141] libmachine: Using API Version  1
	I0924 01:10:50.278044   61070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:10:50.278213   61070 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:10:50.278380   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetState
	I0924 01:10:50.278485   61070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39655
	I0924 01:10:50.278806   61070 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:10:50.278877   61070 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:10:50.279106   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetState
	I0924 01:10:50.279244   61070 main.go:141] libmachine: Using API Version  1
	I0924 01:10:50.279260   61070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:10:50.279668   61070 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:10:50.280215   61070 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:10:50.280263   61070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:10:50.280315   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:10:50.281796   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:10:50.282123   61070 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:10:50.283561   61070 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0924 01:10:48.129312   61989 out.go:235]   - Booting up control plane ...
	I0924 01:10:48.129446   61989 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 01:10:48.139821   61989 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 01:10:48.143120   61989 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 01:10:48.144038   61989 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 01:10:48.146275   61989 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0924 01:10:50.283656   61070 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 01:10:50.283674   61070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 01:10:50.283688   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:10:50.284778   61070 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0924 01:10:50.284793   61070 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0924 01:10:50.284811   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:10:50.288106   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:10:50.288477   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:10:50.288498   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:10:50.288524   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:10:50.288679   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:10:50.288867   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:10:50.289019   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:10:50.289185   61070 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/no-preload-674057/id_rsa Username:docker}
	I0924 01:10:50.289309   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:10:50.289338   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:10:50.289613   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:10:50.289773   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:10:50.289938   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:10:50.290073   61070 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/no-preload-674057/id_rsa Username:docker}
	I0924 01:10:50.323722   61070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38397
	I0924 01:10:50.324221   61070 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:10:50.324873   61070 main.go:141] libmachine: Using API Version  1
	I0924 01:10:50.324901   61070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:10:50.325334   61070 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:10:50.325572   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetState
	I0924 01:10:50.327779   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:10:50.328071   61070 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 01:10:50.328092   61070 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 01:10:50.328119   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:10:50.331721   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:10:50.331988   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:10:50.332022   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:10:50.332209   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:10:50.332455   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:10:50.332658   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:10:50.332837   61070 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/no-preload-674057/id_rsa Username:docker}
	I0924 01:10:50.471507   61070 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 01:10:50.502289   61070 node_ready.go:35] waiting up to 6m0s for node "no-preload-674057" to be "Ready" ...
	I0924 01:10:50.522752   61070 node_ready.go:49] node "no-preload-674057" has status "Ready":"True"
	I0924 01:10:50.522784   61070 node_ready.go:38] duration metric: took 20.46398ms for node "no-preload-674057" to be "Ready" ...
	I0924 01:10:50.522797   61070 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:10:50.537297   61070 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nqwzr" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:50.576703   61070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 01:10:50.638655   61070 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0924 01:10:50.638679   61070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0924 01:10:50.673535   61070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 01:10:50.691443   61070 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0924 01:10:50.691475   61070 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0924 01:10:50.791572   61070 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 01:10:50.791596   61070 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0924 01:10:50.887143   61070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 01:10:51.535179   61070 main.go:141] libmachine: Making call to close driver server
	I0924 01:10:51.535211   61070 main.go:141] libmachine: (no-preload-674057) Calling .Close
	I0924 01:10:51.535247   61070 main.go:141] libmachine: Making call to close driver server
	I0924 01:10:51.535269   61070 main.go:141] libmachine: (no-preload-674057) Calling .Close
	I0924 01:10:51.535531   61070 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:10:51.535553   61070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:10:51.535563   61070 main.go:141] libmachine: Making call to close driver server
	I0924 01:10:51.535571   61070 main.go:141] libmachine: (no-preload-674057) Calling .Close
	I0924 01:10:51.535572   61070 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:10:51.535584   61070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:10:51.535591   61070 main.go:141] libmachine: Making call to close driver server
	I0924 01:10:51.535598   61070 main.go:141] libmachine: (no-preload-674057) Calling .Close
	I0924 01:10:51.535809   61070 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:10:51.535830   61070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:10:51.536069   61070 main.go:141] libmachine: (no-preload-674057) DBG | Closing plugin on server side
	I0924 01:10:51.536078   61070 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:10:51.536088   61070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:10:51.563511   61070 main.go:141] libmachine: Making call to close driver server
	I0924 01:10:51.563537   61070 main.go:141] libmachine: (no-preload-674057) Calling .Close
	I0924 01:10:51.563856   61070 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:10:51.563880   61070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:10:51.800860   61070 main.go:141] libmachine: Making call to close driver server
	I0924 01:10:51.800889   61070 main.go:141] libmachine: (no-preload-674057) Calling .Close
	I0924 01:10:51.801192   61070 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:10:51.801211   61070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:10:51.801224   61070 main.go:141] libmachine: Making call to close driver server
	I0924 01:10:51.801233   61070 main.go:141] libmachine: (no-preload-674057) Calling .Close
	I0924 01:10:51.801527   61070 main.go:141] libmachine: (no-preload-674057) DBG | Closing plugin on server side
	I0924 01:10:51.801559   61070 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:10:51.801567   61070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:10:51.801577   61070 addons.go:475] Verifying addon metrics-server=true in "no-preload-674057"
	I0924 01:10:51.803735   61070 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0924 01:10:51.805581   61070 addons.go:510] duration metric: took 1.575985263s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0924 01:10:52.544028   61070 pod_ready.go:103] pod "coredns-7c65d6cfc9-nqwzr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:10:53.564056   61070 pod_ready.go:93] pod "coredns-7c65d6cfc9-nqwzr" in "kube-system" namespace has status "Ready":"True"
	I0924 01:10:53.564089   61070 pod_ready.go:82] duration metric: took 3.026767371s for pod "coredns-7c65d6cfc9-nqwzr" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:53.564102   61070 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-x7cv6" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:53.573039   61070 pod_ready.go:93] pod "coredns-7c65d6cfc9-x7cv6" in "kube-system" namespace has status "Ready":"True"
	I0924 01:10:53.573076   61070 pod_ready.go:82] duration metric: took 8.965144ms for pod "coredns-7c65d6cfc9-x7cv6" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:53.573090   61070 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.081080   61070 pod_ready.go:93] pod "etcd-no-preload-674057" in "kube-system" namespace has status "Ready":"True"
	I0924 01:10:54.081105   61070 pod_ready.go:82] duration metric: took 508.007072ms for pod "etcd-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.081115   61070 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.087054   61070 pod_ready.go:93] pod "kube-apiserver-no-preload-674057" in "kube-system" namespace has status "Ready":"True"
	I0924 01:10:54.087079   61070 pod_ready.go:82] duration metric: took 5.957569ms for pod "kube-apiserver-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.087091   61070 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.094018   61070 pod_ready.go:93] pod "kube-controller-manager-no-preload-674057" in "kube-system" namespace has status "Ready":"True"
	I0924 01:10:54.094043   61070 pod_ready.go:82] duration metric: took 6.944048ms for pod "kube-controller-manager-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.094053   61070 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-k54d7" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.341307   61070 pod_ready.go:93] pod "kube-proxy-k54d7" in "kube-system" namespace has status "Ready":"True"
	I0924 01:10:54.341326   61070 pod_ready.go:82] duration metric: took 247.267987ms for pod "kube-proxy-k54d7" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.341335   61070 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.741702   61070 pod_ready.go:93] pod "kube-scheduler-no-preload-674057" in "kube-system" namespace has status "Ready":"True"
	I0924 01:10:54.741732   61070 pod_ready.go:82] duration metric: took 400.389532ms for pod "kube-scheduler-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.741742   61070 pod_ready.go:39] duration metric: took 4.218931841s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:10:54.741759   61070 api_server.go:52] waiting for apiserver process to appear ...
	I0924 01:10:54.741827   61070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:10:54.758692   61070 api_server.go:72] duration metric: took 4.529120201s to wait for apiserver process to appear ...
	I0924 01:10:54.758723   61070 api_server.go:88] waiting for apiserver healthz status ...
	I0924 01:10:54.758744   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:10:54.764587   61070 api_server.go:279] https://192.168.50.161:8443/healthz returned 200:
	ok
	I0924 01:10:54.765620   61070 api_server.go:141] control plane version: v1.31.1
	I0924 01:10:54.765639   61070 api_server.go:131] duration metric: took 6.909845ms to wait for apiserver health ...
	I0924 01:10:54.765646   61070 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 01:10:54.945080   61070 system_pods.go:59] 9 kube-system pods found
	I0924 01:10:54.945121   61070 system_pods.go:61] "coredns-7c65d6cfc9-nqwzr" [9773e4bf-9848-47d8-b87b-897fbdd22d42] Running
	I0924 01:10:54.945128   61070 system_pods.go:61] "coredns-7c65d6cfc9-x7cv6" [9e96941a-b045-48e2-be06-50cc29f8ec25] Running
	I0924 01:10:54.945134   61070 system_pods.go:61] "etcd-no-preload-674057" [3ed2a57d-06a2-4ee2-9bc0-9042c1a88d09] Running
	I0924 01:10:54.945140   61070 system_pods.go:61] "kube-apiserver-no-preload-674057" [e915c4f9-a44e-4d36-9bf4-033de2a512f2] Running
	I0924 01:10:54.945145   61070 system_pods.go:61] "kube-controller-manager-no-preload-674057" [71492ec7-1fd8-49a3-973d-b62141c7b768] Running
	I0924 01:10:54.945150   61070 system_pods.go:61] "kube-proxy-k54d7" [b67ac411-52b5-4d58-9db3-d2d92b63a21f] Running
	I0924 01:10:54.945161   61070 system_pods.go:61] "kube-scheduler-no-preload-674057" [927b2a09-4fb1-499c-a2e6-6185a88facdd] Running
	I0924 01:10:54.945172   61070 system_pods.go:61] "metrics-server-6867b74b74-w5j2x" [57fd868f-ab5c-495a-869a-45e8f81f4014] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:10:54.945180   61070 system_pods.go:61] "storage-provisioner" [341fd764-a3bd-4d28-bc6a-6ec9fa8a5347] Running
	I0924 01:10:54.945191   61070 system_pods.go:74] duration metric: took 179.539019ms to wait for pod list to return data ...
	I0924 01:10:54.945205   61070 default_sa.go:34] waiting for default service account to be created ...
	I0924 01:10:55.141944   61070 default_sa.go:45] found service account: "default"
	I0924 01:10:55.141973   61070 default_sa.go:55] duration metric: took 196.760922ms for default service account to be created ...
	I0924 01:10:55.141984   61070 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 01:10:55.344235   61070 system_pods.go:86] 9 kube-system pods found
	I0924 01:10:55.344273   61070 system_pods.go:89] "coredns-7c65d6cfc9-nqwzr" [9773e4bf-9848-47d8-b87b-897fbdd22d42] Running
	I0924 01:10:55.344282   61070 system_pods.go:89] "coredns-7c65d6cfc9-x7cv6" [9e96941a-b045-48e2-be06-50cc29f8ec25] Running
	I0924 01:10:55.344288   61070 system_pods.go:89] "etcd-no-preload-674057" [3ed2a57d-06a2-4ee2-9bc0-9042c1a88d09] Running
	I0924 01:10:55.344293   61070 system_pods.go:89] "kube-apiserver-no-preload-674057" [e915c4f9-a44e-4d36-9bf4-033de2a512f2] Running
	I0924 01:10:55.344297   61070 system_pods.go:89] "kube-controller-manager-no-preload-674057" [71492ec7-1fd8-49a3-973d-b62141c7b768] Running
	I0924 01:10:55.344301   61070 system_pods.go:89] "kube-proxy-k54d7" [b67ac411-52b5-4d58-9db3-d2d92b63a21f] Running
	I0924 01:10:55.344304   61070 system_pods.go:89] "kube-scheduler-no-preload-674057" [927b2a09-4fb1-499c-a2e6-6185a88facdd] Running
	I0924 01:10:55.344310   61070 system_pods.go:89] "metrics-server-6867b74b74-w5j2x" [57fd868f-ab5c-495a-869a-45e8f81f4014] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:10:55.344315   61070 system_pods.go:89] "storage-provisioner" [341fd764-a3bd-4d28-bc6a-6ec9fa8a5347] Running
	I0924 01:10:55.344324   61070 system_pods.go:126] duration metric: took 202.334823ms to wait for k8s-apps to be running ...
	I0924 01:10:55.344361   61070 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 01:10:55.344406   61070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 01:10:55.361050   61070 system_svc.go:56] duration metric: took 16.6812ms WaitForService to wait for kubelet
	I0924 01:10:55.361082   61070 kubeadm.go:582] duration metric: took 5.13151522s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 01:10:55.361104   61070 node_conditions.go:102] verifying NodePressure condition ...
	I0924 01:10:55.541766   61070 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 01:10:55.541799   61070 node_conditions.go:123] node cpu capacity is 2
	I0924 01:10:55.541812   61070 node_conditions.go:105] duration metric: took 180.702708ms to run NodePressure ...
	I0924 01:10:55.541826   61070 start.go:241] waiting for startup goroutines ...
	I0924 01:10:55.541837   61070 start.go:246] waiting for cluster config update ...
	I0924 01:10:55.541850   61070 start.go:255] writing updated cluster config ...
	I0924 01:10:55.542100   61070 ssh_runner.go:195] Run: rm -f paused
	I0924 01:10:55.590629   61070 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0924 01:10:55.592850   61070 out.go:177] * Done! kubectl is now configured to use "no-preload-674057" cluster and "default" namespace by default
	I0924 01:11:28.148929   61989 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0924 01:11:28.149086   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:11:28.149360   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:11:33.150102   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:11:33.150283   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:11:43.151281   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:11:43.151540   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:12:03.152338   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:12:03.152562   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:12:43.151221   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:12:43.151503   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:12:43.151532   61989 kubeadm.go:310] 
	I0924 01:12:43.151585   61989 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0924 01:12:43.151645   61989 kubeadm.go:310] 		timed out waiting for the condition
	I0924 01:12:43.151655   61989 kubeadm.go:310] 
	I0924 01:12:43.151729   61989 kubeadm.go:310] 	This error is likely caused by:
	I0924 01:12:43.151779   61989 kubeadm.go:310] 		- The kubelet is not running
	I0924 01:12:43.151940   61989 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0924 01:12:43.151954   61989 kubeadm.go:310] 
	I0924 01:12:43.152095   61989 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0924 01:12:43.152154   61989 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0924 01:12:43.152201   61989 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0924 01:12:43.152207   61989 kubeadm.go:310] 
	I0924 01:12:43.152294   61989 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0924 01:12:43.152411   61989 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0924 01:12:43.152424   61989 kubeadm.go:310] 
	I0924 01:12:43.152565   61989 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0924 01:12:43.152653   61989 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0924 01:12:43.152718   61989 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0924 01:12:43.152794   61989 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0924 01:12:43.152802   61989 kubeadm.go:310] 
	I0924 01:12:43.153600   61989 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 01:12:43.153699   61989 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0924 01:12:43.153757   61989 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0924 01:12:43.153808   61989 kubeadm.go:394] duration metric: took 7m57.944266289s to StartCluster
	I0924 01:12:43.153845   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:12:43.153894   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:12:43.199866   61989 cri.go:89] found id: ""
	I0924 01:12:43.199896   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.199908   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:12:43.199916   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:12:43.199975   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:12:43.235387   61989 cri.go:89] found id: ""
	I0924 01:12:43.235420   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.235432   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:12:43.235441   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:12:43.235513   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:12:43.271255   61989 cri.go:89] found id: ""
	I0924 01:12:43.271290   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.271303   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:12:43.271312   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:12:43.271380   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:12:43.305842   61989 cri.go:89] found id: ""
	I0924 01:12:43.305870   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.305882   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:12:43.305891   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:12:43.305952   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:12:43.341956   61989 cri.go:89] found id: ""
	I0924 01:12:43.341983   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.342005   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:12:43.342013   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:12:43.342093   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:12:43.376362   61989 cri.go:89] found id: ""
	I0924 01:12:43.376399   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.376421   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:12:43.376431   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:12:43.376487   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:12:43.409351   61989 cri.go:89] found id: ""
	I0924 01:12:43.409378   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.409387   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:12:43.409392   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:12:43.409459   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:12:43.442446   61989 cri.go:89] found id: ""
	I0924 01:12:43.442479   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.442487   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:12:43.442497   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:12:43.442507   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:12:43.498980   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:12:43.499020   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:12:43.520090   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:12:43.520120   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:12:43.612212   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:12:43.612242   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:12:43.612255   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:12:43.727355   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:12:43.727395   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0924 01:12:43.770163   61989 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0924 01:12:43.770217   61989 out.go:270] * 
	W0924 01:12:43.770282   61989 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout/stderr: byte-for-byte identical to the kubeadm init output quoted above
	
	W0924 01:12:43.770297   61989 out.go:270] * 
	W0924 01:12:43.771298   61989 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 01:12:43.775708   61989 out.go:201] 
	W0924 01:12:43.777139   61989 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout/stderr: byte-for-byte identical to the kubeadm init output quoted above
	
	W0924 01:12:43.777186   61989 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0924 01:12:43.777214   61989 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0924 01:12:43.779580   61989 out.go:201] 
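	The failure above is the K8S_KUBELET_NOT_RUNNING pattern that minikube's own suggestion targets: every kubelet health probe on 127.0.0.1:10248 is refused, so the control-plane static pods never come up and wait-control-plane times out. A minimal troubleshooting sketch, using only commands already quoted in this log (the profile name below is a placeholder, not taken from this run):
	
	    # Inspect why the kubelet never started (commands from the kubeadm output above)
	    systemctl status kubelet
	    journalctl -xeu kubelet
	
	    # List any control-plane containers CRI-O did manage to start
	    crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	
	    # Retry with the cgroup driver minikube suggests for this failure mode
	    # (<profile> is a placeholder for the failing profile name)
	    minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
	
	If the retry still fails, 'minikube logs --file=logs.txt' captures the full log for the upstream issue linked above.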
	
	
	==> CRI-O <==
	Sep 24 01:19:57 no-preload-674057 crio[713]: time="2024-09-24 01:19:57.769713302Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:956eecba9ada0be0d00755a1626d07704cb18a4f903cd97cf8eef59b18ef2f21,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-w5j2x,Uid:57fd868f-ab5c-495a-869a-45e8f81f4014,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727140251975605888,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-w5j2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57fd868f-ab5c-495a-869a-45e8f81f4014,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-24T01:10:51.665704933Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:57b40fcbd0807c17676ba374dbd40e2d75abea18ff315410bde80ed660c31c23,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:341fd764-a3bd-4d28-bc6a-6ec9fa8a5347,Na
mespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727140251830635763,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 341fd764-a3bd-4d28-bc6a-6ec9fa8a5347,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volu
mes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-24T01:10:51.523508303Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:65ebe9c8dd9a0339573e9d93c2b64c305b85201df1f102fed70e753195cf5664,Metadata:&PodSandboxMetadata{Name:kube-proxy-k54d7,Uid:b67ac411-52b5-4d58-9db3-d2d92b63a21f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727140250251338661,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-k54d7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67ac411-52b5-4d58-9db3-d2d92b63a21f,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-24T01:10:49.328295089Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fed5e74c9deb3cb771b4f49d24d0e43c93e894f00fe7b710bee37a619321ab7c,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-x7cv6,Uid
:9e96941a-b045-48e2-be06-50cc29f8ec25,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727140250140106543,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-x7cv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e96941a-b045-48e2-be06-50cc29f8ec25,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-24T01:10:49.830821942Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f3501188d9975eaf62cb396040385cf0033a216e7b04e79c06685ffe9ee2d043,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-nqwzr,Uid:9773e4bf-9848-47d8-b87b-897fbdd22d42,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727140250115810171,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-nqwzr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9773e4bf-9848-47d8-b87b-897fbdd22d42,k8s-app: kube-dns,pod-templat
e-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-24T01:10:49.807079058Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fab3a8a805035b1fc85813921d437bab10f5c1226e9b266f0ec5c6024a43e605,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-674057,Uid:c7de31ffdfb48cb7290a847c86901da6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727140239189108118,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-674057,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7de31ffdfb48cb7290a847c86901da6,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.161:2379,kubernetes.io/config.hash: c7de31ffdfb48cb7290a847c86901da6,kubernetes.io/config.seen: 2024-09-24T01:10:38.730428448Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:00f003002a73a80382ae79a7549edc7859ccf5c0a479dfc4924798e230c416fa,Met
adata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-674057,Uid:abc52729a304907dc88bd3e55458bb01,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727140239185550366,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-674057,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abc52729a304907dc88bd3e55458bb01,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: abc52729a304907dc88bd3e55458bb01,kubernetes.io/config.seen: 2024-09-24T01:10:38.730434607Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0e77ff21d732e04e7b53fa1e4bc14a0da1db330c2e646dbd6d35d3068e41e38a,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-674057,Uid:96541f6d2312e39b9e24036ad99634a2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727140239174123989,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name:
POD,io.kubernetes.pod.name: kube-scheduler-no-preload-674057,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96541f6d2312e39b9e24036ad99634a2,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 96541f6d2312e39b9e24036ad99634a2,kubernetes.io/config.seen: 2024-09-24T01:10:38.730435470Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2a2c0e8c2b5e8eb30fe4047cfb4f117a54fc33989a27b847ba15d90174f28a16,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-674057,Uid:fa7656a22c606fc5e77123d16ca79be6,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727140239173333631,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-674057,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa7656a22c606fc5e77123d16ca79be6,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.161:844
3,kubernetes.io/config.hash: fa7656a22c606fc5e77123d16ca79be6,kubernetes.io/config.seen: 2024-09-24T01:10:38.730433160Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c3e3133288637067f3b60490592cabf6d6e67fa80095eeadd16d5c3080c640ce,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-674057,Uid:fa7656a22c606fc5e77123d16ca79be6,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727139916192402444,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-674057,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa7656a22c606fc5e77123d16ca79be6,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.161:8443,kubernetes.io/config.hash: fa7656a22c606fc5e77123d16ca79be6,kubernetes.io/config.seen: 2024-09-24T01:05:15.696200707Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/inter
ceptors.go:74" id=0babd6e0-5d93-4a9e-955a-57e24f7af383 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 24 01:19:57 no-preload-674057 crio[713]: time="2024-09-24 01:19:57.770494476Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c7c150f3-bf56-4a79-b3ca-39a265ccf239 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:19:57 no-preload-674057 crio[713]: time="2024-09-24 01:19:57.770565710Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c7c150f3-bf56-4a79-b3ca-39a265ccf239 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:19:57 no-preload-674057 crio[713]: time="2024-09-24 01:19:57.770798970Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:edf08e56311a79e955d8c3b3e5c0237e909241ae5ed6abafb9b223a0f00c867a,PodSandboxId:57b40fcbd0807c17676ba374dbd40e2d75abea18ff315410bde80ed660c31c23,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727140251972979489,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 341fd764-a3bd-4d28-bc6a-6ec9fa8a5347,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db4bc3c13ebdbc5c4539c991dbab846860e5b49cd7e690e6b49bd9215e9762f6,PodSandboxId:fed5e74c9deb3cb771b4f49d24d0e43c93e894f00fe7b710bee37a619321ab7c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727140250773846312,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x7cv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e96941a-b045-48e2-be06-50cc29f8ec25,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ef6d000d3e5ecbe396992b96fddd175d3cb6df9d1824bb82ae9cbd56bed6ef4,PodSandboxId:f3501188d9975eaf62cb396040385cf0033a216e7b04e79c06685ffe9ee2d043,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727140250712878199,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nqwzr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97
73e4bf-9848-47d8-b87b-897fbdd22d42,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:744c86dbbd3bf9e31e8873c6c7d05e0ac40c341d2a7c78069d5bce6b9aba1189,PodSandboxId:65ebe9c8dd9a0339573e9d93c2b64c305b85201df1f102fed70e753195cf5664,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1727140250544669443,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k54d7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67ac411-52b5-4d58-9db3-d2d92b63a21f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acbd654a5d68d1567f2d1b46fc60c70b2ee89c7dcc7c3689321af0ba038eff0a,PodSandboxId:fab3a8a805035b1fc85813921d437bab10f5c1226e9b266f0ec5c6024a43e605,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727140239447011353,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-674057,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7de31ffdfb48cb7290a847c86901da6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e6059401f3f315dc3255a45dc661eab16b66ea2e700e0b53a186b2cf0aa08a8,PodSandboxId:0e77ff21d732e04e7b53fa1e4bc14a0da1db330c2e646dbd6d35d3068e41e38a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727140239463678924,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-674057,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96541f6d2312e39b9e24036ad99634a2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:617bb5bd9dd235df5ca90567c94ee2c487962b8737e1819dc58c34920fd9d6d7,PodSandboxId:00f003002a73a80382ae79a7549edc7859ccf5c0a479dfc4924798e230c416fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727140239366255716,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-674057,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abc52729a304907dc88bd3e55458bb01,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73012bfbb327d6a0aded70e257513b7f40ed40bd11d289f20a1bfcdcbf97ab7,PodSandboxId:2a2c0e8c2b5e8eb30fe4047cfb4f117a54fc33989a27b847ba15d90174f28a16,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727140239322135110,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-674057,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa7656a22c606fc5e77123d16ca79be6,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7b997752647b18a8adc4d689497d720e213df1ae0b65d5be49b0bb34cd09b1f,PodSandboxId:c3e3133288637067f3b60490592cabf6d6e67fa80095eeadd16d5c3080c640ce,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727139937866763411,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-674057,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa7656a22c606fc5e77123d16ca79be6,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c7c150f3-bf56-4a79-b3ca-39a265ccf239 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:19:57 no-preload-674057 crio[713]: time="2024-09-24 01:19:57.781401065Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=899cd1a5-1b48-4bb9-9140-07d9f2833a08 name=/runtime.v1.RuntimeService/Version
	Sep 24 01:19:57 no-preload-674057 crio[713]: time="2024-09-24 01:19:57.781482200Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=899cd1a5-1b48-4bb9-9140-07d9f2833a08 name=/runtime.v1.RuntimeService/Version
	Sep 24 01:19:57 no-preload-674057 crio[713]: time="2024-09-24 01:19:57.782620113Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8e5ffb97-6fd3-4919-a37d-5711c0212d90 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:19:57 no-preload-674057 crio[713]: time="2024-09-24 01:19:57.782950679Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140797782930851,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8e5ffb97-6fd3-4919-a37d-5711c0212d90 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:19:57 no-preload-674057 crio[713]: time="2024-09-24 01:19:57.783701491Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3da2fcac-b41b-4712-82c7-0fe343437aa6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:19:57 no-preload-674057 crio[713]: time="2024-09-24 01:19:57.783765918Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3da2fcac-b41b-4712-82c7-0fe343437aa6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:19:57 no-preload-674057 crio[713]: time="2024-09-24 01:19:57.784069249Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:edf08e56311a79e955d8c3b3e5c0237e909241ae5ed6abafb9b223a0f00c867a,PodSandboxId:57b40fcbd0807c17676ba374dbd40e2d75abea18ff315410bde80ed660c31c23,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727140251972979489,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 341fd764-a3bd-4d28-bc6a-6ec9fa8a5347,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db4bc3c13ebdbc5c4539c991dbab846860e5b49cd7e690e6b49bd9215e9762f6,PodSandboxId:fed5e74c9deb3cb771b4f49d24d0e43c93e894f00fe7b710bee37a619321ab7c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727140250773846312,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x7cv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e96941a-b045-48e2-be06-50cc29f8ec25,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ef6d000d3e5ecbe396992b96fddd175d3cb6df9d1824bb82ae9cbd56bed6ef4,PodSandboxId:f3501188d9975eaf62cb396040385cf0033a216e7b04e79c06685ffe9ee2d043,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727140250712878199,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nqwzr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97
73e4bf-9848-47d8-b87b-897fbdd22d42,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:744c86dbbd3bf9e31e8873c6c7d05e0ac40c341d2a7c78069d5bce6b9aba1189,PodSandboxId:65ebe9c8dd9a0339573e9d93c2b64c305b85201df1f102fed70e753195cf5664,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1727140250544669443,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k54d7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67ac411-52b5-4d58-9db3-d2d92b63a21f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acbd654a5d68d1567f2d1b46fc60c70b2ee89c7dcc7c3689321af0ba038eff0a,PodSandboxId:fab3a8a805035b1fc85813921d437bab10f5c1226e9b266f0ec5c6024a43e605,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727140239447011353,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-674057,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7de31ffdfb48cb7290a847c86901da6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e6059401f3f315dc3255a45dc661eab16b66ea2e700e0b53a186b2cf0aa08a8,PodSandboxId:0e77ff21d732e04e7b53fa1e4bc14a0da1db330c2e646dbd6d35d3068e41e38a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727140239463678924,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-674057,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96541f6d2312e39b9e24036ad99634a2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:617bb5bd9dd235df5ca90567c94ee2c487962b8737e1819dc58c34920fd9d6d7,PodSandboxId:00f003002a73a80382ae79a7549edc7859ccf5c0a479dfc4924798e230c416fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727140239366255716,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-674057,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abc52729a304907dc88bd3e55458bb01,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73012bfbb327d6a0aded70e257513b7f40ed40bd11d289f20a1bfcdcbf97ab7,PodSandboxId:2a2c0e8c2b5e8eb30fe4047cfb4f117a54fc33989a27b847ba15d90174f28a16,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727140239322135110,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-674057,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa7656a22c606fc5e77123d16ca79be6,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7b997752647b18a8adc4d689497d720e213df1ae0b65d5be49b0bb34cd09b1f,PodSandboxId:c3e3133288637067f3b60490592cabf6d6e67fa80095eeadd16d5c3080c640ce,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727139937866763411,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-674057,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa7656a22c606fc5e77123d16ca79be6,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3da2fcac-b41b-4712-82c7-0fe343437aa6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:19:57 no-preload-674057 crio[713]: time="2024-09-24 01:19:57.823647137Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9b84bb35-0567-4850-ad77-e89d6a815966 name=/runtime.v1.RuntimeService/Version
	Sep 24 01:19:57 no-preload-674057 crio[713]: time="2024-09-24 01:19:57.823752271Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9b84bb35-0567-4850-ad77-e89d6a815966 name=/runtime.v1.RuntimeService/Version
	Sep 24 01:19:57 no-preload-674057 crio[713]: time="2024-09-24 01:19:57.824926288Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5badd028-a950-4eb1-9c2c-b0c9ffafda2b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:19:57 no-preload-674057 crio[713]: time="2024-09-24 01:19:57.825382766Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140797825358542,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5badd028-a950-4eb1-9c2c-b0c9ffafda2b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:19:57 no-preload-674057 crio[713]: time="2024-09-24 01:19:57.825779181Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dabaeefe-608d-4c08-a4c8-92774d84e41d name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:19:57 no-preload-674057 crio[713]: time="2024-09-24 01:19:57.825878422Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dabaeefe-608d-4c08-a4c8-92774d84e41d name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:19:57 no-preload-674057 crio[713]: time="2024-09-24 01:19:57.826188194Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:edf08e56311a79e955d8c3b3e5c0237e909241ae5ed6abafb9b223a0f00c867a,PodSandboxId:57b40fcbd0807c17676ba374dbd40e2d75abea18ff315410bde80ed660c31c23,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727140251972979489,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 341fd764-a3bd-4d28-bc6a-6ec9fa8a5347,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db4bc3c13ebdbc5c4539c991dbab846860e5b49cd7e690e6b49bd9215e9762f6,PodSandboxId:fed5e74c9deb3cb771b4f49d24d0e43c93e894f00fe7b710bee37a619321ab7c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727140250773846312,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x7cv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e96941a-b045-48e2-be06-50cc29f8ec25,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ef6d000d3e5ecbe396992b96fddd175d3cb6df9d1824bb82ae9cbd56bed6ef4,PodSandboxId:f3501188d9975eaf62cb396040385cf0033a216e7b04e79c06685ffe9ee2d043,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727140250712878199,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nqwzr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97
73e4bf-9848-47d8-b87b-897fbdd22d42,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:744c86dbbd3bf9e31e8873c6c7d05e0ac40c341d2a7c78069d5bce6b9aba1189,PodSandboxId:65ebe9c8dd9a0339573e9d93c2b64c305b85201df1f102fed70e753195cf5664,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1727140250544669443,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k54d7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67ac411-52b5-4d58-9db3-d2d92b63a21f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acbd654a5d68d1567f2d1b46fc60c70b2ee89c7dcc7c3689321af0ba038eff0a,PodSandboxId:fab3a8a805035b1fc85813921d437bab10f5c1226e9b266f0ec5c6024a43e605,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727140239447011353,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-674057,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7de31ffdfb48cb7290a847c86901da6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e6059401f3f315dc3255a45dc661eab16b66ea2e700e0b53a186b2cf0aa08a8,PodSandboxId:0e77ff21d732e04e7b53fa1e4bc14a0da1db330c2e646dbd6d35d3068e41e38a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727140239463678924,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-674057,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96541f6d2312e39b9e24036ad99634a2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:617bb5bd9dd235df5ca90567c94ee2c487962b8737e1819dc58c34920fd9d6d7,PodSandboxId:00f003002a73a80382ae79a7549edc7859ccf5c0a479dfc4924798e230c416fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727140239366255716,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-674057,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abc52729a304907dc88bd3e55458bb01,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73012bfbb327d6a0aded70e257513b7f40ed40bd11d289f20a1bfcdcbf97ab7,PodSandboxId:2a2c0e8c2b5e8eb30fe4047cfb4f117a54fc33989a27b847ba15d90174f28a16,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727140239322135110,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-674057,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa7656a22c606fc5e77123d16ca79be6,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7b997752647b18a8adc4d689497d720e213df1ae0b65d5be49b0bb34cd09b1f,PodSandboxId:c3e3133288637067f3b60490592cabf6d6e67fa80095eeadd16d5c3080c640ce,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727139937866763411,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-674057,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa7656a22c606fc5e77123d16ca79be6,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dabaeefe-608d-4c08-a4c8-92774d84e41d name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:19:57 no-preload-674057 crio[713]: time="2024-09-24 01:19:57.859979836Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f1c6f15c-c47f-4224-bb13-f41b2c932631 name=/runtime.v1.RuntimeService/Version
	Sep 24 01:19:57 no-preload-674057 crio[713]: time="2024-09-24 01:19:57.860108457Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f1c6f15c-c47f-4224-bb13-f41b2c932631 name=/runtime.v1.RuntimeService/Version
	Sep 24 01:19:57 no-preload-674057 crio[713]: time="2024-09-24 01:19:57.861097173Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2792060a-dbd4-4839-a631-f0da44fc5b25 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:19:57 no-preload-674057 crio[713]: time="2024-09-24 01:19:57.861459337Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140797861438267,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2792060a-dbd4-4839-a631-f0da44fc5b25 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:19:57 no-preload-674057 crio[713]: time="2024-09-24 01:19:57.862085637Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ed1c15ca-6542-45e5-ae09-683366ab74ef name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:19:57 no-preload-674057 crio[713]: time="2024-09-24 01:19:57.862152658Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ed1c15ca-6542-45e5-ae09-683366ab74ef name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:19:57 no-preload-674057 crio[713]: time="2024-09-24 01:19:57.862360102Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:edf08e56311a79e955d8c3b3e5c0237e909241ae5ed6abafb9b223a0f00c867a,PodSandboxId:57b40fcbd0807c17676ba374dbd40e2d75abea18ff315410bde80ed660c31c23,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727140251972979489,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 341fd764-a3bd-4d28-bc6a-6ec9fa8a5347,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db4bc3c13ebdbc5c4539c991dbab846860e5b49cd7e690e6b49bd9215e9762f6,PodSandboxId:fed5e74c9deb3cb771b4f49d24d0e43c93e894f00fe7b710bee37a619321ab7c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727140250773846312,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x7cv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e96941a-b045-48e2-be06-50cc29f8ec25,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ef6d000d3e5ecbe396992b96fddd175d3cb6df9d1824bb82ae9cbd56bed6ef4,PodSandboxId:f3501188d9975eaf62cb396040385cf0033a216e7b04e79c06685ffe9ee2d043,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727140250712878199,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nqwzr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97
73e4bf-9848-47d8-b87b-897fbdd22d42,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:744c86dbbd3bf9e31e8873c6c7d05e0ac40c341d2a7c78069d5bce6b9aba1189,PodSandboxId:65ebe9c8dd9a0339573e9d93c2b64c305b85201df1f102fed70e753195cf5664,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1727140250544669443,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k54d7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67ac411-52b5-4d58-9db3-d2d92b63a21f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acbd654a5d68d1567f2d1b46fc60c70b2ee89c7dcc7c3689321af0ba038eff0a,PodSandboxId:fab3a8a805035b1fc85813921d437bab10f5c1226e9b266f0ec5c6024a43e605,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727140239447011353,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-674057,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7de31ffdfb48cb7290a847c86901da6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e6059401f3f315dc3255a45dc661eab16b66ea2e700e0b53a186b2cf0aa08a8,PodSandboxId:0e77ff21d732e04e7b53fa1e4bc14a0da1db330c2e646dbd6d35d3068e41e38a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727140239463678924,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-674057,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96541f6d2312e39b9e24036ad99634a2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:617bb5bd9dd235df5ca90567c94ee2c487962b8737e1819dc58c34920fd9d6d7,PodSandboxId:00f003002a73a80382ae79a7549edc7859ccf5c0a479dfc4924798e230c416fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727140239366255716,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-674057,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abc52729a304907dc88bd3e55458bb01,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73012bfbb327d6a0aded70e257513b7f40ed40bd11d289f20a1bfcdcbf97ab7,PodSandboxId:2a2c0e8c2b5e8eb30fe4047cfb4f117a54fc33989a27b847ba15d90174f28a16,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727140239322135110,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-674057,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa7656a22c606fc5e77123d16ca79be6,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7b997752647b18a8adc4d689497d720e213df1ae0b65d5be49b0bb34cd09b1f,PodSandboxId:c3e3133288637067f3b60490592cabf6d6e67fa80095eeadd16d5c3080c640ce,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727139937866763411,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-674057,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa7656a22c606fc5e77123d16ca79be6,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ed1c15ca-6542-45e5-ae09-683366ab74ef name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	edf08e56311a7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   57b40fcbd0807       storage-provisioner
	db4bc3c13ebdb       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   fed5e74c9deb3       coredns-7c65d6cfc9-x7cv6
	7ef6d000d3e5e       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   f3501188d9975       coredns-7c65d6cfc9-nqwzr
	744c86dbbd3bf       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   9 minutes ago       Running             kube-proxy                0                   65ebe9c8dd9a0       kube-proxy-k54d7
	0e6059401f3f3       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   9 minutes ago       Running             kube-scheduler            2                   0e77ff21d732e       kube-scheduler-no-preload-674057
	acbd654a5d68d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   fab3a8a805035       etcd-no-preload-674057
	617bb5bd9dd23       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   9 minutes ago       Running             kube-controller-manager   3                   00f003002a73a       kube-controller-manager-no-preload-674057
	e73012bfbb327       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   9 minutes ago       Running             kube-apiserver            3                   2a2c0e8c2b5e8       kube-apiserver-no-preload-674057
	c7b997752647b       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   14 minutes ago      Exited              kube-apiserver            2                   c3e3133288637       kube-apiserver-no-preload-674057
	
	
	==> coredns [7ef6d000d3e5ecbe396992b96fddd175d3cb6df9d1824bb82ae9cbd56bed6ef4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [db4bc3c13ebdbc5c4539c991dbab846860e5b49cd7e690e6b49bd9215e9762f6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               no-preload-674057
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-674057
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c
	                    minikube.k8s.io/name=no-preload-674057
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_24T01_10_45_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 01:10:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-674057
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 01:19:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 01:16:00 +0000   Tue, 24 Sep 2024 01:10:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 01:16:00 +0000   Tue, 24 Sep 2024 01:10:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 01:16:00 +0000   Tue, 24 Sep 2024 01:10:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 01:16:00 +0000   Tue, 24 Sep 2024 01:10:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.161
	  Hostname:    no-preload-674057
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f9bafe769c6c4631983d312dbb40b799
	  System UUID:                f9bafe76-9c6c-4631-983d-312dbb40b799
	  Boot ID:                    6e5d1535-fa44-4599-9002-65ba3216c402
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-nqwzr                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m9s
	  kube-system                 coredns-7c65d6cfc9-x7cv6                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m9s
	  kube-system                 etcd-no-preload-674057                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m16s
	  kube-system                 kube-apiserver-no-preload-674057             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                 kube-controller-manager-no-preload-674057    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                 kube-proxy-k54d7                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m9s
	  kube-system                 kube-scheduler-no-preload-674057             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                 metrics-server-6867b74b74-w5j2x              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m7s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m6s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m20s (x8 over 9m20s)  kubelet          Node no-preload-674057 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m20s (x8 over 9m20s)  kubelet          Node no-preload-674057 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m20s (x7 over 9m20s)  kubelet          Node no-preload-674057 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m14s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m14s                  kubelet          Node no-preload-674057 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m14s                  kubelet          Node no-preload-674057 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m14s                  kubelet          Node no-preload-674057 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m10s                  node-controller  Node no-preload-674057 event: Registered Node no-preload-674057 in Controller
	
	
	==> dmesg <==
	[  +0.052968] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043521] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.099541] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.009811] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.540530] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.364632] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.057714] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064173] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.200241] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.137228] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[  +0.302198] systemd-fstab-generator[703]: Ignoring "noauto" option for root device
	[Sep24 01:05] systemd-fstab-generator[1244]: Ignoring "noauto" option for root device
	[  +0.060516] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.274059] systemd-fstab-generator[1365]: Ignoring "noauto" option for root device
	[  +3.183009] kauditd_printk_skb: 87 callbacks suppressed
	[Sep24 01:06] kauditd_printk_skb: 88 callbacks suppressed
	[Sep24 01:10] systemd-fstab-generator[3120]: Ignoring "noauto" option for root device
	[  +0.059187] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.488274] systemd-fstab-generator[3447]: Ignoring "noauto" option for root device
	[  +0.080280] kauditd_printk_skb: 55 callbacks suppressed
	[  +5.670829] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.169645] systemd-fstab-generator[3665]: Ignoring "noauto" option for root device
	[Sep24 01:11] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [acbd654a5d68d1567f2d1b46fc60c70b2ee89c7dcc7c3689321af0ba038eff0a] <==
	{"level":"info","ts":"2024-09-24T01:10:39.894813Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-24T01:10:39.895130Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"3fbdf04b5b0eb504","initial-advertise-peer-urls":["https://192.168.50.161:2380"],"listen-peer-urls":["https://192.168.50.161:2380"],"advertise-client-urls":["https://192.168.50.161:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.161:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-24T01:10:39.895168Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-24T01:10:39.895246Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.161:2380"}
	{"level":"info","ts":"2024-09-24T01:10:39.895303Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.161:2380"}
	{"level":"info","ts":"2024-09-24T01:10:39.919127Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3fbdf04b5b0eb504 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-24T01:10:39.919331Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3fbdf04b5b0eb504 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-24T01:10:39.919423Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3fbdf04b5b0eb504 received MsgPreVoteResp from 3fbdf04b5b0eb504 at term 1"}
	{"level":"info","ts":"2024-09-24T01:10:39.919799Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3fbdf04b5b0eb504 became candidate at term 2"}
	{"level":"info","ts":"2024-09-24T01:10:39.919898Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3fbdf04b5b0eb504 received MsgVoteResp from 3fbdf04b5b0eb504 at term 2"}
	{"level":"info","ts":"2024-09-24T01:10:39.919928Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3fbdf04b5b0eb504 became leader at term 2"}
	{"level":"info","ts":"2024-09-24T01:10:39.919982Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3fbdf04b5b0eb504 elected leader 3fbdf04b5b0eb504 at term 2"}
	{"level":"info","ts":"2024-09-24T01:10:39.924212Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T01:10:39.929392Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"3fbdf04b5b0eb504","local-member-attributes":"{Name:no-preload-674057 ClientURLs:[https://192.168.50.161:2379]}","request-path":"/0/members/3fbdf04b5b0eb504/attributes","cluster-id":"9aa7cd058091608f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-24T01:10:39.929540Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-24T01:10:39.930710Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-24T01:10:39.933722Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.161:2379"}
	{"level":"info","ts":"2024-09-24T01:10:39.936202Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9aa7cd058091608f","local-member-id":"3fbdf04b5b0eb504","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T01:10:39.936298Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T01:10:39.936343Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T01:10:39.936578Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-24T01:10:39.938682Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-24T01:10:39.939564Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-24T01:10:39.942096Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-24T01:10:39.947587Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 01:19:58 up 15 min,  0 users,  load average: 0.14, 0.23, 0.21
	Linux no-preload-674057 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c7b997752647b18a8adc4d689497d720e213df1ae0b65d5be49b0bb34cd09b1f] <==
	W0924 01:10:34.661096       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:10:34.770542       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:10:34.797509       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:10:34.836111       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:10:34.874696       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:10:34.899436       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:10:34.942687       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:10:35.010921       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:10:35.057684       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:10:35.062379       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:10:35.155977       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:10:35.181649       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:10:35.189376       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:10:35.204291       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:10:35.246814       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:10:35.262591       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:10:35.262822       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:10:35.295414       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:10:35.316840       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:10:35.441955       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:10:35.666436       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:10:35.918016       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:10:35.939316       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:10:35.949668       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:10:36.175893       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [e73012bfbb327d6a0aded70e257513b7f40ed40bd11d289f20a1bfcdcbf97ab7] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0924 01:15:42.969561       1 handler_proxy.go:99] no RequestInfo found in the context
	E0924 01:15:42.969630       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0924 01:15:42.970684       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0924 01:15:42.970724       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0924 01:16:42.971523       1 handler_proxy.go:99] no RequestInfo found in the context
	E0924 01:16:42.971590       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0924 01:16:42.971531       1 handler_proxy.go:99] no RequestInfo found in the context
	E0924 01:16:42.971646       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0924 01:16:42.972976       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0924 01:16:42.973052       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0924 01:18:42.974081       1 handler_proxy.go:99] no RequestInfo found in the context
	W0924 01:18:42.974110       1 handler_proxy.go:99] no RequestInfo found in the context
	E0924 01:18:42.974557       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0924 01:18:42.974644       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0924 01:18:42.975802       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0924 01:18:42.975936       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [617bb5bd9dd235df5ca90567c94ee2c487962b8737e1819dc58c34920fd9d6d7] <==
	E0924 01:14:48.866129       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:14:49.428257       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 01:15:18.872343       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:15:19.437819       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 01:15:48.879734       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:15:49.445552       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0924 01:16:00.886388       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-674057"
	E0924 01:16:18.885776       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:16:19.454211       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 01:16:48.892662       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:16:49.464179       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0924 01:16:55.695086       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="290.874µs"
	I0924 01:17:08.697175       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="199.899µs"
	E0924 01:17:18.899539       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:17:19.471931       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 01:17:48.906292       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:17:49.481770       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 01:18:18.913231       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:18:19.489993       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 01:18:48.919806       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:18:49.498766       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 01:19:18.926916       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:19:19.506550       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 01:19:48.934822       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:19:49.514102       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [744c86dbbd3bf9e31e8873c6c7d05e0ac40c341d2a7c78069d5bce6b9aba1189] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0924 01:10:51.143123       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0924 01:10:51.194686       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.161"]
	E0924 01:10:51.194780       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0924 01:10:51.356189       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0924 01:10:51.356234       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0924 01:10:51.356258       1 server_linux.go:169] "Using iptables Proxier"
	I0924 01:10:51.401797       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0924 01:10:51.402155       1 server.go:483] "Version info" version="v1.31.1"
	I0924 01:10:51.402180       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 01:10:51.404004       1 config.go:199] "Starting service config controller"
	I0924 01:10:51.404113       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0924 01:10:51.404147       1 config.go:105] "Starting endpoint slice config controller"
	I0924 01:10:51.404164       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0924 01:10:51.405810       1 config.go:328] "Starting node config controller"
	I0924 01:10:51.405844       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0924 01:10:51.504862       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0924 01:10:51.504914       1 shared_informer.go:320] Caches are synced for service config
	I0924 01:10:51.506411       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0e6059401f3f315dc3255a45dc661eab16b66ea2e700e0b53a186b2cf0aa08a8] <==
	W0924 01:10:42.004614       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0924 01:10:42.004639       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 01:10:42.004803       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0924 01:10:42.004867       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0924 01:10:42.004990       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0924 01:10:42.005059       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 01:10:42.852145       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0924 01:10:42.852193       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0924 01:10:42.894455       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0924 01:10:42.894508       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 01:10:42.912058       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0924 01:10:42.912109       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0924 01:10:42.953982       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0924 01:10:42.954268       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 01:10:42.974165       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0924 01:10:42.974276       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0924 01:10:42.999085       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0924 01:10:42.999267       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 01:10:43.067100       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0924 01:10:43.067217       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0924 01:10:43.309997       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0924 01:10:43.310878       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0924 01:10:43.332218       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0924 01:10:43.332271       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0924 01:10:45.472715       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 24 01:18:44 no-preload-674057 kubelet[3454]: E0924 01:18:44.830088    3454 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140724829616551,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:18:54 no-preload-674057 kubelet[3454]: E0924 01:18:54.832333    3454 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140734832002582,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:18:54 no-preload-674057 kubelet[3454]: E0924 01:18:54.833158    3454 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140734832002582,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:18:58 no-preload-674057 kubelet[3454]: E0924 01:18:58.680840    3454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-w5j2x" podUID="57fd868f-ab5c-495a-869a-45e8f81f4014"
	Sep 24 01:19:04 no-preload-674057 kubelet[3454]: E0924 01:19:04.834787    3454 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140744834329068,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:19:04 no-preload-674057 kubelet[3454]: E0924 01:19:04.834856    3454 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140744834329068,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:19:12 no-preload-674057 kubelet[3454]: E0924 01:19:12.680432    3454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-w5j2x" podUID="57fd868f-ab5c-495a-869a-45e8f81f4014"
	Sep 24 01:19:14 no-preload-674057 kubelet[3454]: E0924 01:19:14.838270    3454 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140754837864287,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:19:14 no-preload-674057 kubelet[3454]: E0924 01:19:14.838333    3454 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140754837864287,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:19:24 no-preload-674057 kubelet[3454]: E0924 01:19:24.841742    3454 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140764839617445,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:19:24 no-preload-674057 kubelet[3454]: E0924 01:19:24.842275    3454 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140764839617445,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:19:26 no-preload-674057 kubelet[3454]: E0924 01:19:26.680489    3454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-w5j2x" podUID="57fd868f-ab5c-495a-869a-45e8f81f4014"
	Sep 24 01:19:34 no-preload-674057 kubelet[3454]: E0924 01:19:34.845179    3454 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140774844730507,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:19:34 no-preload-674057 kubelet[3454]: E0924 01:19:34.845268    3454 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140774844730507,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:19:41 no-preload-674057 kubelet[3454]: E0924 01:19:41.680876    3454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-w5j2x" podUID="57fd868f-ab5c-495a-869a-45e8f81f4014"
	Sep 24 01:19:44 no-preload-674057 kubelet[3454]: E0924 01:19:44.720085    3454 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 24 01:19:44 no-preload-674057 kubelet[3454]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 24 01:19:44 no-preload-674057 kubelet[3454]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 24 01:19:44 no-preload-674057 kubelet[3454]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 24 01:19:44 no-preload-674057 kubelet[3454]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 24 01:19:44 no-preload-674057 kubelet[3454]: E0924 01:19:44.846446    3454 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140784846170522,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:19:44 no-preload-674057 kubelet[3454]: E0924 01:19:44.846529    3454 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140784846170522,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:19:52 no-preload-674057 kubelet[3454]: E0924 01:19:52.680513    3454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-w5j2x" podUID="57fd868f-ab5c-495a-869a-45e8f81f4014"
	Sep 24 01:19:54 no-preload-674057 kubelet[3454]: E0924 01:19:54.847836    3454 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140794847513913,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:19:54 no-preload-674057 kubelet[3454]: E0924 01:19:54.847861    3454 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140794847513913,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [edf08e56311a79e955d8c3b3e5c0237e909241ae5ed6abafb9b223a0f00c867a] <==
	I0924 01:10:52.097886       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0924 01:10:52.110113       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0924 01:10:52.110199       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0924 01:10:52.145284       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0924 01:10:52.145438       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-674057_e6022092-b597-4237-8623-89f31e133c06!
	I0924 01:10:52.146662       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"23dbfa5e-f111-467a-8bd0-0b4f1c87cad7", APIVersion:"v1", ResourceVersion:"432", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-674057_e6022092-b597-4237-8623-89f31e133c06 became leader
	I0924 01:10:52.245594       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-674057_e6022092-b597-4237-8623-89f31e133c06!
	

                                                
                                                
-- /stdout --
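Two recurring kubelet errors dominate the captured log above: the metrics-server ImagePullBackOff against fake.domain/registry.k8s.io/echoserver:1.4 (these tests point metrics-server at a deliberately unreachable fake.domain registry), and the iptables canary failure, which suggests the guest kernel has no ip6tables nat table available. A hedged diagnostic sketch for the latter, assuming a minikube binary on PATH, that "minikube ssh -- <cmd>" runs a one-off command in the guest, and a placeholder profile name:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Placeholder; the run captured above used the no-preload-674057 profile.
	const profile = "no-preload-PLACEHOLDER"

	// List the guest's loaded kernel modules; a missing ip6table_nat module
	// is consistent with the "can't initialize ip6tables table `nat'" error
	// the kubelet canary reports.
	out, err := exec.Command("minikube", "-p", profile, "ssh", "--", "lsmod").CombinedOutput()
	if err != nil {
		fmt.Printf("minikube ssh failed: %v\n%s", err, out)
		return
	}
	if strings.Contains(string(out), "ip6table_nat") {
		fmt.Println("ip6table_nat is loaded")
	} else {
		fmt.Println("ip6table_nat is not loaded, matching the kubelet canary error")
	}
}
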
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-674057 -n no-preload-674057
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-674057 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-w5j2x
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-674057 describe pod metrics-server-6867b74b74-w5j2x
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-674057 describe pod metrics-server-6867b74b74-w5j2x: exit status 1 (66.373558ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-w5j2x" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-674057 describe pod metrics-server-6867b74b74-w5j2x: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.38s)
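For reference, the post-mortem above reduces to three probes: the profile's API-server status, a field-selector query for non-running pods, and a describe of each hit. The sketch below replays that flow with plain kubectl/minikube invocations; it is an illustration of the sequence, not the helpers_test.go code itself, and it assumes a minikube binary on PATH (instead of out/minikube-linux-amd64) plus a placeholder profile name.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run executes a command and returns its combined output, logging any error.
func run(name string, args ...string) string {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		fmt.Printf("%s %s: %v\n", name, strings.Join(args, " "), err)
	}
	return string(out)
}

func main() {
	const profile = "no-preload-PLACEHOLDER" // placeholder profile/context name

	// 1. API-server status for the profile.
	fmt.Print(run("minikube", "status", "--format", "{{.APIServer}}", "-p", profile))

	// 2. Names of pods that are not Running, across all namespaces.
	names := run("kubectl", "--context", profile, "get", "po", "-A",
		"--field-selector=status.phase!=Running",
		"-o", "jsonpath={.items[*].metadata.name}")

	// 3. Describe each non-running pod; this can return NotFound if the pod
	//    was deleted in the meantime, as happened with
	//    metrics-server-6867b74b74-w5j2x above.
	for _, name := range strings.Fields(names) {
		fmt.Print(run("kubectl", "--context", profile, "describe", "pod", name))
	}
}
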

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.59s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
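The 9m0s wait above is a label-selector poll against the profile's API server, and every failed round trip is reported as one of the connection-refused WARNING lines that follow. A minimal, self-contained sketch of that kind of wait loop (an illustration only, not the minikube helpers themselves; the context name is a placeholder) is:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Placeholder context/profile name; the real run uses a generated
	// old-k8s-version-<id> profile.
	const context = "old-k8s-version-PLACEHOLDER"
	deadline := time.Now().Add(9 * time.Minute)
	for time.Now().Before(deadline) {
		// Same query the helper issues: dashboard pods selected by label.
		out, err := exec.Command("kubectl", "--context", context,
			"-n", "kubernetes-dashboard", "get", "pods",
			"-l", "k8s-app=kubernetes-dashboard", "-o", "name").CombinedOutput()
		if err == nil && len(out) > 0 {
			fmt.Printf("dashboard pods present:\n%s", out)
			return
		}
		// While the API server is down or restarting, this branch fires and
		// produces a warning like the ones captured below.
		fmt.Printf("WARNING: pod list failed: %v\n", err)
		time.Sleep(10 * time.Second)
	}
	fmt.Println("timed out waiting for kubernetes-dashboard pods")
}
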
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
[the identical WARNING above was emitted on every poll while the API server at 192.168.83.3:8443 stayed unreachable; duplicate lines omitted]
E0924 01:13:38.361638   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/functional-666615/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: (the WARNING above was emitted 55 more times, verbatim, while the apiserver at 192.168.83.3:8443 kept refusing connections)
E0924 01:15:43.333060   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
E0924 01:18:38.362308   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/functional-666615/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
E0924 01:18:46.413834   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
E0924 01:20:43.332579   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
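The warnings above show that every poll against the apiserver at 192.168.83.3:8443 was refused until the client-side rate limiter finally gave up. A minimal manual reachability check, sketched from the endpoint and profile name appearing in this log (and assuming /healthz is anonymously readable, as it is on a default minikube apiserver), would be:

	out/minikube-linux-amd64 status -p old-k8s-version-171598 --format={{.APIServer}}
	curl -k https://192.168.83.3:8443/healthz   # prints "ok" once the apiserver is reachable again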
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-171598 -n old-k8s-version-171598
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-171598 -n old-k8s-version-171598: exit status 2 (227.688254ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-171598" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
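Once the apiserver answers, the wait that timed out here can be reproduced by hand against the same namespace and label selector; the kubectl context name below is only assumed to match the minikube profile name from this run:

	kubectl --context old-k8s-version-171598 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context old-k8s-version-171598 -n kubernetes-dashboard wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m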
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-171598 -n old-k8s-version-171598
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-171598 -n old-k8s-version-171598: exit status 2 (228.907012ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-171598 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-171598 logs -n 25: (1.695067252s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p old-k8s-version-171598                              | old-k8s-version-171598       | jenkins | v1.34.0 | 24 Sep 24 00:54 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-075175                              | stopped-upgrade-075175       | jenkins | v1.34.0 | 24 Sep 24 00:54 UTC | 24 Sep 24 00:55 UTC |
	| start   | -p no-preload-674057                                   | no-preload-674057            | jenkins | v1.34.0 | 24 Sep 24 00:55 UTC | 24 Sep 24 00:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-619300                           | kubernetes-upgrade-619300    | jenkins | v1.34.0 | 24 Sep 24 00:55 UTC | 24 Sep 24 00:55 UTC |
	| start   | -p embed-certs-650507                                  | embed-certs-650507           | jenkins | v1.34.0 | 24 Sep 24 00:55 UTC | 24 Sep 24 00:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-811247                              | cert-expiration-811247       | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC | 24 Sep 24 00:56 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-674057             | no-preload-674057            | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC | 24 Sep 24 00:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-674057                                   | no-preload-674057            | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-811247                              | cert-expiration-811247       | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC | 24 Sep 24 00:56 UTC |
	| delete  | -p                                                     | disable-driver-mounts-319683 | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC | 24 Sep 24 00:56 UTC |
	|         | disable-driver-mounts-319683                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-465341 | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC | 24 Sep 24 00:57 UTC |
	|         | default-k8s-diff-port-465341                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-650507            | embed-certs-650507           | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC | 24 Sep 24 00:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-650507                                  | embed-certs-650507           | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-465341  | default-k8s-diff-port-465341 | jenkins | v1.34.0 | 24 Sep 24 00:57 UTC | 24 Sep 24 00:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-465341 | jenkins | v1.34.0 | 24 Sep 24 00:57 UTC |                     |
	|         | default-k8s-diff-port-465341                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-674057                  | no-preload-674057            | jenkins | v1.34.0 | 24 Sep 24 00:58 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-674057                                   | no-preload-674057            | jenkins | v1.34.0 | 24 Sep 24 00:58 UTC | 24 Sep 24 01:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-650507                 | embed-certs-650507           | jenkins | v1.34.0 | 24 Sep 24 00:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-171598        | old-k8s-version-171598       | jenkins | v1.34.0 | 24 Sep 24 00:59 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-650507                                  | embed-certs-650507           | jenkins | v1.34.0 | 24 Sep 24 00:59 UTC | 24 Sep 24 01:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-465341       | default-k8s-diff-port-465341 | jenkins | v1.34.0 | 24 Sep 24 00:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-465341 | jenkins | v1.34.0 | 24 Sep 24 01:00 UTC | 24 Sep 24 01:08 UTC |
	|         | default-k8s-diff-port-465341                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-171598                              | old-k8s-version-171598       | jenkins | v1.34.0 | 24 Sep 24 01:00 UTC | 24 Sep 24 01:00 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-171598             | old-k8s-version-171598       | jenkins | v1.34.0 | 24 Sep 24 01:00 UTC | 24 Sep 24 01:00 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-171598                              | old-k8s-version-171598       | jenkins | v1.34.0 | 24 Sep 24 01:00 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 01:00:40
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 01:00:40.983605   61989 out.go:345] Setting OutFile to fd 1 ...
	I0924 01:00:40.983716   61989 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 01:00:40.983722   61989 out.go:358] Setting ErrFile to fd 2...
	I0924 01:00:40.983728   61989 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 01:00:40.983918   61989 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7623/.minikube/bin
	I0924 01:00:40.984500   61989 out.go:352] Setting JSON to false
	I0924 01:00:40.985412   61989 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6185,"bootTime":1727133456,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0924 01:00:40.985513   61989 start.go:139] virtualization: kvm guest
	I0924 01:00:40.987848   61989 out.go:177] * [old-k8s-version-171598] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0924 01:00:40.989366   61989 out.go:177]   - MINIKUBE_LOCATION=19696
	I0924 01:00:40.989467   61989 notify.go:220] Checking for updates...
	I0924 01:00:40.992462   61989 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 01:00:40.994144   61989 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 01:00:40.995782   61989 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7623/.minikube
	I0924 01:00:40.997503   61989 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0924 01:00:40.999038   61989 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 01:00:41.000959   61989 config.go:182] Loaded profile config "old-k8s-version-171598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0924 01:00:41.001315   61989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:00:41.001388   61989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:00:41.017304   61989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41055
	I0924 01:00:41.017751   61989 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:00:41.018320   61989 main.go:141] libmachine: Using API Version  1
	I0924 01:00:41.018355   61989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:00:41.018708   61989 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:00:41.018964   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:00:41.021075   61989 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0924 01:00:41.022764   61989 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 01:00:41.023156   61989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:00:41.023204   61989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:00:41.038764   61989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40545
	I0924 01:00:41.039238   61989 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:00:41.039828   61989 main.go:141] libmachine: Using API Version  1
	I0924 01:00:41.039856   61989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:00:41.040272   61989 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:00:41.040569   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:00:41.078622   61989 out.go:177] * Using the kvm2 driver based on existing profile
	I0924 01:00:41.079930   61989 start.go:297] selected driver: kvm2
	I0924 01:00:41.079945   61989 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-171598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-171598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h
0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 01:00:41.080076   61989 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 01:00:41.080841   61989 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 01:00:41.080927   61989 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19696-7623/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0924 01:00:41.096851   61989 install.go:137] /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0924 01:00:41.097306   61989 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 01:00:41.097345   61989 cni.go:84] Creating CNI manager for ""
	I0924 01:00:41.097410   61989 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:00:41.097465   61989 start.go:340] cluster config:
	{Name:old-k8s-version-171598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-171598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 01:00:41.097610   61989 iso.go:125] acquiring lock: {Name:mk8a983e49e920eac32e0f79c34143c3d478115e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 01:00:41.099797   61989 out.go:177] * Starting "old-k8s-version-171598" primary control-plane node in "old-k8s-version-171598" cluster
	I0924 01:00:39.376584   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:00:41.101644   61989 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0924 01:00:41.101691   61989 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0924 01:00:41.101704   61989 cache.go:56] Caching tarball of preloaded images
	I0924 01:00:41.101801   61989 preload.go:172] Found /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0924 01:00:41.101816   61989 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0924 01:00:41.101922   61989 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/config.json ...
	I0924 01:00:41.102126   61989 start.go:360] acquireMachinesLock for old-k8s-version-171598: {Name:mkdd0eb053efc5a47fd01c8410d7f603dccb8d0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 01:00:45.456606   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:00:48.528618   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:00:54.608639   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:00:57.680645   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:03.760641   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:06.832676   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:12.912635   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:15.984629   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:22.064669   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:25.136609   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:31.216643   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:34.288667   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:40.368636   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:43.440700   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:49.520634   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:52.592658   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:01:58.672637   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:01.744679   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:07.824597   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:10.896693   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:16.976656   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:20.048675   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:26.128638   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:29.200595   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:35.280645   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:38.352665   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:44.432606   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:47.504721   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:53.584645   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:02:56.656617   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:03:02.736686   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:03:05.808671   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:03:11.888586   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:03:14.960688   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:03:21.040639   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:03:24.112705   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:03:30.192631   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:03:33.264655   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:03:36.269218   61323 start.go:364] duration metric: took 4m25.932369998s to acquireMachinesLock for "embed-certs-650507"
	I0924 01:03:36.269290   61323 start.go:96] Skipping create...Using existing machine configuration
	I0924 01:03:36.269298   61323 fix.go:54] fixHost starting: 
	I0924 01:03:36.269661   61323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:03:36.269714   61323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:03:36.285429   61323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45085
	I0924 01:03:36.285943   61323 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:03:36.286516   61323 main.go:141] libmachine: Using API Version  1
	I0924 01:03:36.286557   61323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:03:36.286885   61323 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:03:36.287078   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:03:36.287213   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetState
	I0924 01:03:36.288895   61323 fix.go:112] recreateIfNeeded on embed-certs-650507: state=Stopped err=<nil>
	I0924 01:03:36.288917   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	W0924 01:03:36.289113   61323 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 01:03:36.291435   61323 out.go:177] * Restarting existing kvm2 VM for "embed-certs-650507" ...
	I0924 01:03:36.266390   61070 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 01:03:36.266435   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetMachineName
	I0924 01:03:36.266788   61070 buildroot.go:166] provisioning hostname "no-preload-674057"
	I0924 01:03:36.266816   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetMachineName
	I0924 01:03:36.267022   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:03:36.269105   61070 machine.go:96] duration metric: took 4m37.426687547s to provisionDockerMachine
	I0924 01:03:36.269142   61070 fix.go:56] duration metric: took 4m37.448766856s for fixHost
	I0924 01:03:36.269148   61070 start.go:83] releasing machines lock for "no-preload-674057", held for 4m37.448847609s
	W0924 01:03:36.269167   61070 start.go:714] error starting host: provision: host is not running
	W0924 01:03:36.269264   61070 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0924 01:03:36.269274   61070 start.go:729] Will try again in 5 seconds ...
	I0924 01:03:36.293006   61323 main.go:141] libmachine: (embed-certs-650507) Calling .Start
	I0924 01:03:36.293199   61323 main.go:141] libmachine: (embed-certs-650507) Ensuring networks are active...
	I0924 01:03:36.294032   61323 main.go:141] libmachine: (embed-certs-650507) Ensuring network default is active
	I0924 01:03:36.294359   61323 main.go:141] libmachine: (embed-certs-650507) Ensuring network mk-embed-certs-650507 is active
	I0924 01:03:36.294718   61323 main.go:141] libmachine: (embed-certs-650507) Getting domain xml...
	I0924 01:03:36.295407   61323 main.go:141] libmachine: (embed-certs-650507) Creating domain...
	I0924 01:03:37.516049   61323 main.go:141] libmachine: (embed-certs-650507) Waiting to get IP...
	I0924 01:03:37.516959   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:37.517374   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:37.517443   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:37.517352   62594 retry.go:31] will retry after 278.072635ms: waiting for machine to come up
	I0924 01:03:37.796796   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:37.797276   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:37.797301   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:37.797242   62594 retry.go:31] will retry after 387.413297ms: waiting for machine to come up
	I0924 01:03:38.185869   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:38.186239   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:38.186258   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:38.186193   62594 retry.go:31] will retry after 363.798568ms: waiting for machine to come up
	I0924 01:03:38.551772   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:38.552181   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:38.552221   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:38.552122   62594 retry.go:31] will retry after 392.798012ms: waiting for machine to come up
	I0924 01:03:38.946523   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:38.947069   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:38.947097   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:38.947018   62594 retry.go:31] will retry after 541.413772ms: waiting for machine to come up
	I0924 01:03:39.489873   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:39.490278   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:39.490307   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:39.490226   62594 retry.go:31] will retry after 804.62107ms: waiting for machine to come up
	I0924 01:03:41.271024   61070 start.go:360] acquireMachinesLock for no-preload-674057: {Name:mkdd0eb053efc5a47fd01c8410d7f603dccb8d0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 01:03:40.296290   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:40.296775   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:40.296806   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:40.296726   62594 retry.go:31] will retry after 882.018637ms: waiting for machine to come up
	I0924 01:03:41.180799   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:41.181242   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:41.181263   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:41.181197   62594 retry.go:31] will retry after 961.194045ms: waiting for machine to come up
	I0924 01:03:42.143878   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:42.144354   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:42.144379   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:42.144270   62594 retry.go:31] will retry after 1.647837023s: waiting for machine to come up
	I0924 01:03:43.793458   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:43.793892   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:43.793933   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:43.793873   62594 retry.go:31] will retry after 1.751902059s: waiting for machine to come up
	I0924 01:03:45.547905   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:45.548356   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:45.548388   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:45.548313   62594 retry.go:31] will retry after 2.380106471s: waiting for machine to come up
	I0924 01:03:47.931021   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:47.931513   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:47.931537   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:47.931456   62594 retry.go:31] will retry after 2.395516641s: waiting for machine to come up
	I0924 01:03:50.328214   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:50.328766   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:50.328791   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:50.328729   62594 retry.go:31] will retry after 4.41219579s: waiting for machine to come up
	I0924 01:03:54.745159   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.745572   61323 main.go:141] libmachine: (embed-certs-650507) Found IP for machine: 192.168.39.104
	I0924 01:03:54.745606   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has current primary IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.745615   61323 main.go:141] libmachine: (embed-certs-650507) Reserving static IP address...
	I0924 01:03:54.746020   61323 main.go:141] libmachine: (embed-certs-650507) Reserved static IP address: 192.168.39.104
	I0924 01:03:54.746042   61323 main.go:141] libmachine: (embed-certs-650507) Waiting for SSH to be available...
	I0924 01:03:54.746067   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "embed-certs-650507", mac: "52:54:00:46:07:2d", ip: "192.168.39.104"} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:54.746134   61323 main.go:141] libmachine: (embed-certs-650507) DBG | skip adding static IP to network mk-embed-certs-650507 - found existing host DHCP lease matching {name: "embed-certs-650507", mac: "52:54:00:46:07:2d", ip: "192.168.39.104"}
	I0924 01:03:54.746159   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Getting to WaitForSSH function...
	I0924 01:03:54.748464   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.748871   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:54.748906   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.749083   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Using SSH client type: external
	I0924 01:03:54.749118   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Using SSH private key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa (-rw-------)
	I0924 01:03:54.749153   61323 main.go:141] libmachine: (embed-certs-650507) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 01:03:54.749165   61323 main.go:141] libmachine: (embed-certs-650507) DBG | About to run SSH command:
	I0924 01:03:54.749177   61323 main.go:141] libmachine: (embed-certs-650507) DBG | exit 0
	I0924 01:03:54.872532   61323 main.go:141] libmachine: (embed-certs-650507) DBG | SSH cmd err, output: <nil>: 
	I0924 01:03:54.872869   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetConfigRaw
	I0924 01:03:54.873480   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetIP
	I0924 01:03:54.876545   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.876922   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:54.876953   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.877204   61323 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/embed-certs-650507/config.json ...
	I0924 01:03:54.877443   61323 machine.go:93] provisionDockerMachine start ...
	I0924 01:03:54.877467   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:03:54.877683   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:54.879873   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.880200   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:54.880221   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.880375   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:03:54.880546   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:54.880681   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:54.880866   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:03:54.881002   61323 main.go:141] libmachine: Using SSH client type: native
	I0924 01:03:54.881194   61323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0924 01:03:54.881207   61323 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 01:03:54.984605   61323 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 01:03:54.984636   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetMachineName
	I0924 01:03:54.984922   61323 buildroot.go:166] provisioning hostname "embed-certs-650507"
	I0924 01:03:54.984948   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetMachineName
	I0924 01:03:54.985185   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:54.988284   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.988699   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:54.988725   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.988857   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:03:54.989069   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:54.989344   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:54.989529   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:03:54.989731   61323 main.go:141] libmachine: Using SSH client type: native
	I0924 01:03:54.989899   61323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0924 01:03:54.989913   61323 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-650507 && echo "embed-certs-650507" | sudo tee /etc/hostname
	I0924 01:03:55.106214   61323 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-650507
	
	I0924 01:03:55.106273   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:55.109000   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.109310   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.109334   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.109498   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:03:55.109646   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.109839   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.109989   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:03:55.110123   61323 main.go:141] libmachine: Using SSH client type: native
	I0924 01:03:55.110303   61323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0924 01:03:55.110318   61323 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-650507' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-650507/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-650507' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 01:03:55.220699   61323 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 01:03:55.220738   61323 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19696-7623/.minikube CaCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19696-7623/.minikube}
	I0924 01:03:55.220755   61323 buildroot.go:174] setting up certificates
	I0924 01:03:55.220763   61323 provision.go:84] configureAuth start
	I0924 01:03:55.220771   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetMachineName
	I0924 01:03:55.221112   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetIP
	I0924 01:03:55.224166   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.224603   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.224634   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.224839   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:55.226847   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.227167   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.227194   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.227308   61323 provision.go:143] copyHostCerts
	I0924 01:03:55.227386   61323 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem, removing ...
	I0924 01:03:55.227409   61323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0924 01:03:55.227490   61323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem (1082 bytes)
	I0924 01:03:55.227641   61323 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem, removing ...
	I0924 01:03:55.227653   61323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0924 01:03:55.227695   61323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem (1123 bytes)
	I0924 01:03:55.227781   61323 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem, removing ...
	I0924 01:03:55.227791   61323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0924 01:03:55.227826   61323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem (1679 bytes)
	I0924 01:03:55.227909   61323 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem org=jenkins.embed-certs-650507 san=[127.0.0.1 192.168.39.104 embed-certs-650507 localhost minikube]
	I0924 01:03:55.917061   61699 start.go:364] duration metric: took 3m46.693519233s to acquireMachinesLock for "default-k8s-diff-port-465341"
	I0924 01:03:55.917135   61699 start.go:96] Skipping create...Using existing machine configuration
	I0924 01:03:55.917144   61699 fix.go:54] fixHost starting: 
	I0924 01:03:55.917553   61699 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:03:55.917606   61699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:03:55.937566   61699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37613
	I0924 01:03:55.937971   61699 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:03:55.938529   61699 main.go:141] libmachine: Using API Version  1
	I0924 01:03:55.938556   61699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:03:55.938923   61699 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:03:55.939182   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:03:55.939365   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetState
	I0924 01:03:55.941155   61699 fix.go:112] recreateIfNeeded on default-k8s-diff-port-465341: state=Stopped err=<nil>
	I0924 01:03:55.941197   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	W0924 01:03:55.941417   61699 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 01:03:55.943640   61699 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-465341" ...
	I0924 01:03:55.309866   61323 provision.go:177] copyRemoteCerts
	I0924 01:03:55.309928   61323 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 01:03:55.309955   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:55.312946   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.313365   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.313388   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.313638   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:03:55.313889   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.314062   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:03:55.314206   61323 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa Username:docker}
	I0924 01:03:55.394427   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0924 01:03:55.420595   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0924 01:03:55.444377   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 01:03:55.467261   61323 provision.go:87] duration metric: took 246.485242ms to configureAuth
	I0924 01:03:55.467302   61323 buildroot.go:189] setting minikube options for container-runtime
	I0924 01:03:55.467483   61323 config.go:182] Loaded profile config "embed-certs-650507": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 01:03:55.467552   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:55.470146   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.470539   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.470572   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.470719   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:03:55.470961   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.471101   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.471299   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:03:55.471450   61323 main.go:141] libmachine: Using SSH client type: native
	I0924 01:03:55.471653   61323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0924 01:03:55.471676   61323 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 01:03:55.688189   61323 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 01:03:55.688218   61323 machine.go:96] duration metric: took 810.761675ms to provisionDockerMachine
	I0924 01:03:55.688230   61323 start.go:293] postStartSetup for "embed-certs-650507" (driver="kvm2")
	I0924 01:03:55.688244   61323 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 01:03:55.688266   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:03:55.688659   61323 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 01:03:55.688690   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:55.691375   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.691761   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.691791   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.691881   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:03:55.692105   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.692309   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:03:55.692453   61323 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa Username:docker}
	I0924 01:03:55.775412   61323 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 01:03:55.779423   61323 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 01:03:55.779448   61323 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/addons for local assets ...
	I0924 01:03:55.779536   61323 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/files for local assets ...
	I0924 01:03:55.779629   61323 filesync.go:149] local asset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> 147932.pem in /etc/ssl/certs
	I0924 01:03:55.779742   61323 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 01:03:55.788717   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /etc/ssl/certs/147932.pem (1708 bytes)
	I0924 01:03:55.811673   61323 start.go:296] duration metric: took 123.428914ms for postStartSetup
	I0924 01:03:55.811717   61323 fix.go:56] duration metric: took 19.542419045s for fixHost
	I0924 01:03:55.811743   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:55.814745   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.815034   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.815062   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.815247   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:03:55.815449   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.815634   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.815851   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:03:55.816012   61323 main.go:141] libmachine: Using SSH client type: native
	I0924 01:03:55.816168   61323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0924 01:03:55.816178   61323 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 01:03:55.916845   61323 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727139835.894204557
	
	I0924 01:03:55.916883   61323 fix.go:216] guest clock: 1727139835.894204557
	I0924 01:03:55.916896   61323 fix.go:229] Guest: 2024-09-24 01:03:55.894204557 +0000 UTC Remote: 2024-09-24 01:03:55.811721448 +0000 UTC m=+285.612741728 (delta=82.483109ms)
	I0924 01:03:55.916935   61323 fix.go:200] guest clock delta is within tolerance: 82.483109ms
	I0924 01:03:55.916945   61323 start.go:83] releasing machines lock for "embed-certs-650507", held for 19.6476761s
	I0924 01:03:55.916990   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:03:55.917314   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetIP
	I0924 01:03:55.920105   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.920550   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.920583   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.920832   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:03:55.921327   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:03:55.921510   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:03:55.921578   61323 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 01:03:55.921634   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:55.921747   61323 ssh_runner.go:195] Run: cat /version.json
	I0924 01:03:55.921771   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:55.924238   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.924430   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.924717   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.924741   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.924775   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.924792   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.924953   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:03:55.925061   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:03:55.925153   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.925277   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.925360   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:03:55.925439   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:03:55.925582   61323 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa Username:docker}
	I0924 01:03:55.925626   61323 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa Username:docker}
	I0924 01:03:56.005229   61323 ssh_runner.go:195] Run: systemctl --version
	I0924 01:03:56.046189   61323 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 01:03:56.187701   61323 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 01:03:56.193313   61323 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 01:03:56.193379   61323 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 01:03:56.209278   61323 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 01:03:56.209298   61323 start.go:495] detecting cgroup driver to use...
	I0924 01:03:56.209363   61323 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 01:03:56.226995   61323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 01:03:56.241102   61323 docker.go:217] disabling cri-docker service (if available) ...
	I0924 01:03:56.241160   61323 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 01:03:56.255002   61323 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 01:03:56.269805   61323 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 01:03:56.387382   61323 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 01:03:56.545138   61323 docker.go:233] disabling docker service ...
	I0924 01:03:56.545220   61323 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 01:03:56.559017   61323 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 01:03:56.571939   61323 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 01:03:56.694139   61323 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 01:03:56.811253   61323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 01:03:56.825480   61323 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 01:03:56.842777   61323 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 01:03:56.842830   61323 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:03:56.852387   61323 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 01:03:56.852447   61323 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:03:56.862702   61323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:03:56.872790   61323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:03:56.882864   61323 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 01:03:56.893029   61323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:03:56.903314   61323 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:03:56.923491   61323 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:03:56.933424   61323 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 01:03:56.944496   61323 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 01:03:56.944561   61323 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 01:03:56.957077   61323 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 01:03:56.968602   61323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:03:57.080955   61323 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 01:03:57.179826   61323 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 01:03:57.179900   61323 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 01:03:57.184652   61323 start.go:563] Will wait 60s for crictl version
	I0924 01:03:57.184716   61323 ssh_runner.go:195] Run: which crictl
	I0924 01:03:57.190300   61323 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 01:03:57.239310   61323 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 01:03:57.239371   61323 ssh_runner.go:195] Run: crio --version
	I0924 01:03:57.266833   61323 ssh_runner.go:195] Run: crio --version
	I0924 01:03:57.301876   61323 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 01:03:55.945290   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .Start
	I0924 01:03:55.945498   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Ensuring networks are active...
	I0924 01:03:55.946346   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Ensuring network default is active
	I0924 01:03:55.946726   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Ensuring network mk-default-k8s-diff-port-465341 is active
	I0924 01:03:55.947152   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Getting domain xml...
	I0924 01:03:55.947872   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Creating domain...
	I0924 01:03:57.236194   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting to get IP...
	I0924 01:03:57.237037   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:03:57.237445   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:03:57.237497   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:03:57.237413   62713 retry.go:31] will retry after 286.244795ms: waiting for machine to come up
	I0924 01:03:57.525009   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:03:57.525595   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:03:57.525621   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:03:57.525548   62713 retry.go:31] will retry after 273.807213ms: waiting for machine to come up
	I0924 01:03:57.801217   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:03:57.801734   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:03:57.801756   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:03:57.801701   62713 retry.go:31] will retry after 371.291567ms: waiting for machine to come up
	I0924 01:03:58.174283   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:03:58.174746   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:03:58.174781   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:03:58.174692   62713 retry.go:31] will retry after 595.157579ms: waiting for machine to come up
	I0924 01:03:58.771428   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:03:58.771900   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:03:58.771925   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:03:58.771862   62713 retry.go:31] will retry after 734.305784ms: waiting for machine to come up
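The retry.go lines above are the kvm2 driver polling libvirt until the freshly started domain obtains a DHCP lease on its private network. A rough manual equivalent using plain virsh (the driver itself goes through the libvirt API rather than the CLI, so this is only a sketch):

    # poll until the domain reports an IPv4 address, then inspect the lease
    until virsh -c qemu:///system domifaddr default-k8s-diff-port-465341 | grep -q ipv4; do
      echo "waiting for machine to come up..."; sleep 1
    done
    virsh -c qemu:///system net-dhcp-leases mk-default-k8s-diff-port-465341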
	I0924 01:03:57.303135   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetIP
	I0924 01:03:57.306110   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:57.306598   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:57.306624   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:57.306783   61323 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0924 01:03:57.310829   61323 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:03:57.322605   61323 kubeadm.go:883] updating cluster {Name:embed-certs-650507 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-650507 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 01:03:57.322715   61323 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 01:03:57.322761   61323 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 01:03:57.358040   61323 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0924 01:03:57.358104   61323 ssh_runner.go:195] Run: which lz4
	I0924 01:03:57.361948   61323 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 01:03:57.365911   61323 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 01:03:57.365950   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0924 01:03:58.651636   61323 crio.go:462] duration metric: took 1.289721413s to copy over tarball
	I0924 01:03:58.651708   61323 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
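Preload handling above is a simple check-then-copy: if /preloaded.tar.lz4 is missing on the node, the cached tarball is transferred and unpacked into /var so CRI-O starts with all control-plane images present. A hedged sketch with the same flags (scp here stands in for the ssh_runner transfer the log performs):

    # from the host: copy the cached tarball only if the node does not already have it
    ssh root@192.168.39.104 'stat /preloaded.tar.lz4' >/dev/null 2>&1 || \
      scp .minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 \
          root@192.168.39.104:/preloaded.tar.lz4
    # on the node: unpack the images into /var, preserving security xattrs, then clean up
    ssh root@192.168.39.104 'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm -f /preloaded.tar.lz4'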
	I0924 01:03:59.507803   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:03:59.508308   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:03:59.508356   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:03:59.508237   62713 retry.go:31] will retry after 875.394603ms: waiting for machine to come up
	I0924 01:04:00.385279   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:00.385713   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:04:00.385748   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:04:00.385655   62713 retry.go:31] will retry after 885.980109ms: waiting for machine to come up
	I0924 01:04:01.273114   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:01.273545   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:04:01.273590   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:04:01.273535   62713 retry.go:31] will retry after 935.451975ms: waiting for machine to come up
	I0924 01:04:02.210920   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:02.211399   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:04:02.211423   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:04:02.211331   62713 retry.go:31] will retry after 1.254573538s: waiting for machine to come up
	I0924 01:04:03.467027   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:03.467593   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:04:03.467626   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:04:03.467488   62713 retry.go:31] will retry after 2.044247818s: waiting for machine to come up
	I0924 01:04:00.805580   61323 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.153837858s)
	I0924 01:04:00.805608   61323 crio.go:469] duration metric: took 2.153947595s to extract the tarball
	I0924 01:04:00.805617   61323 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0924 01:04:00.846074   61323 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 01:04:00.895803   61323 crio.go:514] all images are preloaded for cri-o runtime.
	I0924 01:04:00.895833   61323 cache_images.go:84] Images are preloaded, skipping loading
	I0924 01:04:00.895842   61323 kubeadm.go:934] updating node { 192.168.39.104 8443 v1.31.1 crio true true} ...
	I0924 01:04:00.895966   61323 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-650507 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-650507 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 01:04:00.896041   61323 ssh_runner.go:195] Run: crio config
	I0924 01:04:00.941958   61323 cni.go:84] Creating CNI manager for ""
	I0924 01:04:00.941985   61323 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:04:00.941998   61323 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 01:04:00.942029   61323 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.104 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-650507 NodeName:embed-certs-650507 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 01:04:00.942202   61323 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.104
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-650507"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.104
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.104"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 01:04:00.942292   61323 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 01:04:00.952748   61323 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 01:04:00.952853   61323 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 01:04:00.962984   61323 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0924 01:04:00.980030   61323 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 01:04:01.001571   61323 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0924 01:04:01.018760   61323 ssh_runner.go:195] Run: grep 192.168.39.104	control-plane.minikube.internal$ /etc/hosts
	I0924 01:04:01.022770   61323 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.104	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:04:01.034816   61323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:04:01.157888   61323 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 01:04:01.175883   61323 certs.go:68] Setting up /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/embed-certs-650507 for IP: 192.168.39.104
	I0924 01:04:01.175911   61323 certs.go:194] generating shared ca certs ...
	I0924 01:04:01.175937   61323 certs.go:226] acquiring lock for ca certs: {Name:mk91de42c259e96c17f08dd58c70c00821bd0735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:04:01.176134   61323 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key
	I0924 01:04:01.176198   61323 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key
	I0924 01:04:01.176211   61323 certs.go:256] generating profile certs ...
	I0924 01:04:01.176324   61323 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/embed-certs-650507/client.key
	I0924 01:04:01.176441   61323 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/embed-certs-650507/apiserver.key.86682f38
	I0924 01:04:01.176515   61323 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/embed-certs-650507/proxy-client.key
	I0924 01:04:01.176640   61323 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem (1338 bytes)
	W0924 01:04:01.176669   61323 certs.go:480] ignoring /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793_empty.pem, impossibly tiny 0 bytes
	I0924 01:04:01.176678   61323 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem (1675 bytes)
	I0924 01:04:01.176713   61323 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem (1082 bytes)
	I0924 01:04:01.176749   61323 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem (1123 bytes)
	I0924 01:04:01.176778   61323 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem (1679 bytes)
	I0924 01:04:01.176987   61323 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem (1708 bytes)
	I0924 01:04:01.177918   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 01:04:01.221682   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 01:04:01.266005   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 01:04:01.299467   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0924 01:04:01.324598   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/embed-certs-650507/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0924 01:04:01.349526   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/embed-certs-650507/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 01:04:01.385589   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/embed-certs-650507/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 01:04:01.409713   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/embed-certs-650507/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0924 01:04:01.433745   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem --> /usr/share/ca-certificates/14793.pem (1338 bytes)
	I0924 01:04:01.457493   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /usr/share/ca-certificates/147932.pem (1708 bytes)
	I0924 01:04:01.482197   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 01:04:01.505740   61323 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 01:04:01.524029   61323 ssh_runner.go:195] Run: openssl version
	I0924 01:04:01.530147   61323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 01:04:01.541117   61323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:01.545823   61323 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:01.545894   61323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:01.551638   61323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 01:04:01.562373   61323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14793.pem && ln -fs /usr/share/ca-certificates/14793.pem /etc/ssl/certs/14793.pem"
	I0924 01:04:01.573502   61323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14793.pem
	I0924 01:04:01.578561   61323 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 23:55 /usr/share/ca-certificates/14793.pem
	I0924 01:04:01.578634   61323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14793.pem
	I0924 01:04:01.584415   61323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14793.pem /etc/ssl/certs/51391683.0"
	I0924 01:04:01.595312   61323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147932.pem && ln -fs /usr/share/ca-certificates/147932.pem /etc/ssl/certs/147932.pem"
	I0924 01:04:01.606503   61323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147932.pem
	I0924 01:04:01.611530   61323 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 23:55 /usr/share/ca-certificates/147932.pem
	I0924 01:04:01.611602   61323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147932.pem
	I0924 01:04:01.618484   61323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147932.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 01:04:01.629332   61323 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 01:04:01.634238   61323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 01:04:01.640266   61323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 01:04:01.646306   61323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 01:04:01.652510   61323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 01:04:01.658237   61323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 01:04:01.663962   61323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
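The openssl calls above do two things: install each CA into the system trust store under its subject-hash name, and confirm that none of the control-plane certificates expire within the next 24 hours (-checkend 86400 exits non-zero if a certificate expires within that many seconds). A standalone equivalent, using paths from this log:

    # link a CA into the trust store under its subject hash, as the log does
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
    # fail loudly if a cert is within 24h of expiry
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      || echo "apiserver-kubelet-client.crt expires within 24h"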
	I0924 01:04:01.669998   61323 kubeadm.go:392] StartCluster: {Name:embed-certs-650507 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-650507 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 01:04:01.670105   61323 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 01:04:01.670162   61323 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 01:04:01.706478   61323 cri.go:89] found id: ""
	I0924 01:04:01.706555   61323 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 01:04:01.717106   61323 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 01:04:01.717127   61323 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 01:04:01.717188   61323 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 01:04:01.729966   61323 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 01:04:01.730947   61323 kubeconfig.go:125] found "embed-certs-650507" server: "https://192.168.39.104:8443"
	I0924 01:04:01.732933   61323 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 01:04:01.745538   61323 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.104
	I0924 01:04:01.745581   61323 kubeadm.go:1160] stopping kube-system containers ...
	I0924 01:04:01.745594   61323 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0924 01:04:01.745649   61323 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 01:04:01.783313   61323 cri.go:89] found id: ""
	I0924 01:04:01.783423   61323 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 01:04:01.801432   61323 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 01:04:01.811282   61323 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 01:04:01.811308   61323 kubeadm.go:157] found existing configuration files:
	
	I0924 01:04:01.811371   61323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 01:04:01.820717   61323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 01:04:01.820780   61323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 01:04:01.830289   61323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 01:04:01.839383   61323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 01:04:01.839449   61323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 01:04:01.848920   61323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 01:04:01.857986   61323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 01:04:01.858045   61323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 01:04:01.867465   61323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 01:04:01.876598   61323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 01:04:01.876680   61323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 01:04:01.886122   61323 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 01:04:01.896245   61323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:02.004839   61323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:03.077983   61323 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.073104284s)
	I0924 01:04:03.078020   61323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:03.295254   61323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:03.369968   61323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
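Instead of a full kubeadm init, the restart path above replays the individual init phases against the staged config, which regenerates certs, kubeconfigs, static pod manifests and the local etcd without wiping cluster state. The same sequence, condensed (phases, binary path and config path taken from the log):

    # replay the init phases the log runs, in order, against the staged config
    KBIN=/var/lib/minikube/binaries/v1.31.1
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      # $phase is intentionally unquoted so "certs all" expands to two arguments
      sudo env PATH="$KBIN:$PATH" kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done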
	I0924 01:04:03.458283   61323 api_server.go:52] waiting for apiserver process to appear ...
	I0924 01:04:03.458383   61323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:03.958648   61323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:04.459039   61323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:04.958614   61323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:04.994450   61323 api_server.go:72] duration metric: took 1.536167442s to wait for apiserver process to appear ...
	I0924 01:04:04.994485   61323 api_server.go:88] waiting for apiserver healthz status ...
	I0924 01:04:04.994530   61323 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0924 01:04:04.995139   61323 api_server.go:269] stopped: https://192.168.39.104:8443/healthz: Get "https://192.168.39.104:8443/healthz": dial tcp 192.168.39.104:8443: connect: connection refused
	I0924 01:04:05.513732   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:05.514247   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:04:05.514275   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:04:05.514201   62713 retry.go:31] will retry after 2.814717647s: waiting for machine to come up
	I0924 01:04:08.331550   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:08.331964   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:04:08.331983   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:04:08.331932   62713 retry.go:31] will retry after 2.942261445s: waiting for machine to come up
	I0924 01:04:05.495090   61323 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0924 01:04:07.946057   61323 api_server.go:279] https://192.168.39.104:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 01:04:07.946116   61323 api_server.go:103] status: https://192.168.39.104:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 01:04:07.946135   61323 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0924 01:04:08.018665   61323 api_server.go:279] https://192.168.39.104:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:08.018711   61323 api_server.go:103] status: https://192.168.39.104:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:08.018729   61323 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0924 01:04:08.027105   61323 api_server.go:279] https://192.168.39.104:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:08.027144   61323 api_server.go:103] status: https://192.168.39.104:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:08.494630   61323 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0924 01:04:08.500471   61323 api_server.go:279] https://192.168.39.104:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:08.500494   61323 api_server.go:103] status: https://192.168.39.104:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:08.995055   61323 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0924 01:04:09.017236   61323 api_server.go:279] https://192.168.39.104:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:09.017272   61323 api_server.go:103] status: https://192.168.39.104:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:09.494769   61323 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0924 01:04:09.500285   61323 api_server.go:279] https://192.168.39.104:8443/healthz returned 200:
	ok
	I0924 01:04:09.507440   61323 api_server.go:141] control plane version: v1.31.1
	I0924 01:04:09.507470   61323 api_server.go:131] duration metric: took 4.512953508s to wait for apiserver health ...
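The 403 and 500 bodies above are what kube-apiserver's /healthz returns while its post-start hooks (RBAC bootstrap roles, bootstrap-controller, priority classes) are still completing; once every hook reports ok the endpoint flips to a plain 200 "ok". The same per-check breakdown can be pulled by hand from the node, assuming the kubeadm admin kubeconfig generated above is in place:

    # verbose health check against the local apiserver
    sudo kubectl --kubeconfig /etc/kubernetes/admin.conf get --raw='/healthz?verbose'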
	I0924 01:04:09.507478   61323 cni.go:84] Creating CNI manager for ""
	I0924 01:04:09.507485   61323 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:04:09.509661   61323 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 01:04:09.511104   61323 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 01:04:09.529080   61323 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
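The 496-byte file pushed to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration referred to by "Configuring bridge CNI" above. Its exact contents are not in the log; the sketch below is only an illustration of what a bridge conflist for the 10.244.0.0/16 pod CIDR typically looks like, not the file minikube actually writes:

    # illustrative bridge CNI config; the real 1-k8s.conflist may differ
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "ranges": [[ { "subnet": "10.244.0.0/16" } ]] } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF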
	I0924 01:04:09.567695   61323 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 01:04:09.579425   61323 system_pods.go:59] 8 kube-system pods found
	I0924 01:04:09.579470   61323 system_pods.go:61] "coredns-7c65d6cfc9-xgs6g" [b975196f-e9e6-4e30-a49b-8d3031f73a21] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0924 01:04:09.579489   61323 system_pods.go:61] "etcd-embed-certs-650507" [c24d7e21-08a8-42bd-9def-1808d8a58e07] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0924 01:04:09.579501   61323 system_pods.go:61] "kube-apiserver-embed-certs-650507" [f1de6ed5-a87f-4d1d-8feb-d0f80851b5b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0924 01:04:09.579509   61323 system_pods.go:61] "kube-controller-manager-embed-certs-650507" [d0d454bf-b9d3-4dcb-957c-f1329e4e9e98] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0924 01:04:09.579516   61323 system_pods.go:61] "kube-proxy-qd4lg" [f06c009f-3c62-4e54-82fd-ca468fb05bbc] Running
	I0924 01:04:09.579523   61323 system_pods.go:61] "kube-scheduler-embed-certs-650507" [e4931370-821e-4289-9b2b-9b46d9f8394e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0924 01:04:09.579532   61323 system_pods.go:61] "metrics-server-6867b74b74-pc28v" [688d7bbe-9fee-450f-aecf-bbb3413a3633] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:04:09.579536   61323 system_pods.go:61] "storage-provisioner" [9e354a3c-e4f1-46e1-b5fb-de8243f41c29] Running
	I0924 01:04:09.579542   61323 system_pods.go:74] duration metric: took 11.824796ms to wait for pod list to return data ...
	I0924 01:04:09.579550   61323 node_conditions.go:102] verifying NodePressure condition ...
	I0924 01:04:09.584175   61323 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 01:04:09.584203   61323 node_conditions.go:123] node cpu capacity is 2
	I0924 01:04:09.584214   61323 node_conditions.go:105] duration metric: took 4.659859ms to run NodePressure ...
	I0924 01:04:09.584230   61323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:09.847130   61323 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0924 01:04:09.851985   61323 kubeadm.go:739] kubelet initialised
	I0924 01:04:09.852008   61323 kubeadm.go:740] duration metric: took 4.853319ms waiting for restarted kubelet to initialise ...
	I0924 01:04:09.852015   61323 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:04:09.857149   61323 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-xgs6g" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:11.275680   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:11.276135   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:04:11.276166   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:04:11.276102   62713 retry.go:31] will retry after 3.599939746s: waiting for machine to come up
	I0924 01:04:11.865712   61323 pod_ready.go:103] pod "coredns-7c65d6cfc9-xgs6g" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:13.864779   61323 pod_ready.go:93] pod "coredns-7c65d6cfc9-xgs6g" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:13.864801   61323 pod_ready.go:82] duration metric: took 4.007625744s for pod "coredns-7c65d6cfc9-xgs6g" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:13.864809   61323 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:16.233175   61989 start.go:364] duration metric: took 3m35.131018203s to acquireMachinesLock for "old-k8s-version-171598"
	I0924 01:04:16.233254   61989 start.go:96] Skipping create...Using existing machine configuration
	I0924 01:04:16.233262   61989 fix.go:54] fixHost starting: 
	I0924 01:04:16.233733   61989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:04:16.233787   61989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:04:16.255690   61989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42181
	I0924 01:04:16.256135   61989 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:04:16.256729   61989 main.go:141] libmachine: Using API Version  1
	I0924 01:04:16.256763   61989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:04:16.257122   61989 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:04:16.257365   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:04:16.257560   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetState
	I0924 01:04:16.259055   61989 fix.go:112] recreateIfNeeded on old-k8s-version-171598: state=Stopped err=<nil>
	I0924 01:04:16.259091   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	W0924 01:04:16.259266   61989 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 01:04:16.261327   61989 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-171598" ...
	I0924 01:04:14.879977   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:14.880533   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has current primary IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:14.880563   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Found IP for machine: 192.168.61.186
	I0924 01:04:14.880596   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Reserving static IP address...
	I0924 01:04:14.881148   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-465341", mac: "52:54:00:e4:1f:79", ip: "192.168.61.186"} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:14.881171   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | skip adding static IP to network mk-default-k8s-diff-port-465341 - found existing host DHCP lease matching {name: "default-k8s-diff-port-465341", mac: "52:54:00:e4:1f:79", ip: "192.168.61.186"}
	I0924 01:04:14.881188   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Reserved static IP address: 192.168.61.186
	I0924 01:04:14.881216   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for SSH to be available...
	I0924 01:04:14.881229   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | Getting to WaitForSSH function...
	I0924 01:04:14.883679   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:14.884060   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:14.884083   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:14.884214   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | Using SSH client type: external
	I0924 01:04:14.884248   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | Using SSH private key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa (-rw-------)
	I0924 01:04:14.884276   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.186 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 01:04:14.884287   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | About to run SSH command:
	I0924 01:04:14.884298   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | exit 0
	I0924 01:04:15.012764   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | SSH cmd err, output: <nil>: 
	I0924 01:04:15.013163   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetConfigRaw
	I0924 01:04:15.013983   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetIP
	I0924 01:04:15.016664   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.017173   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:15.017207   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.017440   61699 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/config.json ...
	I0924 01:04:15.017668   61699 machine.go:93] provisionDockerMachine start ...
	I0924 01:04:15.017687   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:04:15.017915   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:15.020388   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.020816   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:15.020839   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.021074   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:15.021249   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.021513   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.021681   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:15.021850   61699 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:15.022031   61699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.186 22 <nil> <nil>}
	I0924 01:04:15.022041   61699 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 01:04:15.132672   61699 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 01:04:15.132706   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetMachineName
	I0924 01:04:15.132994   61699 buildroot.go:166] provisioning hostname "default-k8s-diff-port-465341"
	I0924 01:04:15.133025   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetMachineName
	I0924 01:04:15.133268   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:15.135929   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.136371   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:15.136399   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.136578   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:15.136850   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.137008   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.137193   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:15.137407   61699 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:15.137589   61699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.186 22 <nil> <nil>}
	I0924 01:04:15.137610   61699 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-465341 && echo "default-k8s-diff-port-465341" | sudo tee /etc/hostname
	I0924 01:04:15.262142   61699 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-465341
	
	I0924 01:04:15.262174   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:15.265359   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.265736   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:15.265761   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.265962   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:15.266176   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.266335   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.266510   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:15.266705   61699 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:15.266903   61699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.186 22 <nil> <nil>}
	I0924 01:04:15.266926   61699 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-465341' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-465341/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-465341' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 01:04:15.385085   61699 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 01:04:15.385122   61699 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19696-7623/.minikube CaCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19696-7623/.minikube}
	I0924 01:04:15.385158   61699 buildroot.go:174] setting up certificates
	I0924 01:04:15.385174   61699 provision.go:84] configureAuth start
	I0924 01:04:15.385186   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetMachineName
	I0924 01:04:15.385556   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetIP
	I0924 01:04:15.388350   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.388798   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:15.388828   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.388985   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:15.391478   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.391793   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:15.391823   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.391952   61699 provision.go:143] copyHostCerts
	I0924 01:04:15.392016   61699 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem, removing ...
	I0924 01:04:15.392045   61699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0924 01:04:15.392115   61699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem (1679 bytes)
	I0924 01:04:15.392259   61699 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem, removing ...
	I0924 01:04:15.392272   61699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0924 01:04:15.392306   61699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem (1082 bytes)
	I0924 01:04:15.392406   61699 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem, removing ...
	I0924 01:04:15.392415   61699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0924 01:04:15.392440   61699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem (1123 bytes)
	I0924 01:04:15.392503   61699 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-465341 san=[127.0.0.1 192.168.61.186 default-k8s-diff-port-465341 localhost minikube]
	I0924 01:04:15.572588   61699 provision.go:177] copyRemoteCerts
	I0924 01:04:15.572682   61699 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 01:04:15.572718   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:15.575884   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.576356   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:15.576401   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.576627   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:15.576868   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.577099   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:15.577248   61699 sshutil.go:53] new ssh client: &{IP:192.168.61.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa Username:docker}
	I0924 01:04:15.662231   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0924 01:04:15.686800   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0924 01:04:15.709860   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 01:04:15.738063   61699 provision.go:87] duration metric: took 352.876914ms to configureAuth
	I0924 01:04:15.738105   61699 buildroot.go:189] setting minikube options for container-runtime
	I0924 01:04:15.738302   61699 config.go:182] Loaded profile config "default-k8s-diff-port-465341": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 01:04:15.738420   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:15.741231   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.741644   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:15.741693   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.741835   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:15.742036   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.742218   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.742359   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:15.742526   61699 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:15.742727   61699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.186 22 <nil> <nil>}
	I0924 01:04:15.742754   61699 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 01:04:15.986096   61699 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 01:04:15.986128   61699 machine.go:96] duration metric: took 968.446778ms to provisionDockerMachine
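	[Editor's sketch] The SSH command a few lines up drops CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarts CRI-O so it trusts the service CIDR as an insecure registry. The Go sketch below performs the equivalent steps on a local filesystem; in the real run this happens remotely over SSH, so treat this as an illustration only.

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Same option string as in the log; written locally instead of over SSH.
	const content = "\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
	if err := os.MkdirAll("/etc/sysconfig", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(content), 0o644); err != nil {
		panic(err)
	}
	// Restart CRI-O so the new option takes effect.
	if out, err := exec.Command("systemctl", "restart", "crio").CombinedOutput(); err != nil {
		panic(string(out))
	}
}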
	I0924 01:04:15.986143   61699 start.go:293] postStartSetup for "default-k8s-diff-port-465341" (driver="kvm2")
	I0924 01:04:15.986156   61699 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 01:04:15.986183   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:04:15.986639   61699 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 01:04:15.986674   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:15.989692   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.990094   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:15.990124   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.990407   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:15.990643   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.990826   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:15.990958   61699 sshutil.go:53] new ssh client: &{IP:192.168.61.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa Username:docker}
	I0924 01:04:16.079174   61699 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 01:04:16.083139   61699 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 01:04:16.083168   61699 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/addons for local assets ...
	I0924 01:04:16.083251   61699 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/files for local assets ...
	I0924 01:04:16.083363   61699 filesync.go:149] local asset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> 147932.pem in /etc/ssl/certs
	I0924 01:04:16.083486   61699 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 01:04:16.094571   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /etc/ssl/certs/147932.pem (1708 bytes)
	I0924 01:04:16.117327   61699 start.go:296] duration metric: took 131.16913ms for postStartSetup
	I0924 01:04:16.117364   61699 fix.go:56] duration metric: took 20.200222398s for fixHost
	I0924 01:04:16.117384   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:16.120507   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:16.120857   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:16.120899   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:16.121059   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:16.121325   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:16.121511   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:16.121687   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:16.121901   61699 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:16.122100   61699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.186 22 <nil> <nil>}
	I0924 01:04:16.122113   61699 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 01:04:16.232986   61699 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727139856.205476339
	
	I0924 01:04:16.233013   61699 fix.go:216] guest clock: 1727139856.205476339
	I0924 01:04:16.233024   61699 fix.go:229] Guest: 2024-09-24 01:04:16.205476339 +0000 UTC Remote: 2024-09-24 01:04:16.117368802 +0000 UTC m=+247.038042336 (delta=88.107537ms)
	I0924 01:04:16.233086   61699 fix.go:200] guest clock delta is within tolerance: 88.107537ms
	I0924 01:04:16.233094   61699 start.go:83] releasing machines lock for "default-k8s-diff-port-465341", held for 20.315992151s
	I0924 01:04:16.233133   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:04:16.233491   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetIP
	I0924 01:04:16.236719   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:16.237104   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:16.237134   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:16.237290   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:04:16.237850   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:04:16.238019   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:04:16.238116   61699 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 01:04:16.238167   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:16.238227   61699 ssh_runner.go:195] Run: cat /version.json
	I0924 01:04:16.238260   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:16.241123   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:16.241448   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:16.241598   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:16.241627   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:16.241732   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:16.241757   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:16.241916   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:16.241982   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:16.242152   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:16.242225   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:16.242351   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:16.242479   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:16.242543   61699 sshutil.go:53] new ssh client: &{IP:192.168.61.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa Username:docker}
	I0924 01:04:16.242880   61699 sshutil.go:53] new ssh client: &{IP:192.168.61.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa Username:docker}
	I0924 01:04:16.368841   61699 ssh_runner.go:195] Run: systemctl --version
	I0924 01:04:16.374990   61699 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 01:04:16.521604   61699 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 01:04:16.527198   61699 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 01:04:16.527290   61699 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 01:04:16.543251   61699 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 01:04:16.543278   61699 start.go:495] detecting cgroup driver to use...
	I0924 01:04:16.543357   61699 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 01:04:16.561775   61699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 01:04:16.576028   61699 docker.go:217] disabling cri-docker service (if available) ...
	I0924 01:04:16.576097   61699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 01:04:16.591757   61699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 01:04:16.607927   61699 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 01:04:16.753944   61699 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 01:04:16.917338   61699 docker.go:233] disabling docker service ...
	I0924 01:04:16.917401   61699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 01:04:16.935104   61699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 01:04:16.949717   61699 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 01:04:17.088275   61699 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 01:04:17.222093   61699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 01:04:17.236370   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 01:04:17.256277   61699 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 01:04:17.256360   61699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:17.266516   61699 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 01:04:17.266600   61699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:17.276647   61699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:17.288283   61699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:17.299232   61699 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 01:04:17.311336   61699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:17.329416   61699 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:17.351465   61699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
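	[Editor's sketch] The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf: it pins the pause image to registry.k8s.io/pause:3.10 and sets cgroup_manager to "cgroupfs". The Go sketch below applies the two key rewrites in place; the file path and keys come from the log, while the rewrite approach itself is the editor's illustration.

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
}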
	I0924 01:04:17.362248   61699 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 01:04:17.372102   61699 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 01:04:17.372154   61699 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 01:04:17.392055   61699 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
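	[Editor's sketch] The three commands above show the standard fallback: the sysctl check fails because /proc/sys/net/bridge/bridge-nf-call-iptables does not exist, so the br_netfilter module is loaded and IPv4 forwarding is enabled. The Go sketch below expresses that same check-then-modprobe logic; it is an illustration of the technique, not minikube's own code.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter loads br_netfilter if the bridge-nf sysctl is missing,
// then enables IPv4 forwarding, mirroring the logged commands.
func ensureBridgeNetfilter() error {
	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(key); os.IsNotExist(err) {
		// sysctl could not stat the key, so the module is not loaded yet.
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644)
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		panic(err)
	}
}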
	I0924 01:04:17.413641   61699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:04:17.541224   61699 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 01:04:17.655205   61699 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 01:04:17.655281   61699 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 01:04:17.660096   61699 start.go:563] Will wait 60s for crictl version
	I0924 01:04:17.660163   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:04:17.663880   61699 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 01:04:17.706878   61699 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 01:04:17.706959   61699 ssh_runner.go:195] Run: crio --version
	I0924 01:04:17.735377   61699 ssh_runner.go:195] Run: crio --version
	I0924 01:04:17.766744   61699 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 01:04:17.768253   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetIP
	I0924 01:04:17.771534   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:17.771952   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:17.771983   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:17.772230   61699 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0924 01:04:17.776486   61699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
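	[Editor's sketch] The bash one-liner above makes the host.minikube.internal entry in /etc/hosts idempotent: strip any existing entry, then append a fresh one for the gateway IP. The Go sketch below does the same thing directly (without the temp-file copy the one-liner uses), purely as an illustration.

package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.61.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	// Keep every line except a previous host.minikube.internal entry.
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}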
	I0924 01:04:17.792599   61699 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-465341 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-465341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.186 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 01:04:17.792744   61699 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 01:04:17.792813   61699 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 01:04:17.831837   61699 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0924 01:04:17.831929   61699 ssh_runner.go:195] Run: which lz4
	I0924 01:04:17.836193   61699 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 01:04:17.840562   61699 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 01:04:17.840596   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0924 01:04:15.871512   61323 pod_ready.go:93] pod "etcd-embed-certs-650507" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:15.871540   61323 pod_ready.go:82] duration metric: took 2.006723245s for pod "etcd-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:15.871552   61323 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:17.879872   61323 pod_ready.go:93] pod "kube-apiserver-embed-certs-650507" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:17.879899   61323 pod_ready.go:82] duration metric: took 2.008337801s for pod "kube-apiserver-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:17.879918   61323 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:19.888007   61323 pod_ready.go:93] pod "kube-controller-manager-embed-certs-650507" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:19.888041   61323 pod_ready.go:82] duration metric: took 2.008114424s for pod "kube-controller-manager-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:19.888056   61323 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-qd4lg" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:19.894805   61323 pod_ready.go:93] pod "kube-proxy-qd4lg" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:19.894844   61323 pod_ready.go:82] duration metric: took 6.779022ms for pod "kube-proxy-qd4lg" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:19.894862   61323 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:19.900353   61323 pod_ready.go:93] pod "kube-scheduler-embed-certs-650507" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:19.900387   61323 pod_ready.go:82] duration metric: took 5.513733ms for pod "kube-scheduler-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:19.900401   61323 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace to be "Ready" ...
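	[Editor's sketch] The pod_ready.go lines above record per-pod waits for the PodReady condition with a 4m0s cap. The Go sketch below shows that kind of readiness polling with client-go; the namespace, pod name, and timeout mirror the log, while the client-go usage is the editor's illustration rather than minikube's implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a single pod until its PodReady condition is True or the
// context expires, roughly what each pod_ready.go wait above amounts to.
func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("pod %s/%s not Ready: %w", ns, name, ctx.Err())
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	if err := waitPodReady(ctx, cs, "kube-system", "kube-proxy-qd4lg"); err != nil {
		panic(err)
	}
}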
	I0924 01:04:16.262929   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .Start
	I0924 01:04:16.263123   61989 main.go:141] libmachine: (old-k8s-version-171598) Ensuring networks are active...
	I0924 01:04:16.264062   61989 main.go:141] libmachine: (old-k8s-version-171598) Ensuring network default is active
	I0924 01:04:16.264543   61989 main.go:141] libmachine: (old-k8s-version-171598) Ensuring network mk-old-k8s-version-171598 is active
	I0924 01:04:16.264954   61989 main.go:141] libmachine: (old-k8s-version-171598) Getting domain xml...
	I0924 01:04:16.265899   61989 main.go:141] libmachine: (old-k8s-version-171598) Creating domain...
	I0924 01:04:17.566157   61989 main.go:141] libmachine: (old-k8s-version-171598) Waiting to get IP...
	I0924 01:04:17.567223   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:17.567644   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:17.567724   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:17.567625   62886 retry.go:31] will retry after 301.652575ms: waiting for machine to come up
	I0924 01:04:17.871163   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:17.871700   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:17.871729   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:17.871645   62886 retry.go:31] will retry after 337.632324ms: waiting for machine to come up
	I0924 01:04:18.211081   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:18.211954   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:18.212013   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:18.211892   62886 retry.go:31] will retry after 431.70455ms: waiting for machine to come up
	I0924 01:04:18.645408   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:18.646017   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:18.646044   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:18.645958   62886 retry.go:31] will retry after 582.966569ms: waiting for machine to come up
	I0924 01:04:19.230457   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:19.230954   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:19.230980   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:19.230897   62886 retry.go:31] will retry after 720.62326ms: waiting for machine to come up
	I0924 01:04:19.953023   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:19.953570   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:19.953603   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:19.953512   62886 retry.go:31] will retry after 688.597177ms: waiting for machine to come up
	I0924 01:04:20.644150   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:20.644636   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:20.644672   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:20.644578   62886 retry.go:31] will retry after 1.084671138s: waiting for machine to come up
	I0924 01:04:19.165501   61699 crio.go:462] duration metric: took 1.329329949s to copy over tarball
	I0924 01:04:19.165575   61699 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0924 01:04:21.323478   61699 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.157877766s)
	I0924 01:04:21.323509   61699 crio.go:469] duration metric: took 2.157979404s to extract the tarball
	I0924 01:04:21.323516   61699 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0924 01:04:21.360397   61699 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 01:04:21.401282   61699 crio.go:514] all images are preloaded for cri-o runtime.
	I0924 01:04:21.401309   61699 cache_images.go:84] Images are preloaded, skipping loading
	I0924 01:04:21.401319   61699 kubeadm.go:934] updating node { 192.168.61.186 8444 v1.31.1 crio true true} ...
	I0924 01:04:21.401441   61699 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-465341 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.186
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-465341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 01:04:21.401524   61699 ssh_runner.go:195] Run: crio config
	I0924 01:04:21.447706   61699 cni.go:84] Creating CNI manager for ""
	I0924 01:04:21.447730   61699 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:04:21.447741   61699 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 01:04:21.447766   61699 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.186 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-465341 NodeName:default-k8s-diff-port-465341 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.186"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.186 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 01:04:21.447939   61699 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.186
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-465341"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.186
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.186"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 01:04:21.448022   61699 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 01:04:21.457882   61699 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 01:04:21.457967   61699 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 01:04:21.467329   61699 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0924 01:04:21.483464   61699 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 01:04:21.500880   61699 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0924 01:04:21.517179   61699 ssh_runner.go:195] Run: grep 192.168.61.186	control-plane.minikube.internal$ /etc/hosts
	I0924 01:04:21.521032   61699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.186	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:04:21.532339   61699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:04:21.655583   61699 ssh_runner.go:195] Run: sudo systemctl start kubelet
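Note: the kubeadm, kubelet and kube-proxy configuration rendered above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new before the kubelet is restarted. A minimal way to inspect (and, with a v1.26+ kubeadm binary, validate) the staged file from the host is sketched below; the profile name and binary path are taken from this log, the rest is an assumption about the local environment.

  # dump the staged kubeadm config from the minikube VM (profile name from this log)
  minikube ssh -p default-k8s-diff-port-465341 -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
  # validate it with the kubeadm binary minikube installed on the node
  minikube ssh -p default-k8s-diff-port-465341 -- sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new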
	I0924 01:04:21.671964   61699 certs.go:68] Setting up /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341 for IP: 192.168.61.186
	I0924 01:04:21.672019   61699 certs.go:194] generating shared ca certs ...
	I0924 01:04:21.672044   61699 certs.go:226] acquiring lock for ca certs: {Name:mk91de42c259e96c17f08dd58c70c00821bd0735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:04:21.672273   61699 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key
	I0924 01:04:21.672390   61699 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key
	I0924 01:04:21.672409   61699 certs.go:256] generating profile certs ...
	I0924 01:04:21.672536   61699 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/client.key
	I0924 01:04:21.672629   61699 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/apiserver.key.b6f5ff18
	I0924 01:04:21.672696   61699 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/proxy-client.key
	I0924 01:04:21.672940   61699 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem (1338 bytes)
	W0924 01:04:21.672987   61699 certs.go:480] ignoring /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793_empty.pem, impossibly tiny 0 bytes
	I0924 01:04:21.672999   61699 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem (1675 bytes)
	I0924 01:04:21.673029   61699 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem (1082 bytes)
	I0924 01:04:21.673060   61699 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem (1123 bytes)
	I0924 01:04:21.673091   61699 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem (1679 bytes)
	I0924 01:04:21.673133   61699 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem (1708 bytes)
	I0924 01:04:21.673884   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 01:04:21.706165   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 01:04:21.735352   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 01:04:21.763358   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0924 01:04:21.786284   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0924 01:04:21.814844   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 01:04:21.839773   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 01:04:21.866549   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0924 01:04:21.889901   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 01:04:21.914875   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem --> /usr/share/ca-certificates/14793.pem (1338 bytes)
	I0924 01:04:21.939116   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /usr/share/ca-certificates/147932.pem (1708 bytes)
	I0924 01:04:21.963264   61699 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 01:04:21.980912   61699 ssh_runner.go:195] Run: openssl version
	I0924 01:04:21.986725   61699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 01:04:21.998128   61699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:22.002832   61699 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:22.002903   61699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:22.008847   61699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 01:04:22.019274   61699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14793.pem && ln -fs /usr/share/ca-certificates/14793.pem /etc/ssl/certs/14793.pem"
	I0924 01:04:22.030110   61699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14793.pem
	I0924 01:04:22.035920   61699 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 23:55 /usr/share/ca-certificates/14793.pem
	I0924 01:04:22.035996   61699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14793.pem
	I0924 01:04:22.043505   61699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14793.pem /etc/ssl/certs/51391683.0"
	I0924 01:04:22.057224   61699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147932.pem && ln -fs /usr/share/ca-certificates/147932.pem /etc/ssl/certs/147932.pem"
	I0924 01:04:22.067596   61699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147932.pem
	I0924 01:04:22.071957   61699 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 23:55 /usr/share/ca-certificates/147932.pem
	I0924 01:04:22.072029   61699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147932.pem
	I0924 01:04:22.077495   61699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147932.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 01:04:22.087627   61699 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 01:04:22.092049   61699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 01:04:22.097908   61699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 01:04:22.103716   61699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 01:04:22.109871   61699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 01:04:22.116088   61699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 01:04:22.121760   61699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
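The six openssl runs above use -checkend 86400, which exits non-zero when the certificate expires within the next 24 hours; minikube appears to use this to decide whether control-plane certificates must be regenerated before the restart. A quick manual equivalent, assuming the same VM and certificate paths shown in this log:

  # print the expiry date instead of a bare pass/fail
  minikube ssh -p default-k8s-diff-port-465341 -- sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver-kubelet-client.crt
  # the same check the harness runs (exit code 1 means the cert expires within 24h)
  minikube ssh -p default-k8s-diff-port-465341 -- sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400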
	I0924 01:04:22.127473   61699 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-465341 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-465341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.186 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 01:04:22.127563   61699 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 01:04:22.127613   61699 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 01:04:22.167951   61699 cri.go:89] found id: ""
	I0924 01:04:22.168054   61699 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 01:04:22.177878   61699 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 01:04:22.177898   61699 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 01:04:22.177949   61699 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 01:04:22.187116   61699 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 01:04:22.188577   61699 kubeconfig.go:125] found "default-k8s-diff-port-465341" server: "https://192.168.61.186:8444"
	I0924 01:04:22.191744   61699 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 01:04:22.200936   61699 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.186
	I0924 01:04:22.200967   61699 kubeadm.go:1160] stopping kube-system containers ...
	I0924 01:04:22.200979   61699 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0924 01:04:22.201039   61699 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 01:04:22.247804   61699 cri.go:89] found id: ""
	I0924 01:04:22.247888   61699 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 01:04:22.263853   61699 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 01:04:22.273254   61699 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 01:04:22.273271   61699 kubeadm.go:157] found existing configuration files:
	
	I0924 01:04:22.273327   61699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0924 01:04:22.281724   61699 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 01:04:22.281790   61699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 01:04:22.290823   61699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0924 01:04:22.299422   61699 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 01:04:22.299482   61699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 01:04:22.308961   61699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0924 01:04:22.317922   61699 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 01:04:22.318010   61699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 01:04:22.326980   61699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0924 01:04:22.335995   61699 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 01:04:22.336084   61699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 01:04:22.345002   61699 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 01:04:22.354302   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:22.462157   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:23.380163   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:23.610795   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:23.679134   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:23.747119   61699 api_server.go:52] waiting for apiserver process to appear ...
	I0924 01:04:23.747191   61699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:21.909834   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:24.104163   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:21.730823   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:21.731385   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:21.731411   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:21.731351   62886 retry.go:31] will retry after 1.051424847s: waiting for machine to come up
	I0924 01:04:22.784644   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:22.785194   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:22.785223   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:22.785138   62886 retry.go:31] will retry after 1.750498954s: waiting for machine to come up
	I0924 01:04:24.537680   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:24.538085   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:24.538109   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:24.538039   62886 retry.go:31] will retry after 2.015183238s: waiting for machine to come up
	I0924 01:04:24.247859   61699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:24.748076   61699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:25.248220   61699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:25.747481   61699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:25.774137   61699 api_server.go:72] duration metric: took 2.027016323s to wait for apiserver process to appear ...
	I0924 01:04:25.774167   61699 api_server.go:88] waiting for apiserver healthz status ...
	I0924 01:04:25.774194   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:25.774901   61699 api_server.go:269] stopped: https://192.168.61.186:8444/healthz: Get "https://192.168.61.186:8444/healthz": dial tcp 192.168.61.186:8444: connect: connection refused
	I0924 01:04:26.275226   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:28.290581   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 01:04:28.290621   61699 api_server.go:103] status: https://192.168.61.186:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 01:04:28.290637   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:28.321353   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 01:04:28.321386   61699 api_server.go:103] status: https://192.168.61.186:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 01:04:28.775068   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:28.779873   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:28.779896   61699 api_server.go:103] status: https://192.168.61.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:26.408349   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:28.409816   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:26.555221   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:26.555674   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:26.555695   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:26.555634   62886 retry.go:31] will retry after 2.568414115s: waiting for machine to come up
	I0924 01:04:29.127625   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:29.128130   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:29.128149   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:29.128108   62886 retry.go:31] will retry after 2.207252231s: waiting for machine to come up
	I0924 01:04:29.275326   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:29.284304   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:29.284360   61699 api_server.go:103] status: https://192.168.61.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:29.774975   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:29.779470   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:29.779503   61699 api_server.go:103] status: https://192.168.61.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:30.275137   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:30.279256   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:30.279287   61699 api_server.go:103] status: https://192.168.61.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:30.774874   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:30.779081   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:30.779110   61699 api_server.go:103] status: https://192.168.61.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:31.275163   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:31.279417   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:31.279446   61699 api_server.go:103] status: https://192.168.61.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:31.775022   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:31.780092   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 200:
	ok
	I0924 01:04:31.787643   61699 api_server.go:141] control plane version: v1.31.1
	I0924 01:04:31.787672   61699 api_server.go:131] duration metric: took 6.013498176s to wait for apiserver health ...
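The healthz progression above (403 -> 500 -> 200) is the apiserver's normal staged start: the anonymous probe is likely rejected with 403 until the rbac/bootstrap-roles post-start hook installs the default bindings that let unauthenticated clients read /healthz, a 500 is returned while individual post-start hooks still report failed, and 200 arrives once every hook is ok. The same hook-by-hook breakdown can be fetched by hand; the commands below are a sketch that assumes the kubeconfig context minikube created for this profile.

  # verbose health report through the configured context
  kubectl --context default-k8s-diff-port-465341 get --raw '/healthz?verbose'
  # or hit the endpoint directly, as the test does (self-signed cert, hence -k)
  curl -k 'https://192.168.61.186:8444/healthz?verbose'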
	I0924 01:04:31.787680   61699 cni.go:84] Creating CNI manager for ""
	I0924 01:04:31.787686   61699 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:04:31.789733   61699 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 01:04:31.791140   61699 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 01:04:31.801441   61699 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0924 01:04:31.819890   61699 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 01:04:31.828128   61699 system_pods.go:59] 8 kube-system pods found
	I0924 01:04:31.828160   61699 system_pods.go:61] "coredns-7c65d6cfc9-xxdh2" [297fe292-94bf-468d-9e34-089c4a87429b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0924 01:04:31.828168   61699 system_pods.go:61] "etcd-default-k8s-diff-port-465341" [3bd68a1c-e928-40f0-927f-3cde2198cace] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0924 01:04:31.828177   61699 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-465341" [0a195b76-82ba-4d99-b5a3-ba918ab0b83d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0924 01:04:31.828186   61699 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-465341" [9d445611-60f3-4113-bc92-ea8df37ca2f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0924 01:04:31.828191   61699 system_pods.go:61] "kube-proxy-nf8mp" [cdef3aea-b1a8-438b-994f-c3212def9aea] Running
	I0924 01:04:31.828196   61699 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-465341" [4ff703b1-44cd-421a-891c-9f1e5d799026] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0924 01:04:31.828200   61699 system_pods.go:61] "metrics-server-6867b74b74-jtx6r" [d83599a7-f77d-4fbb-b76f-67d33c60b4a6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:04:31.828203   61699 system_pods.go:61] "storage-provisioner" [b09ad6ef-7517-4de2-a70c-83876efd804e] Running
	I0924 01:04:31.828209   61699 system_pods.go:74] duration metric: took 8.300337ms to wait for pod list to return data ...
	I0924 01:04:31.828215   61699 node_conditions.go:102] verifying NodePressure condition ...
	I0924 01:04:31.831528   61699 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 01:04:31.831550   61699 node_conditions.go:123] node cpu capacity is 2
	I0924 01:04:31.831561   61699 node_conditions.go:105] duration metric: took 3.341719ms to run NodePressure ...
	I0924 01:04:31.831576   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:32.101590   61699 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0924 01:04:32.105656   61699 kubeadm.go:739] kubelet initialised
	I0924 01:04:32.105679   61699 kubeadm.go:740] duration metric: took 4.062709ms waiting for restarted kubelet to initialise ...
	I0924 01:04:32.105691   61699 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:04:32.110237   61699 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-xxdh2" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:32.115057   61699 pod_ready.go:98] node "default-k8s-diff-port-465341" hosting pod "coredns-7c65d6cfc9-xxdh2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.115090   61699 pod_ready.go:82] duration metric: took 4.825694ms for pod "coredns-7c65d6cfc9-xxdh2" in "kube-system" namespace to be "Ready" ...
	E0924 01:04:32.115102   61699 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-465341" hosting pod "coredns-7c65d6cfc9-xxdh2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.115110   61699 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:32.119506   61699 pod_ready.go:98] node "default-k8s-diff-port-465341" hosting pod "etcd-default-k8s-diff-port-465341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.119534   61699 pod_ready.go:82] duration metric: took 4.415876ms for pod "etcd-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	E0924 01:04:32.119546   61699 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-465341" hosting pod "etcd-default-k8s-diff-port-465341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.119558   61699 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:32.124199   61699 pod_ready.go:98] node "default-k8s-diff-port-465341" hosting pod "kube-apiserver-default-k8s-diff-port-465341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.124248   61699 pod_ready.go:82] duration metric: took 4.660764ms for pod "kube-apiserver-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	E0924 01:04:32.124266   61699 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-465341" hosting pod "kube-apiserver-default-k8s-diff-port-465341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.124285   61699 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:32.223553   61699 pod_ready.go:98] node "default-k8s-diff-port-465341" hosting pod "kube-controller-manager-default-k8s-diff-port-465341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.223596   61699 pod_ready.go:82] duration metric: took 99.284751ms for pod "kube-controller-manager-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	E0924 01:04:32.223606   61699 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-465341" hosting pod "kube-controller-manager-default-k8s-diff-port-465341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.223613   61699 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-nf8mp" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:32.622500   61699 pod_ready.go:98] node "default-k8s-diff-port-465341" hosting pod "kube-proxy-nf8mp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.622527   61699 pod_ready.go:82] duration metric: took 398.907418ms for pod "kube-proxy-nf8mp" in "kube-system" namespace to be "Ready" ...
	E0924 01:04:32.622538   61699 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-465341" hosting pod "kube-proxy-nf8mp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.622545   61699 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:33.023370   61699 pod_ready.go:98] node "default-k8s-diff-port-465341" hosting pod "kube-scheduler-default-k8s-diff-port-465341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:33.023430   61699 pod_ready.go:82] duration metric: took 400.874003ms for pod "kube-scheduler-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	E0924 01:04:33.023458   61699 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-465341" hosting pod "kube-scheduler-default-k8s-diff-port-465341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:33.023472   61699 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:33.422810   61699 pod_ready.go:98] node "default-k8s-diff-port-465341" hosting pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:33.422841   61699 pod_ready.go:82] duration metric: took 399.35051ms for pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace to be "Ready" ...
	E0924 01:04:33.422851   61699 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-465341" hosting pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:33.422859   61699 pod_ready.go:39] duration metric: took 1.317159668s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:04:33.422874   61699 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 01:04:33.434449   61699 ops.go:34] apiserver oom_adj: -16
	I0924 01:04:33.434473   61699 kubeadm.go:597] duration metric: took 11.256568213s to restartPrimaryControlPlane
	I0924 01:04:33.434481   61699 kubeadm.go:394] duration metric: took 11.307014166s to StartCluster
	I0924 01:04:33.434501   61699 settings.go:142] acquiring lock: {Name:mk2498060bfcc1414033a486e76a223de88ed325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:04:33.434571   61699 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 01:04:33.436172   61699 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/kubeconfig: {Name:mk3b29105ccc2e600682c419fcfd45ce5d22b5fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:04:33.436515   61699 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.186 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 01:04:33.436732   61699 config.go:182] Loaded profile config "default-k8s-diff-port-465341": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 01:04:33.436686   61699 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0924 01:04:33.436809   61699 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-465341"
	I0924 01:04:33.436815   61699 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-465341"
	I0924 01:04:33.436830   61699 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-465341"
	I0924 01:04:33.436832   61699 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-465341"
	I0924 01:04:33.436864   61699 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-465341"
	W0924 01:04:33.436877   61699 addons.go:243] addon metrics-server should already be in state true
	I0924 01:04:33.436908   61699 host.go:66] Checking if "default-k8s-diff-port-465341" exists ...
	W0924 01:04:33.436842   61699 addons.go:243] addon storage-provisioner should already be in state true
	I0924 01:04:33.436935   61699 host.go:66] Checking if "default-k8s-diff-port-465341" exists ...
	I0924 01:04:33.436831   61699 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-465341"
	I0924 01:04:33.437322   61699 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:04:33.437370   61699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:04:33.437377   61699 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:04:33.437412   61699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:04:33.437458   61699 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:04:33.437483   61699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:04:33.438259   61699 out.go:177] * Verifying Kubernetes components...
	I0924 01:04:33.439923   61699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:04:33.453108   61699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37623
	I0924 01:04:33.453545   61699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38225
	I0924 01:04:33.453608   61699 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:04:33.453916   61699 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:04:33.454125   61699 main.go:141] libmachine: Using API Version  1
	I0924 01:04:33.454152   61699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:04:33.454461   61699 main.go:141] libmachine: Using API Version  1
	I0924 01:04:33.454486   61699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:04:33.454494   61699 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:04:33.454806   61699 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:04:33.455065   61699 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:04:33.455111   61699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:04:33.455360   61699 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:04:33.455404   61699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:04:33.456716   61699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41127
	I0924 01:04:33.457163   61699 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:04:33.457688   61699 main.go:141] libmachine: Using API Version  1
	I0924 01:04:33.457727   61699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:04:33.458031   61699 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:04:33.458242   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetState
	I0924 01:04:33.461814   61699 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-465341"
	W0924 01:04:33.461835   61699 addons.go:243] addon default-storageclass should already be in state true
	I0924 01:04:33.461864   61699 host.go:66] Checking if "default-k8s-diff-port-465341" exists ...
	I0924 01:04:33.462230   61699 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:04:33.462273   61699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:04:33.471783   61699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44977
	I0924 01:04:33.472043   61699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33459
	I0924 01:04:33.472300   61699 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:04:33.472550   61699 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:04:33.472858   61699 main.go:141] libmachine: Using API Version  1
	I0924 01:04:33.472875   61699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:04:33.472994   61699 main.go:141] libmachine: Using API Version  1
	I0924 01:04:33.473003   61699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:04:33.473234   61699 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:04:33.473366   61699 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:04:33.473413   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetState
	I0924 01:04:33.473503   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetState
	I0924 01:04:33.475140   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:04:33.475553   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:04:33.477287   61699 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0924 01:04:33.477293   61699 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:04:33.478708   61699 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0924 01:04:33.478720   61699 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0924 01:04:33.478737   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:33.478836   61699 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 01:04:33.478863   61699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 01:04:33.478889   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:33.478971   61699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45635
	I0924 01:04:33.479636   61699 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:04:33.480029   61699 main.go:141] libmachine: Using API Version  1
	I0924 01:04:33.480041   61699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:04:33.480396   61699 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:04:33.482306   61699 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:04:33.482343   61699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:04:33.483280   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:33.483373   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:33.483732   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:33.483769   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:33.483873   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:33.483892   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:33.483958   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:33.484111   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:33.484236   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:33.484255   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:33.484413   61699 sshutil.go:53] new ssh client: &{IP:192.168.61.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa Username:docker}
	I0924 01:04:33.484472   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:33.484738   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:33.484866   61699 sshutil.go:53] new ssh client: &{IP:192.168.61.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa Username:docker}
	I0924 01:04:33.519981   61699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37109
	I0924 01:04:33.520440   61699 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:04:33.520996   61699 main.go:141] libmachine: Using API Version  1
	I0924 01:04:33.521028   61699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:04:33.521497   61699 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:04:33.521701   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetState
	I0924 01:04:33.523331   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:04:33.523576   61699 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 01:04:33.523591   61699 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 01:04:33.523625   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:33.526668   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:33.527211   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:33.527244   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:33.527471   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:33.527702   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:33.527889   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:33.528059   61699 sshutil.go:53] new ssh client: &{IP:192.168.61.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa Username:docker}
	I0924 01:04:33.645903   61699 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 01:04:33.663805   61699 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-465341" to be "Ready" ...
	I0924 01:04:33.749720   61699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 01:04:33.751631   61699 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0924 01:04:33.751649   61699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0924 01:04:33.755330   61699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 01:04:33.812231   61699 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0924 01:04:33.812257   61699 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0924 01:04:33.847216   61699 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 01:04:33.847240   61699 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0924 01:04:33.932057   61699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 01:04:34.781871   61699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.026510893s)
	I0924 01:04:34.781939   61699 main.go:141] libmachine: Making call to close driver server
	I0924 01:04:34.781950   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .Close
	I0924 01:04:34.781887   61699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.032127769s)
	I0924 01:04:34.782009   61699 main.go:141] libmachine: Making call to close driver server
	I0924 01:04:34.782023   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .Close
	I0924 01:04:34.782293   61699 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:04:34.782309   61699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:04:34.782318   61699 main.go:141] libmachine: Making call to close driver server
	I0924 01:04:34.782326   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .Close
	I0924 01:04:34.782361   61699 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:04:34.782369   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | Closing plugin on server side
	I0924 01:04:34.782375   61699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:04:34.782389   61699 main.go:141] libmachine: Making call to close driver server
	I0924 01:04:34.782404   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .Close
	I0924 01:04:34.782629   61699 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:04:34.782643   61699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:04:34.782645   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | Closing plugin on server side
	I0924 01:04:34.782673   61699 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:04:34.782683   61699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:04:34.790740   61699 main.go:141] libmachine: Making call to close driver server
	I0924 01:04:34.790757   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .Close
	I0924 01:04:34.790990   61699 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:04:34.791010   61699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:04:34.791013   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | Closing plugin on server side
	I0924 01:04:34.871488   61699 main.go:141] libmachine: Making call to close driver server
	I0924 01:04:34.871516   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .Close
	I0924 01:04:34.871809   61699 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:04:34.871826   61699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:04:34.871834   61699 main.go:141] libmachine: Making call to close driver server
	I0924 01:04:34.871841   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .Close
	I0924 01:04:34.872103   61699 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:04:34.872125   61699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:04:34.872117   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | Closing plugin on server side
	I0924 01:04:34.872136   61699 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-465341"
	I0924 01:04:34.874133   61699 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0924 01:04:30.907606   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:33.406280   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:31.337368   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:31.338025   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:31.338128   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:31.338011   62886 retry.go:31] will retry after 4.137847727s: waiting for machine to come up
	I0924 01:04:35.478410   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.478991   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has current primary IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.479016   61989 main.go:141] libmachine: (old-k8s-version-171598) Found IP for machine: 192.168.83.3
	I0924 01:04:35.479029   61989 main.go:141] libmachine: (old-k8s-version-171598) Reserving static IP address...
	I0924 01:04:35.479586   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "old-k8s-version-171598", mac: "52:54:00:20:3c:a7", ip: "192.168.83.3"} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:35.479607   61989 main.go:141] libmachine: (old-k8s-version-171598) Reserved static IP address: 192.168.83.3
	I0924 01:04:35.479626   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | skip adding static IP to network mk-old-k8s-version-171598 - found existing host DHCP lease matching {name: "old-k8s-version-171598", mac: "52:54:00:20:3c:a7", ip: "192.168.83.3"}
	I0924 01:04:35.479643   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | Getting to WaitForSSH function...
	I0924 01:04:35.479659   61989 main.go:141] libmachine: (old-k8s-version-171598) Waiting for SSH to be available...
	I0924 01:04:35.482028   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.482377   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:35.482419   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.482499   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | Using SSH client type: external
	I0924 01:04:35.482550   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | Using SSH private key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa (-rw-------)
	I0924 01:04:35.482585   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 01:04:35.482600   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | About to run SSH command:
	I0924 01:04:35.482614   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | exit 0
	I0924 01:04:35.613364   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | SSH cmd err, output: <nil>: 
	I0924 01:04:35.613847   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetConfigRaw
	I0924 01:04:35.614543   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetIP
	I0924 01:04:35.617366   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.617742   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:35.617774   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.618068   61989 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/config.json ...
	I0924 01:04:35.618260   61989 machine.go:93] provisionDockerMachine start ...
	I0924 01:04:35.618279   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:04:35.618489   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:35.621130   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.621472   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:35.621497   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.621722   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:35.621914   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:35.622091   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:35.622354   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:35.622558   61989 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:35.622749   61989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.83.3 22 <nil> <nil>}
	I0924 01:04:35.622760   61989 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 01:04:35.736637   61989 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 01:04:35.736661   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetMachineName
	I0924 01:04:35.736943   61989 buildroot.go:166] provisioning hostname "old-k8s-version-171598"
	I0924 01:04:35.736973   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetMachineName
	I0924 01:04:35.737151   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:35.739921   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.740304   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:35.740362   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.740502   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:35.740678   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:35.740851   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:35.740994   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:35.741218   61989 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:35.741409   61989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.83.3 22 <nil> <nil>}
	I0924 01:04:35.741423   61989 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-171598 && echo "old-k8s-version-171598" | sudo tee /etc/hostname
	I0924 01:04:35.866963   61989 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-171598
	
	I0924 01:04:35.866994   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:35.870342   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.870860   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:35.870893   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.871145   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:35.871406   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:35.871638   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:35.871850   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:35.872050   61989 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:35.872253   61989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.83.3 22 <nil> <nil>}
	I0924 01:04:35.872276   61989 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-171598' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-171598/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-171598' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 01:04:36.717274   61070 start.go:364] duration metric: took 55.446152288s to acquireMachinesLock for "no-preload-674057"
	I0924 01:04:36.717335   61070 start.go:96] Skipping create...Using existing machine configuration
	I0924 01:04:36.717344   61070 fix.go:54] fixHost starting: 
	I0924 01:04:36.717781   61070 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:04:36.717821   61070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:04:36.739062   61070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46693
	I0924 01:04:36.739602   61070 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:04:36.740307   61070 main.go:141] libmachine: Using API Version  1
	I0924 01:04:36.740366   61070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:04:36.740767   61070 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:04:36.741058   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:04:36.741223   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetState
	I0924 01:04:36.743313   61070 fix.go:112] recreateIfNeeded on no-preload-674057: state=Stopped err=<nil>
	I0924 01:04:36.743339   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	W0924 01:04:36.743512   61070 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 01:04:36.745694   61070 out.go:177] * Restarting existing kvm2 VM for "no-preload-674057" ...
	I0924 01:04:35.998933   61989 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 01:04:35.998962   61989 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19696-7623/.minikube CaCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19696-7623/.minikube}
	I0924 01:04:35.998983   61989 buildroot.go:174] setting up certificates
	I0924 01:04:35.998994   61989 provision.go:84] configureAuth start
	I0924 01:04:35.999005   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetMachineName
	I0924 01:04:35.999359   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetIP
	I0924 01:04:36.002499   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.003027   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.003052   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.003167   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:36.005508   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.005773   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.005796   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.005909   61989 provision.go:143] copyHostCerts
	I0924 01:04:36.005967   61989 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem, removing ...
	I0924 01:04:36.005986   61989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0924 01:04:36.006037   61989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem (1082 bytes)
	I0924 01:04:36.006129   61989 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem, removing ...
	I0924 01:04:36.006137   61989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0924 01:04:36.006156   61989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem (1123 bytes)
	I0924 01:04:36.006209   61989 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem, removing ...
	I0924 01:04:36.006216   61989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0924 01:04:36.006237   61989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem (1679 bytes)
	I0924 01:04:36.006310   61989 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-171598 san=[127.0.0.1 192.168.83.3 localhost minikube old-k8s-version-171598]
	I0924 01:04:36.084609   61989 provision.go:177] copyRemoteCerts
	I0924 01:04:36.084671   61989 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 01:04:36.084698   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:36.087740   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.088046   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.088075   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.088278   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:36.088523   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.088716   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:36.088854   61989 sshutil.go:53] new ssh client: &{IP:192.168.83.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa Username:docker}
	I0924 01:04:36.178597   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0924 01:04:36.202768   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0924 01:04:36.225933   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0924 01:04:36.250014   61989 provision.go:87] duration metric: took 251.005829ms to configureAuth
	I0924 01:04:36.250046   61989 buildroot.go:189] setting minikube options for container-runtime
	I0924 01:04:36.250369   61989 config.go:182] Loaded profile config "old-k8s-version-171598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0924 01:04:36.250453   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:36.253290   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.253912   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.253943   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.254242   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:36.254474   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.254650   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.254764   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:36.254958   61989 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:36.255124   61989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.83.3 22 <nil> <nil>}
	I0924 01:04:36.255138   61989 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 01:04:36.472324   61989 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 01:04:36.472381   61989 machine.go:96] duration metric: took 854.106776ms to provisionDockerMachine
	I0924 01:04:36.472401   61989 start.go:293] postStartSetup for "old-k8s-version-171598" (driver="kvm2")
	I0924 01:04:36.472419   61989 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 01:04:36.472451   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:04:36.472814   61989 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 01:04:36.472849   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:36.475567   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.475941   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.475969   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.476125   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:36.476403   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.476614   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:36.476831   61989 sshutil.go:53] new ssh client: &{IP:192.168.83.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa Username:docker}
	I0924 01:04:36.562688   61989 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 01:04:36.566476   61989 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 01:04:36.566501   61989 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/addons for local assets ...
	I0924 01:04:36.566561   61989 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/files for local assets ...
	I0924 01:04:36.566635   61989 filesync.go:149] local asset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> 147932.pem in /etc/ssl/certs
	I0924 01:04:36.566724   61989 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 01:04:36.576132   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /etc/ssl/certs/147932.pem (1708 bytes)
	I0924 01:04:36.599696   61989 start.go:296] duration metric: took 127.276787ms for postStartSetup
	I0924 01:04:36.599738   61989 fix.go:56] duration metric: took 20.366477202s for fixHost
	I0924 01:04:36.599763   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:36.603462   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.603836   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.603867   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.604057   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:36.604500   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.604721   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.604878   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:36.605041   61989 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:36.605285   61989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.83.3 22 <nil> <nil>}
	I0924 01:04:36.605303   61989 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 01:04:36.717061   61989 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727139876.688490589
	
	I0924 01:04:36.717091   61989 fix.go:216] guest clock: 1727139876.688490589
	I0924 01:04:36.717102   61989 fix.go:229] Guest: 2024-09-24 01:04:36.688490589 +0000 UTC Remote: 2024-09-24 01:04:36.599742488 +0000 UTC m=+235.652611441 (delta=88.748101ms)
	I0924 01:04:36.717157   61989 fix.go:200] guest clock delta is within tolerance: 88.748101ms
	I0924 01:04:36.717165   61989 start.go:83] releasing machines lock for "old-k8s-version-171598", held for 20.483937438s
	I0924 01:04:36.717199   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:04:36.717499   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetIP
	I0924 01:04:36.720466   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.720959   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.720986   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.721189   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:04:36.721763   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:04:36.721965   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:04:36.722073   61989 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 01:04:36.722118   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:36.722187   61989 ssh_runner.go:195] Run: cat /version.json
	I0924 01:04:36.722215   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:36.725171   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.725384   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.725669   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.725694   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.725858   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:36.725970   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.726016   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.726065   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.726249   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:36.726254   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:36.726494   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.726513   61989 sshutil.go:53] new ssh client: &{IP:192.168.83.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa Username:docker}
	I0924 01:04:36.726657   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:36.727049   61989 sshutil.go:53] new ssh client: &{IP:192.168.83.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa Username:docker}
	I0924 01:04:36.845385   61989 ssh_runner.go:195] Run: systemctl --version
	I0924 01:04:36.853307   61989 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 01:04:37.001850   61989 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 01:04:37.009873   61989 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 01:04:37.009948   61989 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 01:04:37.032269   61989 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 01:04:37.032299   61989 start.go:495] detecting cgroup driver to use...
	I0924 01:04:37.032403   61989 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 01:04:37.056250   61989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 01:04:37.072827   61989 docker.go:217] disabling cri-docker service (if available) ...
	I0924 01:04:37.072903   61989 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 01:04:37.090639   61989 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 01:04:37.107525   61989 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 01:04:37.235495   61989 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 01:04:37.410971   61989 docker.go:233] disabling docker service ...
	I0924 01:04:37.411034   61989 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 01:04:37.427815   61989 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 01:04:37.444121   61989 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 01:04:37.568933   61989 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 01:04:37.700008   61989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 01:04:37.715529   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 01:04:37.736908   61989 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0924 01:04:37.736980   61989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:37.748540   61989 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 01:04:37.748590   61989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:37.759301   61989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:37.771008   61989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:37.782080   61989 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 01:04:37.793756   61989 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 01:04:37.803444   61989 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 01:04:37.803525   61989 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 01:04:37.818012   61989 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 01:04:37.829019   61989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:04:37.978885   61989 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 01:04:38.086263   61989 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 01:04:38.086353   61989 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 01:04:38.093479   61989 start.go:563] Will wait 60s for crictl version
	I0924 01:04:38.093573   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:38.097486   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 01:04:38.138781   61989 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 01:04:38.138872   61989 ssh_runner.go:195] Run: crio --version
	I0924 01:04:38.166832   61989 ssh_runner.go:195] Run: crio --version
	I0924 01:04:38.199764   61989 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
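Note: the 61989 lines above show the runtime being prepared for the old-k8s-version profile: docker/containerd are masked, /etc/crictl.yaml is written, CRI-O's pause_image and cgroup_manager are rewritten with sed, the daemon is restarted, and minikube then waits up to 60s for /var/run/crio/crio.sock and for crictl to report a version. A minimal sketch of that kind of socket wait, assuming only the socket path and timeout shown in the log (illustrative, not minikube's actual implementation):

    package main

    import (
    	"fmt"
    	"net"
    	"os"
    	"time"
    )

    // waitForSocket polls until the unix socket accepts connections or the deadline passes.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		conn, err := net.DialTimeout("unix", path, time.Second)
    		if err == nil {
    			conn.Close()
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
    	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("socket is up")
    }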
	I0924 01:04:36.747491   61070 main.go:141] libmachine: (no-preload-674057) Calling .Start
	I0924 01:04:36.747705   61070 main.go:141] libmachine: (no-preload-674057) Ensuring networks are active...
	I0924 01:04:36.748694   61070 main.go:141] libmachine: (no-preload-674057) Ensuring network default is active
	I0924 01:04:36.749079   61070 main.go:141] libmachine: (no-preload-674057) Ensuring network mk-no-preload-674057 is active
	I0924 01:04:36.749656   61070 main.go:141] libmachine: (no-preload-674057) Getting domain xml...
	I0924 01:04:36.750535   61070 main.go:141] libmachine: (no-preload-674057) Creating domain...
	I0924 01:04:38.122450   61070 main.go:141] libmachine: (no-preload-674057) Waiting to get IP...
	I0924 01:04:38.123578   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:38.124107   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:38.124173   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:38.124079   63121 retry.go:31] will retry after 227.552582ms: waiting for machine to come up
	I0924 01:04:38.353724   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:38.354145   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:38.354169   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:38.354102   63121 retry.go:31] will retry after 322.483933ms: waiting for machine to come up
	I0924 01:04:38.678600   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:38.679091   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:38.679120   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:38.679041   63121 retry.go:31] will retry after 301.71366ms: waiting for machine to come up
	I0924 01:04:34.875511   61699 addons.go:510] duration metric: took 1.43884954s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0924 01:04:35.671396   61699 node_ready.go:53] node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:38.169131   61699 node_ready.go:53] node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:35.907681   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:38.408396   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:38.201359   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetIP
	I0924 01:04:38.204699   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:38.205122   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:38.205152   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:38.205408   61989 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0924 01:04:38.209456   61989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:04:38.222128   61989 kubeadm.go:883] updating cluster {Name:old-k8s-version-171598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-171598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 01:04:38.222254   61989 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0924 01:04:38.222300   61989 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 01:04:38.276802   61989 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0924 01:04:38.276864   61989 ssh_runner.go:195] Run: which lz4
	I0924 01:04:38.280989   61989 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 01:04:38.285108   61989 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 01:04:38.285138   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0924 01:04:39.903777   61989 crio.go:462] duration metric: took 1.62282331s to copy over tarball
	I0924 01:04:39.903900   61989 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
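Note: the two lines above show the preload path: /preloaded.tar.lz4 does not yet exist on the VM, so the cached preloaded-images tarball is copied over and unpacked into /var with tar and the lz4 decompressor. A hedged sketch running the same extraction command locally with os/exec (minikube actually issues it over SSH; the lz4 binary must be installed):

    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	// Same extraction command the log shows, run locally for illustration.
    	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		log.Fatalf("extract failed: %v\n%s", err, out)
    	}
    	log.Printf("preload extracted:\n%s", out)
    }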
	I0924 01:04:38.982586   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:38.983239   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:38.983283   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:38.983219   63121 retry.go:31] will retry after 402.217062ms: waiting for machine to come up
	I0924 01:04:39.386903   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:39.387550   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:39.387578   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:39.387483   63121 retry.go:31] will retry after 734.565994ms: waiting for machine to come up
	I0924 01:04:40.123444   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:40.123910   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:40.123940   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:40.123870   63121 retry.go:31] will retry after 704.281941ms: waiting for machine to come up
	I0924 01:04:40.829666   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:40.830217   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:40.830275   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:40.830209   63121 retry.go:31] will retry after 1.068502434s: waiting for machine to come up
	I0924 01:04:41.900192   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:41.900739   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:41.900765   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:41.900691   63121 retry.go:31] will retry after 1.087234201s: waiting for machine to come up
	I0924 01:04:42.989622   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:42.990089   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:42.990117   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:42.990036   63121 retry.go:31] will retry after 1.269273138s: waiting for machine to come up
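Note: the interleaved 61070 lines show libmachine waiting for the no-preload VM to pick up a DHCP lease, retrying with progressively longer delays (227ms, 322ms, ... up to several seconds). A minimal sketch of that retry-with-growing-delay pattern, assuming a caller-supplied lookup function (the lookupIP helper below is hypothetical; in the log it is a libvirt DHCP lease query):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff calls fn until it succeeds, growing the delay between attempts.
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
    	delay := base
    	for i := 0; i < attempts; i++ {
    		if err := fn(); err == nil {
    			return nil
    		}
    		// A little jitter keeps retries from parallel machines from aligning.
    		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay/2)+1)))
    		delay = delay * 3 / 2
    	}
    	return errors.New("gave up waiting")
    }

    // lookupIP stands in for querying the DHCP leases of the libvirt network.
    func lookupIP() string { return "" }

    func main() {
    	var ip string
    	err := retryWithBackoff(10, 250*time.Millisecond, func() error {
    		ip = lookupIP()
    		if ip == "" {
    			return errors.New("no lease yet")
    		}
    		return nil
    	})
    	fmt.Println(ip, err)
    }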
	I0924 01:04:39.168613   61699 node_ready.go:49] node "default-k8s-diff-port-465341" has status "Ready":"True"
	I0924 01:04:39.168638   61699 node_ready.go:38] duration metric: took 5.504799687s for node "default-k8s-diff-port-465341" to be "Ready" ...
	I0924 01:04:39.168650   61699 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:04:39.175830   61699 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xxdh2" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:39.182016   61699 pod_ready.go:93] pod "coredns-7c65d6cfc9-xxdh2" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:39.182040   61699 pod_ready.go:82] duration metric: took 6.182193ms for pod "coredns-7c65d6cfc9-xxdh2" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:39.182052   61699 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:39.188162   61699 pod_ready.go:93] pod "etcd-default-k8s-diff-port-465341" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:39.188191   61699 pod_ready.go:82] duration metric: took 6.130794ms for pod "etcd-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:39.188201   61699 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:39.196197   61699 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-465341" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:39.196225   61699 pod_ready.go:82] duration metric: took 8.016123ms for pod "kube-apiserver-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:39.196238   61699 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:40.703747   61699 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-465341" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:40.703776   61699 pod_ready.go:82] duration metric: took 1.507528182s for pod "kube-controller-manager-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:40.703791   61699 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nf8mp" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:40.771262   61699 pod_ready.go:93] pod "kube-proxy-nf8mp" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:40.771293   61699 pod_ready.go:82] duration metric: took 67.494606ms for pod "kube-proxy-nf8mp" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:40.771307   61699 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:42.778933   61699 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-465341" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:40.908876   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:43.409650   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
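Note: the node_ready and pod_ready lines above (processes 61699 and 61323) repeatedly check system pods in kube-system until their Ready condition becomes True; the metrics-server pods keep reporting "Ready":"False", which is why these checks loop. A small sketch of the condition check itself, using a minimal local type rather than the real client-go objects:

    package main

    import "fmt"

    // Condition mirrors the relevant fields of a Kubernetes pod condition.
    type Condition struct {
    	Type   string
    	Status string
    }

    // isReady reports whether the conditions include Ready=True, which is
    // what the pod_ready checks in the log are waiting for.
    func isReady(conds []Condition) bool {
    	for _, c := range conds {
    		if c.Type == "Ready" {
    			return c.Status == "True"
    		}
    	}
    	return false
    }

    func main() {
    	conds := []Condition{{Type: "PodScheduled", Status: "True"}, {Type: "Ready", Status: "False"}}
    	fmt.Println(isReady(conds)) // false, matching the metrics-server pods above
    }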
	I0924 01:04:42.944929   61989 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.040984911s)
	I0924 01:04:42.944969   61989 crio.go:469] duration metric: took 3.041152253s to extract the tarball
	I0924 01:04:42.944981   61989 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0924 01:04:42.988315   61989 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 01:04:43.036011   61989 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0924 01:04:43.036045   61989 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0924 01:04:43.036151   61989 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:04:43.036194   61989 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0924 01:04:43.036211   61989 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0924 01:04:43.036281   61989 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 01:04:43.036301   61989 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0924 01:04:43.036344   61989 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0924 01:04:43.036310   61989 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0924 01:04:43.036577   61989 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0924 01:04:43.038440   61989 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0924 01:04:43.038458   61989 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0924 01:04:43.038482   61989 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0924 01:04:43.038502   61989 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0924 01:04:43.038554   61989 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0924 01:04:43.038588   61989 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 01:04:43.038600   61989 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0924 01:04:43.038816   61989 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:04:43.306768   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0924 01:04:43.309660   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0924 01:04:43.312684   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 01:04:43.314551   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0924 01:04:43.317719   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0924 01:04:43.326063   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0924 01:04:43.378736   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0924 01:04:43.405508   61989 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0924 01:04:43.405585   61989 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0924 01:04:43.405648   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:43.452908   61989 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0924 01:04:43.452954   61989 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0924 01:04:43.453006   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:43.471293   61989 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0924 01:04:43.471341   61989 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0924 01:04:43.471347   61989 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0924 01:04:43.471370   61989 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0924 01:04:43.471297   61989 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0924 01:04:43.471406   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:43.471421   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:43.471423   61989 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 01:04:43.471462   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:43.494687   61989 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0924 01:04:43.494735   61989 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0924 01:04:43.494782   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:43.508206   61989 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0924 01:04:43.508253   61989 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0924 01:04:43.508278   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0924 01:04:43.508298   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:43.508363   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0924 01:04:43.508419   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0924 01:04:43.508451   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0924 01:04:43.508487   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 01:04:43.508547   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0924 01:04:43.645995   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0924 01:04:43.646039   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0924 01:04:43.646098   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0924 01:04:43.646152   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0924 01:04:43.646261   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0924 01:04:43.646337   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0924 01:04:43.646413   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 01:04:43.817326   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0924 01:04:43.817416   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0924 01:04:43.817381   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0924 01:04:43.817508   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 01:04:43.817449   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0924 01:04:43.817597   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0924 01:04:43.817686   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0924 01:04:43.972782   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0924 01:04:43.972792   61989 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0924 01:04:43.972869   61989 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0924 01:04:43.972838   61989 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0924 01:04:43.972928   61989 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0924 01:04:43.972944   61989 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0924 01:04:43.973027   61989 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0924 01:04:44.008191   61989 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0924 01:04:44.220628   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:04:44.364297   61989 cache_images.go:92] duration metric: took 1.328227964s to LoadCachedImages
	W0924 01:04:44.364505   61989 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0924 01:04:44.364539   61989 kubeadm.go:934] updating node { 192.168.83.3 8443 v1.20.0 crio true true} ...
	I0924 01:04:44.364681   61989 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-171598 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-171598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 01:04:44.364824   61989 ssh_runner.go:195] Run: crio config
	I0924 01:04:44.423360   61989 cni.go:84] Creating CNI manager for ""
	I0924 01:04:44.423382   61989 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:04:44.423393   61989 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 01:04:44.423412   61989 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.3 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-171598 NodeName:old-k8s-version-171598 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0924 01:04:44.423593   61989 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-171598"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 01:04:44.423671   61989 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0924 01:04:44.434069   61989 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 01:04:44.434143   61989 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 01:04:44.443807   61989 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (428 bytes)
	I0924 01:04:44.463473   61989 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 01:04:44.480449   61989 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2117 bytes)
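Note: the generated configuration above sets cgroupDriver: cgroupfs in the KubeletConfiguration, which has to agree with the cgroup_manager = "cgroupfs" written into /etc/crio/crio.conf.d/02-crio.conf earlier in this log; a mismatch between kubelet and runtime cgroup drivers is a common reason a control plane fails to come up. A small illustrative check of that field, assuming the gopkg.in/yaml.v3 package (not part of minikube's own flow):

    package main

    import (
    	"fmt"
    	"log"

    	"gopkg.in/yaml.v3"
    )

    const kubeletCfg = `
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: cgroupfs
    `

    func main() {
    	var doc struct {
    		CgroupDriver string `yaml:"cgroupDriver"`
    	}
    	if err := yaml.Unmarshal([]byte(kubeletCfg), &doc); err != nil {
    		log.Fatal(err)
    	}
    	// Must match the cgroup_manager CRI-O was configured with above.
    	fmt.Println("kubelet cgroup driver:", doc.CgroupDriver)
    }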
	I0924 01:04:44.498520   61989 ssh_runner.go:195] Run: grep 192.168.83.3	control-plane.minikube.internal$ /etc/hosts
	I0924 01:04:44.503034   61989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.3	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:04:44.516699   61989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:04:44.643090   61989 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 01:04:44.660194   61989 certs.go:68] Setting up /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598 for IP: 192.168.83.3
	I0924 01:04:44.660216   61989 certs.go:194] generating shared ca certs ...
	I0924 01:04:44.660234   61989 certs.go:226] acquiring lock for ca certs: {Name:mk91de42c259e96c17f08dd58c70c00821bd0735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:04:44.660454   61989 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key
	I0924 01:04:44.660542   61989 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key
	I0924 01:04:44.660559   61989 certs.go:256] generating profile certs ...
	I0924 01:04:44.660682   61989 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/client.key
	I0924 01:04:44.660755   61989 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/apiserver.key.577554d3
	I0924 01:04:44.660816   61989 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/proxy-client.key
	I0924 01:04:44.660976   61989 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem (1338 bytes)
	W0924 01:04:44.661014   61989 certs.go:480] ignoring /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793_empty.pem, impossibly tiny 0 bytes
	I0924 01:04:44.661026   61989 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem (1675 bytes)
	I0924 01:04:44.661071   61989 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem (1082 bytes)
	I0924 01:04:44.661104   61989 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem (1123 bytes)
	I0924 01:04:44.661133   61989 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem (1679 bytes)
	I0924 01:04:44.661211   61989 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem (1708 bytes)
	I0924 01:04:44.662130   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 01:04:44.710279   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 01:04:44.736824   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 01:04:44.773120   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0924 01:04:44.801137   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0924 01:04:44.844946   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 01:04:44.880871   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 01:04:44.908630   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0924 01:04:44.947148   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 01:04:44.971925   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem --> /usr/share/ca-certificates/14793.pem (1338 bytes)
	I0924 01:04:45.000519   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /usr/share/ca-certificates/147932.pem (1708 bytes)
	I0924 01:04:45.034167   61989 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 01:04:45.054932   61989 ssh_runner.go:195] Run: openssl version
	I0924 01:04:45.062733   61989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14793.pem && ln -fs /usr/share/ca-certificates/14793.pem /etc/ssl/certs/14793.pem"
	I0924 01:04:45.076993   61989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14793.pem
	I0924 01:04:45.082104   61989 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 23:55 /usr/share/ca-certificates/14793.pem
	I0924 01:04:45.082175   61989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14793.pem
	I0924 01:04:45.088219   61989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14793.pem /etc/ssl/certs/51391683.0"
	I0924 01:04:45.099211   61989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147932.pem && ln -fs /usr/share/ca-certificates/147932.pem /etc/ssl/certs/147932.pem"
	I0924 01:04:45.111178   61989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147932.pem
	I0924 01:04:45.116551   61989 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 23:55 /usr/share/ca-certificates/147932.pem
	I0924 01:04:45.116624   61989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147932.pem
	I0924 01:04:45.122353   61989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147932.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 01:04:45.133490   61989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 01:04:45.144123   61989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:45.150437   61989 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:45.150498   61989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:45.157127   61989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 01:04:45.168217   61989 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 01:04:45.172865   61989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 01:04:45.179177   61989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 01:04:45.184987   61989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 01:04:45.190927   61989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 01:04:45.197134   61989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 01:04:45.203170   61989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
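Note: the openssl x509 -checkend 86400 runs above verify that each control-plane certificate will still be valid 24 hours from now. The same check can be expressed with the Go standard library; the path below is taken from the log and only meaningful inside the minikube VM (illustrative sketch, not minikube's code):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in a PEM file expires
    // within the given window (the openssl -checkend equivalent).
    func expiresWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("expires within 24h:", soon)
    }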
	I0924 01:04:45.209550   61989 kubeadm.go:392] StartCluster: {Name:old-k8s-version-171598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-171598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 01:04:45.209721   61989 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 01:04:45.209778   61989 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 01:04:45.247564   61989 cri.go:89] found id: ""
	I0924 01:04:45.247635   61989 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 01:04:45.258171   61989 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 01:04:45.258195   61989 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 01:04:45.258269   61989 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 01:04:45.268247   61989 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 01:04:45.269656   61989 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-171598" does not appear in /home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 01:04:45.270486   61989 kubeconfig.go:62] /home/jenkins/minikube-integration/19696-7623/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-171598" cluster setting kubeconfig missing "old-k8s-version-171598" context setting]
	I0924 01:04:45.271918   61989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/kubeconfig: {Name:mk3b29105ccc2e600682c419fcfd45ce5d22b5fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:04:45.277260   61989 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 01:04:45.287239   61989 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.83.3
	I0924 01:04:45.287271   61989 kubeadm.go:1160] stopping kube-system containers ...
	I0924 01:04:45.287281   61989 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0924 01:04:45.287325   61989 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 01:04:45.327991   61989 cri.go:89] found id: ""
	I0924 01:04:45.328071   61989 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 01:04:45.344693   61989 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 01:04:45.354414   61989 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 01:04:45.354439   61989 kubeadm.go:157] found existing configuration files:
	
	I0924 01:04:45.354499   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 01:04:45.363765   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 01:04:45.363838   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 01:04:45.373569   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 01:04:45.382401   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 01:04:45.382464   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 01:04:45.392710   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 01:04:45.402855   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 01:04:45.402919   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 01:04:45.413651   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 01:04:45.423818   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 01:04:45.423873   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 01:04:45.434138   61989 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
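Note: the segment above greps each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf for the expected https://control-plane.minikube.internal:8443 endpoint and removes any file that does not contain it, so kubeadm can regenerate a consistent set. A sketch of that cleanup logic (illustrative only, not minikube's implementation):

    package main

    import (
    	"log"
    	"os"
    	"strings"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8443"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			// Missing file or wrong endpoint: remove it so kubeadm regenerates it.
    			if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
    				log.Printf("could not remove %s: %v", f, rmErr)
    			}
    			continue
    		}
    		log.Printf("%s already points at %s", f, endpoint)
    	}
    }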
	I0924 01:04:45.444119   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:45.582409   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:44.261681   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:44.262330   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:44.262360   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:44.262274   63121 retry.go:31] will retry after 1.755704993s: waiting for machine to come up
	I0924 01:04:46.019761   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:46.020213   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:46.020242   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:46.020155   63121 retry.go:31] will retry after 2.038509067s: waiting for machine to come up
	I0924 01:04:48.060649   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:48.061170   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:48.061201   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:48.061122   63121 retry.go:31] will retry after 2.834284151s: waiting for machine to come up
	I0924 01:04:45.021172   61699 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-465341" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:45.021200   61699 pod_ready.go:82] duration metric: took 4.249884358s for pod "kube-scheduler-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:45.021213   61699 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:47.028860   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:45.908530   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:48.407714   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:46.245754   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:46.511218   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:46.608877   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:46.722521   61989 api_server.go:52] waiting for apiserver process to appear ...
	I0924 01:04:46.722607   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:47.222945   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:47.723437   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:48.223704   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:48.723517   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:49.223744   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:49.722691   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:50.222927   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:50.723331   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:50.897541   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:50.898047   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:50.898093   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:50.898018   63121 retry.go:31] will retry after 4.166792416s: waiting for machine to come up
	I0924 01:04:49.530215   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:52.027812   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:50.907425   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:52.907568   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:54.908623   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:51.223525   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:51.722715   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:52.223281   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:52.723378   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:53.222798   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:53.722883   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:54.223279   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:54.723155   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:55.222994   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:55.723628   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
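Note: after the kubeadm init phases, the log polls roughly every 500ms with `sudo pgrep -xnf kube-apiserver.*minikube.*` until an apiserver process appears. A hedged sketch of that fixed-interval command poll with a deadline (run locally here; minikube issues the command over SSH):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"time"
    )

    // waitForProcess polls pgrep until the pattern matches or the deadline passes.
    func waitForProcess(pattern string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
    			return nil // pgrep exits 0 when at least one process matched
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("no process matching %q after %s", pattern, timeout)
    }

    func main() {
    	if err := waitForProcess("kube-apiserver.*minikube.*", time.Minute); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("kube-apiserver is running")
    }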
	I0924 01:04:55.068642   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.069305   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has current primary IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.069330   61070 main.go:141] libmachine: (no-preload-674057) Found IP for machine: 192.168.50.161
	I0924 01:04:55.069339   61070 main.go:141] libmachine: (no-preload-674057) Reserving static IP address...
	I0924 01:04:55.070035   61070 main.go:141] libmachine: (no-preload-674057) Reserved static IP address: 192.168.50.161
	I0924 01:04:55.070065   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "no-preload-674057", mac: "52:54:00:01:7a:1a", ip: "192.168.50.161"} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.070073   61070 main.go:141] libmachine: (no-preload-674057) Waiting for SSH to be available...
	I0924 01:04:55.070090   61070 main.go:141] libmachine: (no-preload-674057) DBG | skip adding static IP to network mk-no-preload-674057 - found existing host DHCP lease matching {name: "no-preload-674057", mac: "52:54:00:01:7a:1a", ip: "192.168.50.161"}
	I0924 01:04:55.070095   61070 main.go:141] libmachine: (no-preload-674057) DBG | Getting to WaitForSSH function...
	I0924 01:04:55.072715   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.073106   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.073140   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.073351   61070 main.go:141] libmachine: (no-preload-674057) DBG | Using SSH client type: external
	I0924 01:04:55.073379   61070 main.go:141] libmachine: (no-preload-674057) DBG | Using SSH private key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/no-preload-674057/id_rsa (-rw-------)
	I0924 01:04:55.073405   61070 main.go:141] libmachine: (no-preload-674057) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.161 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19696-7623/.minikube/machines/no-preload-674057/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 01:04:55.073444   61070 main.go:141] libmachine: (no-preload-674057) DBG | About to run SSH command:
	I0924 01:04:55.073462   61070 main.go:141] libmachine: (no-preload-674057) DBG | exit 0
	I0924 01:04:55.200585   61070 main.go:141] libmachine: (no-preload-674057) DBG | SSH cmd err, output: <nil>: 
	I0924 01:04:55.200980   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetConfigRaw
	I0924 01:04:55.201650   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetIP
	I0924 01:04:55.204919   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.205340   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.205360   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.205638   61070 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/config.json ...
	I0924 01:04:55.205881   61070 machine.go:93] provisionDockerMachine start ...
	I0924 01:04:55.205903   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:04:55.206124   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:55.208572   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.209012   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.209037   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.209218   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:04:55.209499   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:55.209693   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:55.209832   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:04:55.210010   61070 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:55.210249   61070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0924 01:04:55.210263   61070 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 01:04:55.317027   61070 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 01:04:55.317067   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetMachineName
	I0924 01:04:55.317403   61070 buildroot.go:166] provisioning hostname "no-preload-674057"
	I0924 01:04:55.317441   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetMachineName
	I0924 01:04:55.317700   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:55.320886   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.321301   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.321330   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.321443   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:04:55.321643   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:55.321853   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:55.322010   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:04:55.322169   61070 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:55.322343   61070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0924 01:04:55.322360   61070 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-674057 && echo "no-preload-674057" | sudo tee /etc/hostname
	I0924 01:04:55.439098   61070 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-674057
	
	I0924 01:04:55.439134   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:55.441909   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.442212   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.442256   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.442430   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:04:55.442667   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:55.442890   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:55.443078   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:04:55.443301   61070 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:55.443460   61070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0924 01:04:55.443474   61070 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-674057' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-674057/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-674057' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 01:04:55.558172   61070 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 01:04:55.558204   61070 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19696-7623/.minikube CaCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19696-7623/.minikube}
	I0924 01:04:55.558225   61070 buildroot.go:174] setting up certificates
	I0924 01:04:55.558236   61070 provision.go:84] configureAuth start
	I0924 01:04:55.558248   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetMachineName
	I0924 01:04:55.558574   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetIP
	I0924 01:04:55.561503   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.561891   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.561917   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.562089   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:55.564426   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.564800   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.564825   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.564958   61070 provision.go:143] copyHostCerts
	I0924 01:04:55.565009   61070 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem, removing ...
	I0924 01:04:55.565018   61070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0924 01:04:55.565074   61070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem (1082 bytes)
	I0924 01:04:55.565167   61070 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem, removing ...
	I0924 01:04:55.565175   61070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0924 01:04:55.565194   61070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem (1123 bytes)
	I0924 01:04:55.565253   61070 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem, removing ...
	I0924 01:04:55.565263   61070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0924 01:04:55.565285   61070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem (1679 bytes)
	I0924 01:04:55.565372   61070 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem org=jenkins.no-preload-674057 san=[127.0.0.1 192.168.50.161 localhost minikube no-preload-674057]
	I0924 01:04:55.649690   61070 provision.go:177] copyRemoteCerts
	I0924 01:04:55.649750   61070 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 01:04:55.649774   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:55.652790   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.653249   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.653278   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.653567   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:04:55.653772   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:55.653936   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:04:55.654059   61070 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/no-preload-674057/id_rsa Username:docker}
	I0924 01:04:55.738522   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0924 01:04:55.764045   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0924 01:04:55.788225   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0924 01:04:55.811207   61070 provision.go:87] duration metric: took 252.958643ms to configureAuth
	I0924 01:04:55.811233   61070 buildroot.go:189] setting minikube options for container-runtime
	I0924 01:04:55.811415   61070 config.go:182] Loaded profile config "no-preload-674057": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 01:04:55.811503   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:55.814921   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.815366   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.815400   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.815597   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:04:55.815826   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:55.816039   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:55.816212   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:04:55.816496   61070 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:55.816740   61070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0924 01:04:55.816756   61070 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 01:04:56.045600   61070 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 01:04:56.045632   61070 machine.go:96] duration metric: took 839.736907ms to provisionDockerMachine
	I0924 01:04:56.045646   61070 start.go:293] postStartSetup for "no-preload-674057" (driver="kvm2")
	I0924 01:04:56.045660   61070 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 01:04:56.045679   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:04:56.045997   61070 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 01:04:56.046027   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:56.049081   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.049522   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:56.049559   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.049743   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:04:56.049960   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:56.050105   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:04:56.050245   61070 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/no-preload-674057/id_rsa Username:docker}
	I0924 01:04:56.136652   61070 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 01:04:56.140894   61070 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 01:04:56.140920   61070 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/addons for local assets ...
	I0924 01:04:56.140987   61070 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/files for local assets ...
	I0924 01:04:56.141071   61070 filesync.go:149] local asset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> 147932.pem in /etc/ssl/certs
	I0924 01:04:56.141161   61070 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 01:04:56.151170   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /etc/ssl/certs/147932.pem (1708 bytes)
	I0924 01:04:56.179268   61070 start.go:296] duration metric: took 133.605527ms for postStartSetup
	I0924 01:04:56.179318   61070 fix.go:56] duration metric: took 19.461975001s for fixHost
	I0924 01:04:56.179344   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:56.182567   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.182902   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:56.182927   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.183091   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:04:56.183320   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:56.183562   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:56.183720   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:04:56.183865   61070 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:56.184036   61070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0924 01:04:56.184045   61070 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 01:04:56.289079   61070 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727139896.261476318
	
	I0924 01:04:56.289113   61070 fix.go:216] guest clock: 1727139896.261476318
	I0924 01:04:56.289121   61070 fix.go:229] Guest: 2024-09-24 01:04:56.261476318 +0000 UTC Remote: 2024-09-24 01:04:56.17932382 +0000 UTC m=+357.500342999 (delta=82.152498ms)
	I0924 01:04:56.289141   61070 fix.go:200] guest clock delta is within tolerance: 82.152498ms
	I0924 01:04:56.289156   61070 start.go:83] releasing machines lock for "no-preload-674057", held for 19.57184993s
	I0924 01:04:56.289175   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:04:56.289441   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetIP
	I0924 01:04:56.292799   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.293122   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:56.293148   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.293327   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:04:56.293832   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:04:56.293990   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:04:56.294073   61070 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 01:04:56.294108   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:56.294271   61070 ssh_runner.go:195] Run: cat /version.json
	I0924 01:04:56.294299   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:56.296962   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.297113   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.297300   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:56.297325   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.297473   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:56.297504   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.297526   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:04:56.297665   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:04:56.297737   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:56.297858   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:56.297926   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:04:56.297968   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:04:56.298044   61070 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/no-preload-674057/id_rsa Username:docker}
	I0924 01:04:56.298139   61070 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/no-preload-674057/id_rsa Username:docker}
	I0924 01:04:56.373014   61070 ssh_runner.go:195] Run: systemctl --version
	I0924 01:04:56.412487   61070 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 01:04:56.558755   61070 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 01:04:56.565187   61070 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 01:04:56.565245   61070 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 01:04:56.582073   61070 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 01:04:56.582102   61070 start.go:495] detecting cgroup driver to use...
	I0924 01:04:56.582167   61070 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 01:04:56.597553   61070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 01:04:56.612515   61070 docker.go:217] disabling cri-docker service (if available) ...
	I0924 01:04:56.612564   61070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 01:04:56.627596   61070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 01:04:56.641619   61070 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 01:04:56.762636   61070 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 01:04:56.917742   61070 docker.go:233] disabling docker service ...
	I0924 01:04:56.917821   61070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 01:04:56.934585   61070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 01:04:56.949194   61070 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 01:04:57.085465   61070 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 01:04:57.230529   61070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 01:04:57.245369   61070 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 01:04:57.265137   61070 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 01:04:57.265196   61070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:57.276878   61070 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 01:04:57.276936   61070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:57.288934   61070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:57.300690   61070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:57.312392   61070 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 01:04:57.324491   61070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:57.335619   61070 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:57.352868   61070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:57.363280   61070 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 01:04:57.372811   61070 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 01:04:57.372866   61070 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 01:04:57.385797   61070 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 01:04:57.395936   61070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:04:57.532086   61070 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 01:04:57.628275   61070 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 01:04:57.628370   61070 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 01:04:57.633679   61070 start.go:563] Will wait 60s for crictl version
	I0924 01:04:57.633761   61070 ssh_runner.go:195] Run: which crictl
	I0924 01:04:57.637574   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 01:04:57.679667   61070 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 01:04:57.679756   61070 ssh_runner.go:195] Run: crio --version
	I0924 01:04:57.707710   61070 ssh_runner.go:195] Run: crio --version
	I0924 01:04:57.738651   61070 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 01:04:57.740120   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetIP
	I0924 01:04:57.743379   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:57.743783   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:57.743814   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:57.744048   61070 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0924 01:04:57.748516   61070 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:04:57.762723   61070 kubeadm.go:883] updating cluster {Name:no-preload-674057 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-674057 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.161 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 01:04:57.762864   61070 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 01:04:57.762906   61070 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 01:04:57.798232   61070 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0924 01:04:57.798260   61070 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0924 01:04:57.798334   61070 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:04:57.798357   61070 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0924 01:04:57.798377   61070 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0924 01:04:57.798340   61070 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0924 01:04:57.798397   61070 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 01:04:57.798381   61070 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0924 01:04:57.798491   61070 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0924 01:04:57.798491   61070 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0924 01:04:57.799811   61070 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:04:57.799819   61070 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0924 01:04:57.799826   61070 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0924 01:04:57.799811   61070 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 01:04:57.799840   61070 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0924 01:04:57.799893   61070 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0924 01:04:57.799902   61070 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0924 01:04:57.799903   61070 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0924 01:04:58.027261   61070 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0924 01:04:58.028437   61070 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0924 01:04:58.051940   61070 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0924 01:04:58.082860   61070 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0924 01:04:58.088073   61070 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0924 01:04:58.088121   61070 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0924 01:04:58.088190   61070 ssh_runner.go:195] Run: which crictl
	I0924 01:04:58.095081   61070 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 01:04:58.098388   61070 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0924 01:04:58.152389   61070 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0924 01:04:58.190893   61070 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0924 01:04:58.190920   61070 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0924 01:04:58.190934   61070 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0924 01:04:58.190944   61070 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0924 01:04:58.190984   61070 ssh_runner.go:195] Run: which crictl
	I0924 01:04:58.191029   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0924 01:04:58.190988   61070 ssh_runner.go:195] Run: which crictl
	I0924 01:04:58.191080   61070 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0924 01:04:58.191109   61070 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 01:04:58.191134   61070 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0924 01:04:58.191144   61070 ssh_runner.go:195] Run: which crictl
	I0924 01:04:58.191157   61070 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0924 01:04:58.191185   61070 ssh_runner.go:195] Run: which crictl
	I0924 01:04:58.219642   61070 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0924 01:04:58.219689   61070 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0924 01:04:58.219703   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 01:04:58.219729   61070 ssh_runner.go:195] Run: which crictl
	I0924 01:04:58.219741   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0924 01:04:58.219745   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0924 01:04:58.250341   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0924 01:04:58.250394   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0924 01:04:58.320188   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0924 01:04:58.320222   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 01:04:58.320308   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0924 01:04:58.320394   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0924 01:04:58.383126   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0924 01:04:58.383327   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0924 01:04:58.453833   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 01:04:58.453918   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0924 01:04:58.453878   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0924 01:04:58.453923   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0924 01:04:58.499994   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0924 01:04:58.500027   61070 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0924 01:04:58.500119   61070 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0924 01:04:58.583372   61070 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0924 01:04:58.583491   61070 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0924 01:04:58.586213   61070 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0924 01:04:58.586281   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0924 01:04:58.586325   61070 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0924 01:04:58.586328   61070 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0924 01:04:58.586405   61070 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0924 01:04:58.616022   61070 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0924 01:04:58.616061   61070 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0924 01:04:58.616082   61070 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0924 01:04:58.616118   61070 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0924 01:04:58.616131   61070 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0924 01:04:58.616180   61070 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0924 01:04:58.616128   61070 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0924 01:04:58.647507   61070 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0924 01:04:58.647576   61070 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0924 01:04:58.647620   61070 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0924 01:04:58.647659   61070 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0924 01:04:54.527399   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:57.028355   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:57.407381   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:59.908596   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:56.222908   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:56.722701   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:57.222762   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:57.722814   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:58.222671   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:58.722746   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:59.222961   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:59.723335   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:00.223393   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:00.722739   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:59.003431   61070 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:05:00.815541   61070 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.199297236s)
	I0924 01:05:00.815566   61070 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (2.167859705s)
	I0924 01:05:00.815579   61070 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0924 01:05:00.815599   61070 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0924 01:05:00.815619   61070 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0924 01:05:00.815625   61070 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.812143064s)
	I0924 01:05:00.815674   61070 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0924 01:05:00.815687   61070 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0924 01:05:00.815710   61070 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:05:00.815750   61070 ssh_runner.go:195] Run: which crictl
	I0924 01:05:02.782328   61070 ssh_runner.go:235] Completed: which crictl: (1.966554191s)
	I0924 01:05:02.782392   61070 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.966688239s)
	I0924 01:05:02.782421   61070 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0924 01:05:02.782445   61070 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0924 01:05:02.782497   61070 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0924 01:05:02.782404   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:04:59.529167   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:01.531324   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:04.028305   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:02.407051   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:04.475255   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:01.222765   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:01.722729   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:02.223407   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:02.722799   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:03.223381   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:03.723427   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:04.223157   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:04.723069   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:05.223400   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:05.723739   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:04.773493   61070 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.990910382s)
	I0924 01:05:04.773540   61070 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.99101415s)
	I0924 01:05:04.773560   61070 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0924 01:05:04.773577   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:05:04.773584   61070 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0924 01:05:04.773615   61070 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0924 01:05:08.061466   61070 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.287832238s)
	I0924 01:05:08.061499   61070 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0924 01:05:08.061510   61070 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.287911454s)
	I0924 01:05:08.061595   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:05:08.061520   61070 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0924 01:05:08.061690   61070 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0924 01:05:06.029255   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:08.527617   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:06.907268   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:08.907464   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:06.223395   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:06.723345   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:07.222965   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:07.722795   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:08.222933   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:08.723687   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:09.223526   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:09.723684   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:10.223275   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:10.723534   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:10.041517   61070 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.979809714s)
	I0924 01:05:10.041549   61070 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0924 01:05:10.041577   61070 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.979956931s)
	I0924 01:05:10.041625   61070 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0924 01:05:10.041582   61070 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0924 01:05:10.041714   61070 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0924 01:05:10.041727   61070 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0924 01:05:12.005649   61070 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.963906504s)
	I0924 01:05:12.005689   61070 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0924 01:05:12.005696   61070 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.963951454s)
	I0924 01:05:12.005720   61070 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0924 01:05:12.005727   61070 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0924 01:05:12.005768   61070 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0924 01:05:12.960728   61070 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0924 01:05:12.960771   61070 cache_images.go:123] Successfully loaded all cached images
	I0924 01:05:12.960778   61070 cache_images.go:92] duration metric: took 15.162496206s to LoadCachedImages
	I0924 01:05:12.960791   61070 kubeadm.go:934] updating node { 192.168.50.161 8443 v1.31.1 crio true true} ...
	I0924 01:05:12.960931   61070 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-674057 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.161
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-674057 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 01:05:12.961013   61070 ssh_runner.go:195] Run: crio config
	I0924 01:05:13.006511   61070 cni.go:84] Creating CNI manager for ""
	I0924 01:05:13.006535   61070 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:05:13.006551   61070 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 01:05:13.006579   61070 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.161 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-674057 NodeName:no-preload-674057 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.161"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.161 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 01:05:13.006729   61070 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.161
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-674057"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.161
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.161"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 01:05:13.006799   61070 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 01:05:13.017598   61070 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 01:05:13.017672   61070 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 01:05:13.027414   61070 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0924 01:05:13.044688   61070 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 01:05:13.061646   61070 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
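The kubeadm.yaml.new copied to the node here is the multi-document config rendered above, carrying four documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A minimal Go sketch, assuming gopkg.in/yaml.v3 and a local copy of the file (the filename below is hypothetical), that walks those documents and prints each kind:

    // kinds.go: list the document kinds in a multi-document kubeadm YAML
    // like the one rendered above. Illustrative sketch, not minikube code;
    // only the apiVersion/kind fields are inspected.
    package main

    import (
        "errors"
        "fmt"
        "io"
        "log"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("kubeadm.yaml") // hypothetical local copy of /var/tmp/minikube/kubeadm.yaml
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err != nil {
                if errors.Is(err, io.EOF) {
                    break // no more documents in the stream
                }
                log.Fatal(err)
            }
            fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
        }
    }

On the config shown above this would print InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, one per line, with their respective API versions.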
	I0924 01:05:13.079552   61070 ssh_runner.go:195] Run: grep 192.168.50.161	control-plane.minikube.internal$ /etc/hosts
	I0924 01:05:13.083172   61070 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.161	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:05:13.095232   61070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:05:13.207184   61070 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 01:05:13.222851   61070 certs.go:68] Setting up /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057 for IP: 192.168.50.161
	I0924 01:05:13.222880   61070 certs.go:194] generating shared ca certs ...
	I0924 01:05:13.222901   61070 certs.go:226] acquiring lock for ca certs: {Name:mk91de42c259e96c17f08dd58c70c00821bd0735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:05:13.223084   61070 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key
	I0924 01:05:13.223184   61070 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key
	I0924 01:05:13.223195   61070 certs.go:256] generating profile certs ...
	I0924 01:05:13.223314   61070 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/client.key
	I0924 01:05:13.223394   61070 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/apiserver.key.8fa8fb95
	I0924 01:05:13.223445   61070 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/proxy-client.key
	I0924 01:05:13.223614   61070 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem (1338 bytes)
	W0924 01:05:13.223654   61070 certs.go:480] ignoring /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793_empty.pem, impossibly tiny 0 bytes
	I0924 01:05:13.223710   61070 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem (1675 bytes)
	I0924 01:05:13.223756   61070 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem (1082 bytes)
	I0924 01:05:13.223785   61070 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem (1123 bytes)
	I0924 01:05:13.223818   61070 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem (1679 bytes)
	I0924 01:05:13.223862   61070 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem (1708 bytes)
	I0924 01:05:13.224549   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 01:05:13.273224   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 01:05:13.311069   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 01:05:13.342314   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0924 01:05:13.369345   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0924 01:05:13.395466   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 01:05:13.424307   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 01:05:13.448531   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0924 01:05:13.472491   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /usr/share/ca-certificates/147932.pem (1708 bytes)
	I0924 01:05:13.496060   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 01:05:13.521182   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem --> /usr/share/ca-certificates/14793.pem (1338 bytes)
	I0924 01:05:13.548194   61070 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 01:05:13.566423   61070 ssh_runner.go:195] Run: openssl version
	I0924 01:05:13.572605   61070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 01:05:13.583991   61070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:05:13.588705   61070 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:05:13.588771   61070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:05:13.594828   61070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 01:05:13.606168   61070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14793.pem && ln -fs /usr/share/ca-certificates/14793.pem /etc/ssl/certs/14793.pem"
	I0924 01:05:13.617723   61070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14793.pem
	I0924 01:05:13.622697   61070 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 23:55 /usr/share/ca-certificates/14793.pem
	I0924 01:05:13.622762   61070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14793.pem
	I0924 01:05:13.628486   61070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14793.pem /etc/ssl/certs/51391683.0"
	I0924 01:05:13.639176   61070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147932.pem && ln -fs /usr/share/ca-certificates/147932.pem /etc/ssl/certs/147932.pem"
	I0924 01:05:13.650161   61070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147932.pem
	I0924 01:05:13.654546   61070 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 23:55 /usr/share/ca-certificates/147932.pem
	I0924 01:05:13.654625   61070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147932.pem
	I0924 01:05:13.660382   61070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147932.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 01:05:13.671487   61070 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 01:05:13.676226   61070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 01:05:13.682591   61070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 01:05:13.688492   61070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 01:05:13.694726   61070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 01:05:13.700432   61070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 01:05:13.706080   61070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
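The six openssl x509 -checkend 86400 runs above each verify that a certificate is still valid 24 hours from now. A minimal Go sketch of the same check for one of those files, using only the standard library (the path and the 24h window are taken from the commands in the log; this is not minikube's own implementation):

    // checkend.go: approximate "openssl x509 -noout -in <cert> -checkend 86400"
    // for a single PEM certificate using crypto/x509.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // Same window as -checkend 86400: fail if the cert expires within 24h.
        if cert.NotAfter.Before(time.Now().Add(24 * time.Hour)) {
            fmt.Println("certificate will expire within 24h:", cert.NotAfter)
            os.Exit(1)
        }
        fmt.Println("certificate is valid for at least another 24h")
    }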
	I0924 01:05:13.712226   61070 kubeadm.go:392] StartCluster: {Name:no-preload-674057 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-674057 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.161 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 01:05:13.712323   61070 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 01:05:13.712421   61070 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 01:05:11.028779   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:13.527996   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:10.908227   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:13.408515   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:11.223272   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:11.723442   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:12.223301   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:12.723151   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:13.223174   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:13.722780   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:14.222777   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:14.722987   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:15.223654   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:15.723449   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:13.757518   61070 cri.go:89] found id: ""
	I0924 01:05:13.757597   61070 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 01:05:13.768318   61070 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 01:05:13.768367   61070 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 01:05:13.768416   61070 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 01:05:13.778918   61070 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 01:05:13.780385   61070 kubeconfig.go:125] found "no-preload-674057" server: "https://192.168.50.161:8443"
	I0924 01:05:13.783392   61070 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 01:05:13.794016   61070 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.161
	I0924 01:05:13.794050   61070 kubeadm.go:1160] stopping kube-system containers ...
	I0924 01:05:13.794085   61070 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0924 01:05:13.794150   61070 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 01:05:13.833511   61070 cri.go:89] found id: ""
	I0924 01:05:13.833596   61070 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 01:05:13.851608   61070 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 01:05:13.861469   61070 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 01:05:13.861510   61070 kubeadm.go:157] found existing configuration files:
	
	I0924 01:05:13.861552   61070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 01:05:13.870700   61070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 01:05:13.870770   61070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 01:05:13.880613   61070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 01:05:13.890336   61070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 01:05:13.890404   61070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 01:05:13.900172   61070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 01:05:13.910408   61070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 01:05:13.910475   61070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 01:05:13.919980   61070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 01:05:13.929398   61070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 01:05:13.929495   61070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 01:05:13.938894   61070 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 01:05:13.948749   61070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:05:14.056463   61070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:05:15.345268   61070 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.288763261s)
	I0924 01:05:15.345317   61070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:05:15.555986   61070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:05:15.626986   61070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:05:15.697665   61070 api_server.go:52] waiting for apiserver process to appear ...
	I0924 01:05:15.697761   61070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:16.198410   61070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:16.698860   61070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:16.715727   61070 api_server.go:72] duration metric: took 1.018058771s to wait for apiserver process to appear ...
	I0924 01:05:16.715756   61070 api_server.go:88] waiting for apiserver healthz status ...
	I0924 01:05:16.715779   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
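From here api_server.go polls the /healthz endpoint until it answers, logging each failed attempt as "stopped: ..." (context deadline exceeded, connection refused) and eventually receiving the 403 and 500 bodies seen further down as the apiserver comes up. A minimal Go sketch of such a polling loop; the address, retry interval, attempt cap, and InsecureSkipVerify are assumptions to keep the sketch self-contained, and minikube's actual client setup in api_server.go is not reproduced here:

    // healthz_poll.go: poll https://<node>:8443/healthz with a short per-request
    // timeout, retrying until the endpoint returns 200 or attempts run out.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second, // comparable to the Client.Timeout errors in the log
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
            },
        }
        const url = "https://192.168.50.161:8443/healthz"
        for attempt := 1; attempt <= 60; attempt++ {
            resp, err := client.Get(url)
            if err != nil {
                fmt.Println("stopped:", err)
                time.Sleep(500 * time.Millisecond)
                continue
            }
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            fmt.Printf("%s returned %d\n", url, resp.StatusCode)
            if resp.StatusCode == http.StatusOK {
                return
            }
            fmt.Println(string(body)) // e.g. the [+]/[-] poststarthook report on a 500
            time.Sleep(500 * time.Millisecond)
        }
    }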
	I0924 01:05:15.528157   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:17.528680   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:15.906930   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:17.907223   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:16.223623   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:16.723625   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:17.223541   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:17.722702   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:18.222919   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:18.722982   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:19.222978   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:19.723547   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:20.223112   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:20.723562   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:21.716809   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 01:05:21.716852   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:19.528769   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:22.028695   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:20.406693   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:22.407036   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:24.906735   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:21.223058   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:21.722680   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:22.223693   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:22.722716   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:23.223387   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:23.722910   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:24.223608   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:24.723144   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:25.223442   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:25.723025   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:26.717768   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 01:05:26.717811   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:24.527568   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:26.527806   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:29.028455   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:27.406994   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:29.906590   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:26.222782   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:26.723271   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:27.223163   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:27.723283   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:28.222782   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:28.723174   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:29.222803   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:29.723029   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:30.223679   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:30.723058   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:31.718277   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 01:05:31.718317   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:31.028690   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:33.527675   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:31.906723   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:34.406306   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:31.223465   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:31.723438   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:32.223673   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:32.722674   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:33.223289   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:33.723651   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:34.223014   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:34.723518   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:35.222860   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:35.723642   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:36.718676   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 01:05:36.718716   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:37.146737   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": read tcp 192.168.50.1:59880->192.168.50.161:8443: read: connection reset by peer
	I0924 01:05:37.215865   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:37.216506   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": dial tcp 192.168.50.161:8443: connect: connection refused
	I0924 01:05:37.716052   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:37.716731   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": dial tcp 192.168.50.161:8443: connect: connection refused
	I0924 01:05:38.216296   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:36.028537   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:38.032544   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:36.406928   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:38.407201   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:36.222680   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:36.723015   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:37.222736   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:37.723185   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:38.223070   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:38.723237   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:39.223640   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:39.723622   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:40.222705   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:40.722909   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:43.217518   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 01:05:43.217557   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:40.527577   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:43.027715   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:40.906522   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:42.906906   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:44.907623   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:41.223105   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:41.723166   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:42.223286   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:42.723048   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:43.223278   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:43.723301   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:44.222712   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:44.723191   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:45.223720   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:45.723044   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:48.217915   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 01:05:48.217982   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:45.028780   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:47.028883   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:47.406680   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:49.907776   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:46.223270   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:46.722902   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:05:46.722980   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:05:46.781519   61989 cri.go:89] found id: ""
	I0924 01:05:46.781551   61989 logs.go:276] 0 containers: []
	W0924 01:05:46.781565   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:05:46.781574   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:05:46.781630   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:05:46.815990   61989 cri.go:89] found id: ""
	I0924 01:05:46.816021   61989 logs.go:276] 0 containers: []
	W0924 01:05:46.816030   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:05:46.816035   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:05:46.816082   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:05:46.848951   61989 cri.go:89] found id: ""
	I0924 01:05:46.848980   61989 logs.go:276] 0 containers: []
	W0924 01:05:46.848989   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:05:46.848995   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:05:46.849062   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:05:46.880731   61989 cri.go:89] found id: ""
	I0924 01:05:46.880756   61989 logs.go:276] 0 containers: []
	W0924 01:05:46.880764   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:05:46.880770   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:05:46.880832   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:05:46.915975   61989 cri.go:89] found id: ""
	I0924 01:05:46.916004   61989 logs.go:276] 0 containers: []
	W0924 01:05:46.916014   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:05:46.916036   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:05:46.916105   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:05:46.954124   61989 cri.go:89] found id: ""
	I0924 01:05:46.954154   61989 logs.go:276] 0 containers: []
	W0924 01:05:46.954162   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:05:46.954168   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:05:46.954233   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:05:46.990454   61989 cri.go:89] found id: ""
	I0924 01:05:46.990489   61989 logs.go:276] 0 containers: []
	W0924 01:05:46.990498   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:05:46.990504   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:05:46.990573   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:05:47.024099   61989 cri.go:89] found id: ""
	I0924 01:05:47.024137   61989 logs.go:276] 0 containers: []
	W0924 01:05:47.024150   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:05:47.024161   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:05:47.024176   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:05:47.153050   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:05:47.153076   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:05:47.153109   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:05:47.223472   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:05:47.223511   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:05:47.267699   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:05:47.267729   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:05:47.314741   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:05:47.314773   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
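Each retry cycle for the v1.20.0 cluster being restarted here (log prefix 61989) does the same sweep: crictl ps -a --quiet --name=<component> for every control-plane component, an empty ID list reported as "No container was found", then log gathering from kubelet, dmesg, describe nodes, CRI-O, and container status. A minimal Go sketch of that discovery sweep; invoking crictl directly through sudo (rather than via minikube's ssh_runner) and the component list are assumptions for illustration:

    // cri_list.go: list CRI containers per component and report components
    // with no matching container, mirroring the cri.go/logs.go lines above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
        }
        for _, name := range components {
            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
            if err != nil {
                fmt.Printf("%s: crictl failed: %v\n", name, err)
                continue
            }
            ids := strings.Fields(strings.TrimSpace(string(out)))
            if len(ids) == 0 {
                fmt.Printf("no container was found matching %q\n", name)
                continue
            }
            fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
        }
    }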
	I0924 01:05:49.828972   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:49.842301   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:05:49.842378   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:05:49.874632   61989 cri.go:89] found id: ""
	I0924 01:05:49.874659   61989 logs.go:276] 0 containers: []
	W0924 01:05:49.874669   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:05:49.874676   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:05:49.874734   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:05:49.912500   61989 cri.go:89] found id: ""
	I0924 01:05:49.912524   61989 logs.go:276] 0 containers: []
	W0924 01:05:49.912532   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:05:49.912543   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:05:49.912592   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:05:49.947297   61989 cri.go:89] found id: ""
	I0924 01:05:49.947320   61989 logs.go:276] 0 containers: []
	W0924 01:05:49.947328   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:05:49.947334   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:05:49.947395   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:05:49.983863   61989 cri.go:89] found id: ""
	I0924 01:05:49.983892   61989 logs.go:276] 0 containers: []
	W0924 01:05:49.983905   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:05:49.983915   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:05:49.983977   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:05:50.022997   61989 cri.go:89] found id: ""
	I0924 01:05:50.023031   61989 logs.go:276] 0 containers: []
	W0924 01:05:50.023044   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:05:50.023053   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:05:50.023109   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:05:50.057829   61989 cri.go:89] found id: ""
	I0924 01:05:50.057863   61989 logs.go:276] 0 containers: []
	W0924 01:05:50.057875   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:05:50.057882   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:05:50.057929   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:05:50.114599   61989 cri.go:89] found id: ""
	I0924 01:05:50.114620   61989 logs.go:276] 0 containers: []
	W0924 01:05:50.114628   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:05:50.114633   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:05:50.114677   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:05:50.147294   61989 cri.go:89] found id: ""
	I0924 01:05:50.147326   61989 logs.go:276] 0 containers: []
	W0924 01:05:50.147334   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:05:50.147345   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:05:50.147378   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:05:50.198362   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:05:50.198402   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:05:50.212381   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:05:50.212415   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:05:50.286216   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:05:50.286261   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:05:50.286279   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:05:50.366794   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:05:50.366827   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:05:53.218617   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 01:05:53.218653   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:49.527980   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:52.027425   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:54.027780   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:51.908078   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:54.406891   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:52.908167   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:52.922279   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:05:52.922353   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:05:52.956677   61989 cri.go:89] found id: ""
	I0924 01:05:52.956708   61989 logs.go:276] 0 containers: []
	W0924 01:05:52.956720   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:05:52.956727   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:05:52.956778   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:05:52.990933   61989 cri.go:89] found id: ""
	I0924 01:05:52.990956   61989 logs.go:276] 0 containers: []
	W0924 01:05:52.990964   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:05:52.990970   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:05:52.991019   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:05:53.025729   61989 cri.go:89] found id: ""
	I0924 01:05:53.025758   61989 logs.go:276] 0 containers: []
	W0924 01:05:53.025768   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:05:53.025778   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:05:53.025838   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:05:53.060238   61989 cri.go:89] found id: ""
	I0924 01:05:53.060269   61989 logs.go:276] 0 containers: []
	W0924 01:05:53.060279   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:05:53.060287   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:05:53.060366   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:05:53.094166   61989 cri.go:89] found id: ""
	I0924 01:05:53.094200   61989 logs.go:276] 0 containers: []
	W0924 01:05:53.094212   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:05:53.094220   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:05:53.094289   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:05:53.129857   61989 cri.go:89] found id: ""
	I0924 01:05:53.129884   61989 logs.go:276] 0 containers: []
	W0924 01:05:53.129892   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:05:53.129898   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:05:53.129955   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:05:53.165857   61989 cri.go:89] found id: ""
	I0924 01:05:53.165890   61989 logs.go:276] 0 containers: []
	W0924 01:05:53.165898   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:05:53.165909   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:05:53.165970   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:05:53.203884   61989 cri.go:89] found id: ""
	I0924 01:05:53.203909   61989 logs.go:276] 0 containers: []
	W0924 01:05:53.203917   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:05:53.203926   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:05:53.203937   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:05:53.258001   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:05:53.258035   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:05:53.271584   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:05:53.271620   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:05:53.341791   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:05:53.341811   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:05:53.341824   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:05:53.424126   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:05:53.424170   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:05:55.962067   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:55.977964   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:05:55.978042   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:05:56.277329   61070 api_server.go:279] https://192.168.50.161:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 01:05:56.277366   61070 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 01:05:56.277385   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:56.302576   61070 api_server.go:279] https://192.168.50.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:05:56.302628   61070 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:05:56.715873   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:56.722458   61070 api_server.go:279] https://192.168.50.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:05:56.722487   61070 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:05:57.216714   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:57.224426   61070 api_server.go:279] https://192.168.50.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:05:57.224474   61070 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:05:57.715976   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:57.725067   61070 api_server.go:279] https://192.168.50.161:8443/healthz returned 200:
	ok
	I0924 01:05:57.734749   61070 api_server.go:141] control plane version: v1.31.1
	I0924 01:05:57.734782   61070 api_server.go:131] duration metric: took 41.019017744s to wait for apiserver health ...
	I0924 01:05:57.734793   61070 cni.go:84] Creating CNI manager for ""
	I0924 01:05:57.734801   61070 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:05:57.736798   61070 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 01:05:57.738285   61070 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 01:05:57.750654   61070 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
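
	The 61070 lines above trace two phases: api_server.go polling https://192.168.50.161:8443/healthz until the anonymous probe stops answering 403/500 (poststarthooks still failing) and returns 200, and then the bridge CNI conflist being written to /etc/cni/net.d. Purely as an illustration of that readiness-poll shape (this is not minikube's api_server.go; the helper name, poll interval, and timeout below are assumptions), a minimal Go sketch could look like:

	// waitForHealthz polls an apiserver /healthz endpoint until it returns
	// HTTP 200 or the timeout elapses. Illustrative sketch only.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		// Anonymous probe against a self-signed apiserver cert, matching the
		// 403 "system:anonymous" responses seen earlier in this log.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				code := resp.StatusCode
				resp.Body.Close()
				if code == http.StatusOK {
					return nil // healthz reported "ok"
				}
				// 403 or 500 means "not ready yet"; keep polling.
			}
			time.Sleep(500 * time.Millisecond) // interval is an arbitrary choice here
		}
		return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.50.161:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
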
	I0924 01:05:57.778587   61070 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 01:05:57.804858   61070 system_pods.go:59] 8 kube-system pods found
	I0924 01:05:57.804907   61070 system_pods.go:61] "coredns-7c65d6cfc9-kshwz" [4393c6ec-abd9-42ce-af67-9e8b768bd49b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0924 01:05:57.804917   61070 system_pods.go:61] "etcd-no-preload-674057" [65cf3acb-8ffa-4f83-8ab9-86ddefc5d829] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0924 01:05:57.804932   61070 system_pods.go:61] "kube-apiserver-no-preload-674057" [7d26a065-faa1-4ba2-96b7-6c9b1ccb5386] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0924 01:05:57.804940   61070 system_pods.go:61] "kube-controller-manager-no-preload-674057" [7c5c6602-1749-4f34-bb63-08161baac6db] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0924 01:05:57.804949   61070 system_pods.go:61] "kube-proxy-fgmwc" [a81419dc-54f5-4bdd-ac2d-f3f7c85b8f50] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0924 01:05:57.804955   61070 system_pods.go:61] "kube-scheduler-no-preload-674057" [d02c8d9a-1897-4506-8029-9608f11520de] Running
	I0924 01:05:57.804965   61070 system_pods.go:61] "metrics-server-6867b74b74-7gbnr" [6ffa0eb7-21d8-4741-9eae-ce7bb9604dec] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:05:57.804975   61070 system_pods.go:61] "storage-provisioner" [a7f99914-8945-4614-afef-d553ea932edf] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0924 01:05:57.804984   61070 system_pods.go:74] duration metric: took 26.369156ms to wait for pod list to return data ...
	I0924 01:05:57.804996   61070 node_conditions.go:102] verifying NodePressure condition ...
	I0924 01:05:57.809068   61070 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 01:05:57.809103   61070 node_conditions.go:123] node cpu capacity is 2
	I0924 01:05:57.809119   61070 node_conditions.go:105] duration metric: took 4.115654ms to run NodePressure ...
	I0924 01:05:57.809137   61070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:05:58.173276   61070 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0924 01:05:58.178398   61070 kubeadm.go:739] kubelet initialised
	I0924 01:05:58.178422   61070 kubeadm.go:740] duration metric: took 5.118555ms waiting for restarted kubelet to initialise ...
	I0924 01:05:58.178429   61070 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:05:58.183646   61070 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-kshwz" in "kube-system" namespace to be "Ready" ...
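
	The pod_ready.go lines here wait on individual kube-system pods for the PodReady condition. As a sketch only (this is not the test harness's pod_ready.go; the kubeconfig path is a placeholder and the helper name is an assumption), the same check against client-go could be written as:

	// podIsReady reports whether a pod's PodReady condition is True.
	// Illustrative only; the harness adds its own retry/timeout logic.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func podIsReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		// Placeholder kubeconfig path; the report's clusters use per-profile kubeconfigs.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ready, err := podIsReady(cs, "kube-system", "coredns-7c65d6cfc9-kshwz")
		fmt.Println(ready, err)
	}
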
	I0924 01:05:56.029030   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:58.029256   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:56.407889   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:58.907744   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:56.014681   61989 cri.go:89] found id: ""
	I0924 01:05:56.014716   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.014728   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:05:56.014736   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:05:56.014799   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:05:56.062547   61989 cri.go:89] found id: ""
	I0924 01:05:56.062576   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.062587   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:05:56.062606   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:05:56.062665   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:05:56.100938   61989 cri.go:89] found id: ""
	I0924 01:05:56.100960   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.100969   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:05:56.100974   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:05:56.101039   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:05:56.137694   61989 cri.go:89] found id: ""
	I0924 01:05:56.137722   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.137737   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:05:56.137744   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:05:56.137803   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:05:56.174876   61989 cri.go:89] found id: ""
	I0924 01:05:56.174911   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.174923   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:05:56.174931   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:05:56.174990   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:05:56.208870   61989 cri.go:89] found id: ""
	I0924 01:05:56.208895   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.208905   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:05:56.208913   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:05:56.208971   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:05:56.242476   61989 cri.go:89] found id: ""
	I0924 01:05:56.242508   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.242520   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:05:56.242528   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:05:56.242590   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:05:56.276185   61989 cri.go:89] found id: ""
	I0924 01:05:56.276214   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.276255   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:05:56.276267   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:05:56.276284   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:05:56.332755   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:05:56.332792   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:05:56.346279   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:05:56.346312   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:05:56.419725   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:05:56.419751   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:05:56.419766   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:05:56.500173   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:05:56.500208   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:05:59.083761   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:59.097184   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:05:59.097247   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:05:59.131734   61989 cri.go:89] found id: ""
	I0924 01:05:59.131764   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.131775   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:05:59.131782   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:05:59.131842   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:05:59.169402   61989 cri.go:89] found id: ""
	I0924 01:05:59.169429   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.169439   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:05:59.169446   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:05:59.169521   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:05:59.208235   61989 cri.go:89] found id: ""
	I0924 01:05:59.208260   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.208290   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:05:59.208298   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:05:59.208372   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:05:59.242314   61989 cri.go:89] found id: ""
	I0924 01:05:59.242345   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.242358   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:05:59.242367   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:05:59.242433   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:05:59.281300   61989 cri.go:89] found id: ""
	I0924 01:05:59.281327   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.281337   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:05:59.281344   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:05:59.281407   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:05:59.315336   61989 cri.go:89] found id: ""
	I0924 01:05:59.315369   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.315377   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:05:59.315386   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:05:59.315445   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:05:59.347678   61989 cri.go:89] found id: ""
	I0924 01:05:59.347708   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.347718   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:05:59.347726   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:05:59.347786   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:05:59.381296   61989 cri.go:89] found id: ""
	I0924 01:05:59.381328   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.381340   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:05:59.381352   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:05:59.381369   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:05:59.462939   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:05:59.462971   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:05:59.462990   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:05:59.544967   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:05:59.545004   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:05:59.585079   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:05:59.585106   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:05:59.637897   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:05:59.637940   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:00.190924   61070 pod_ready.go:103] pod "coredns-7c65d6cfc9-kshwz" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:02.192627   61070 pod_ready.go:93] pod "coredns-7c65d6cfc9-kshwz" in "kube-system" namespace has status "Ready":"True"
	I0924 01:06:02.192648   61070 pod_ready.go:82] duration metric: took 4.008971718s for pod "coredns-7c65d6cfc9-kshwz" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:02.192658   61070 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:02.198586   61070 pod_ready.go:93] pod "etcd-no-preload-674057" in "kube-system" namespace has status "Ready":"True"
	I0924 01:06:02.198614   61070 pod_ready.go:82] duration metric: took 5.949433ms for pod "etcd-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:02.198627   61070 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:03.205306   61070 pod_ready.go:93] pod "kube-apiserver-no-preload-674057" in "kube-system" namespace has status "Ready":"True"
	I0924 01:06:03.205331   61070 pod_ready.go:82] duration metric: took 1.006696778s for pod "kube-apiserver-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:03.205342   61070 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:00.528770   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:02.529473   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:01.406620   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:03.407024   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:02.153289   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:02.170582   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:02.170679   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:02.216700   61989 cri.go:89] found id: ""
	I0924 01:06:02.216722   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.216730   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:02.216736   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:02.216793   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:02.292664   61989 cri.go:89] found id: ""
	I0924 01:06:02.292695   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.292706   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:02.292714   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:02.292780   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:02.349447   61989 cri.go:89] found id: ""
	I0924 01:06:02.349470   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.349481   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:02.349487   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:02.349557   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:02.390491   61989 cri.go:89] found id: ""
	I0924 01:06:02.390514   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.390535   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:02.390543   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:02.390597   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:02.439330   61989 cri.go:89] found id: ""
	I0924 01:06:02.439355   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.439366   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:02.439373   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:02.439432   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:02.476400   61989 cri.go:89] found id: ""
	I0924 01:06:02.476431   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.476439   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:02.476445   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:02.476501   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:02.511946   61989 cri.go:89] found id: ""
	I0924 01:06:02.511975   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.511983   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:02.511989   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:02.512036   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:02.547526   61989 cri.go:89] found id: ""
	I0924 01:06:02.547554   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.547561   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:02.547570   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:02.547580   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:02.619784   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:02.619805   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:02.619816   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:02.698597   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:02.698636   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:02.741381   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:02.741419   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:02.797965   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:02.798023   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:05.312059   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:05.326556   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:05.326614   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:05.360973   61989 cri.go:89] found id: ""
	I0924 01:06:05.360999   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.361011   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:05.361018   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:05.361101   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:05.394720   61989 cri.go:89] found id: ""
	I0924 01:06:05.394750   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.394760   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:05.394767   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:05.394831   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:05.432564   61989 cri.go:89] found id: ""
	I0924 01:06:05.432592   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.432603   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:05.432611   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:05.432673   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:05.465424   61989 cri.go:89] found id: ""
	I0924 01:06:05.465467   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.465478   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:05.465484   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:05.465555   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:05.503656   61989 cri.go:89] found id: ""
	I0924 01:06:05.503684   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.503693   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:05.503699   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:05.503752   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:05.538128   61989 cri.go:89] found id: ""
	I0924 01:06:05.538160   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.538171   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:05.538179   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:05.538248   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:05.571310   61989 cri.go:89] found id: ""
	I0924 01:06:05.571336   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.571346   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:05.571353   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:05.571416   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:05.604038   61989 cri.go:89] found id: ""
	I0924 01:06:05.604062   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.604070   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:05.604079   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:05.604094   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:05.657025   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:05.657068   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:05.671457   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:05.671483   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:05.747671   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:05.747701   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:05.747718   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:05.833248   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:05.833285   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:05.212622   61070 pod_ready.go:103] pod "kube-controller-manager-no-preload-674057" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:07.711612   61070 pod_ready.go:103] pod "kube-controller-manager-no-preload-674057" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:05.028130   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:07.527525   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:05.407057   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:07.407341   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:09.906549   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:08.372029   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:08.386497   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:08.386564   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:08.422998   61989 cri.go:89] found id: ""
	I0924 01:06:08.423029   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.423039   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:08.423047   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:08.423095   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:08.457009   61989 cri.go:89] found id: ""
	I0924 01:06:08.457037   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.457047   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:08.457052   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:08.457104   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:08.489694   61989 cri.go:89] found id: ""
	I0924 01:06:08.489728   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.489740   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:08.489750   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:08.489804   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:08.521819   61989 cri.go:89] found id: ""
	I0924 01:06:08.521845   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.521856   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:08.521864   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:08.521922   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:08.556422   61989 cri.go:89] found id: ""
	I0924 01:06:08.556453   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.556465   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:08.556472   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:08.556567   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:08.593802   61989 cri.go:89] found id: ""
	I0924 01:06:08.593828   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.593836   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:08.593842   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:08.593932   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:08.627569   61989 cri.go:89] found id: ""
	I0924 01:06:08.627592   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.627600   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:08.627605   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:08.627653   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:08.664728   61989 cri.go:89] found id: ""
	I0924 01:06:08.664758   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.664769   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:08.664780   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:08.664794   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:08.703546   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:08.703577   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:08.755612   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:08.755649   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:08.769957   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:08.769989   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:08.842732   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:08.842762   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:08.842789   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:10.211942   61070 pod_ready.go:93] pod "kube-controller-manager-no-preload-674057" in "kube-system" namespace has status "Ready":"True"
	I0924 01:06:10.211973   61070 pod_ready.go:82] duration metric: took 7.006623705s for pod "kube-controller-manager-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:10.211986   61070 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-fgmwc" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:10.217219   61070 pod_ready.go:93] pod "kube-proxy-fgmwc" in "kube-system" namespace has status "Ready":"True"
	I0924 01:06:10.217247   61070 pod_ready.go:82] duration metric: took 5.254551ms for pod "kube-proxy-fgmwc" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:10.217260   61070 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:10.221959   61070 pod_ready.go:93] pod "kube-scheduler-no-preload-674057" in "kube-system" namespace has status "Ready":"True"
	I0924 01:06:10.221983   61070 pod_ready.go:82] duration metric: took 4.71607ms for pod "kube-scheduler-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:10.221996   61070 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:12.227911   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:09.527831   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:11.527917   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:14.028599   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:11.907394   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:14.407242   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:11.427424   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:11.440709   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:11.440773   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:11.475537   61989 cri.go:89] found id: ""
	I0924 01:06:11.475564   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.475572   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:11.475577   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:11.475633   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:11.512231   61989 cri.go:89] found id: ""
	I0924 01:06:11.512276   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.512285   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:11.512292   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:11.512365   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:11.549809   61989 cri.go:89] found id: ""
	I0924 01:06:11.549840   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.549852   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:11.549858   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:11.549924   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:11.587451   61989 cri.go:89] found id: ""
	I0924 01:06:11.587481   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.587493   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:11.587500   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:11.587558   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:11.625109   61989 cri.go:89] found id: ""
	I0924 01:06:11.625135   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.625146   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:11.625154   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:11.625213   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:11.660577   61989 cri.go:89] found id: ""
	I0924 01:06:11.660604   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.660616   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:11.660624   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:11.660683   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:11.703527   61989 cri.go:89] found id: ""
	I0924 01:06:11.703557   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.703569   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:11.703577   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:11.703646   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:11.740766   61989 cri.go:89] found id: ""
	I0924 01:06:11.740798   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.740810   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:11.740820   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:11.740836   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:11.803402   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:11.803448   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:11.819144   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:11.819178   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:11.896152   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:11.896173   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:11.896187   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:11.986284   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:11.986340   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:14.523669   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:14.537923   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:14.537990   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:14.576092   61989 cri.go:89] found id: ""
	I0924 01:06:14.576128   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.576140   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:14.576148   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:14.576213   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:14.611985   61989 cri.go:89] found id: ""
	I0924 01:06:14.612020   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.612032   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:14.612039   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:14.612098   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:14.647640   61989 cri.go:89] found id: ""
	I0924 01:06:14.647667   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.647675   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:14.647682   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:14.647746   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:14.685089   61989 cri.go:89] found id: ""
	I0924 01:06:14.685128   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.685141   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:14.685150   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:14.685217   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:14.718694   61989 cri.go:89] found id: ""
	I0924 01:06:14.718729   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.718738   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:14.718745   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:14.718810   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:14.754874   61989 cri.go:89] found id: ""
	I0924 01:06:14.754916   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.754928   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:14.754936   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:14.754993   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:14.789580   61989 cri.go:89] found id: ""
	I0924 01:06:14.789608   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.789617   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:14.789625   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:14.789677   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:14.823173   61989 cri.go:89] found id: ""
	I0924 01:06:14.823201   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.823213   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:14.823224   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:14.823238   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:14.878398   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:14.878431   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:14.892466   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:14.892502   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:14.965978   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:14.966010   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:14.966065   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:15.050557   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:15.050600   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:14.231644   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:16.728219   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:16.029325   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:18.527156   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:16.907014   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:19.406893   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:17.596915   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:17.609585   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:17.609643   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:17.648275   61989 cri.go:89] found id: ""
	I0924 01:06:17.648305   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.648313   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:17.648319   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:17.648447   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:17.681447   61989 cri.go:89] found id: ""
	I0924 01:06:17.681473   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.681484   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:17.681491   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:17.681552   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:17.719202   61989 cri.go:89] found id: ""
	I0924 01:06:17.719226   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.719234   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:17.719240   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:17.719296   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:17.752601   61989 cri.go:89] found id: ""
	I0924 01:06:17.752629   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.752641   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:17.752649   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:17.752700   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:17.789905   61989 cri.go:89] found id: ""
	I0924 01:06:17.789934   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.789945   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:17.789952   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:17.790015   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:17.824174   61989 cri.go:89] found id: ""
	I0924 01:06:17.824205   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.824217   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:17.824237   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:17.824296   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:17.860647   61989 cri.go:89] found id: ""
	I0924 01:06:17.860674   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.860684   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:17.860691   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:17.860750   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:17.896392   61989 cri.go:89] found id: ""
	I0924 01:06:17.896414   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.896423   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:17.896437   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:17.896450   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:17.949230   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:17.949272   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:17.963125   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:17.963183   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:18.035092   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:18.035117   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:18.035134   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:18.117973   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:18.118011   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:20.657044   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:20.669862   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:20.669936   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:20.704672   61989 cri.go:89] found id: ""
	I0924 01:06:20.704703   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.704714   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:20.704722   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:20.704785   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:20.745777   61989 cri.go:89] found id: ""
	I0924 01:06:20.745801   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.745811   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:20.745818   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:20.745879   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:20.779673   61989 cri.go:89] found id: ""
	I0924 01:06:20.779704   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.779740   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:20.779749   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:20.779809   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:20.815959   61989 cri.go:89] found id: ""
	I0924 01:06:20.815983   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.815992   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:20.815998   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:20.816055   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:20.849203   61989 cri.go:89] found id: ""
	I0924 01:06:20.849232   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.849243   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:20.849251   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:20.849319   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:20.884303   61989 cri.go:89] found id: ""
	I0924 01:06:20.884353   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.884365   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:20.884373   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:20.884436   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:20.921217   61989 cri.go:89] found id: ""
	I0924 01:06:20.921242   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.921249   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:20.921255   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:20.921302   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:20.957555   61989 cri.go:89] found id: ""
	I0924 01:06:20.957590   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.957601   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:20.957613   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:20.957628   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:20.972591   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:20.972630   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 01:06:18.728553   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:20.730046   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:23.228040   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:20.527573   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:22.527695   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:21.406963   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:23.907730   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	W0924 01:06:21.046506   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:21.046532   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:21.046547   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:21.129415   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:21.129453   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:21.168899   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:21.168924   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:23.720925   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:23.736893   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:23.736965   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:23.771874   61989 cri.go:89] found id: ""
	I0924 01:06:23.771901   61989 logs.go:276] 0 containers: []
	W0924 01:06:23.771909   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:23.771915   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:23.771976   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:23.806892   61989 cri.go:89] found id: ""
	I0924 01:06:23.806924   61989 logs.go:276] 0 containers: []
	W0924 01:06:23.806936   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:23.806943   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:23.806999   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:23.843661   61989 cri.go:89] found id: ""
	I0924 01:06:23.843686   61989 logs.go:276] 0 containers: []
	W0924 01:06:23.843694   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:23.843700   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:23.843753   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:23.878979   61989 cri.go:89] found id: ""
	I0924 01:06:23.879007   61989 logs.go:276] 0 containers: []
	W0924 01:06:23.879019   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:23.879027   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:23.879086   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:23.913893   61989 cri.go:89] found id: ""
	I0924 01:06:23.913916   61989 logs.go:276] 0 containers: []
	W0924 01:06:23.913925   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:23.913937   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:23.913982   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:23.947932   61989 cri.go:89] found id: ""
	I0924 01:06:23.947961   61989 logs.go:276] 0 containers: []
	W0924 01:06:23.947972   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:23.947980   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:23.948045   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:23.981366   61989 cri.go:89] found id: ""
	I0924 01:06:23.981391   61989 logs.go:276] 0 containers: []
	W0924 01:06:23.981402   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:23.981409   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:23.981467   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:24.014428   61989 cri.go:89] found id: ""
	I0924 01:06:24.014455   61989 logs.go:276] 0 containers: []
	W0924 01:06:24.014463   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:24.014471   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:24.014485   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:24.029585   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:24.029621   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:24.095926   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:24.095955   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:24.095975   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:24.174594   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:24.174635   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:24.213286   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:24.213311   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:25.229785   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:27.729021   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:25.027783   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:27.030450   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:26.406776   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:28.907135   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:26.764740   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:26.777184   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:26.777279   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:26.812704   61989 cri.go:89] found id: ""
	I0924 01:06:26.812735   61989 logs.go:276] 0 containers: []
	W0924 01:06:26.812746   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:26.812753   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:26.812811   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:26.849867   61989 cri.go:89] found id: ""
	I0924 01:06:26.849895   61989 logs.go:276] 0 containers: []
	W0924 01:06:26.849904   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:26.849909   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:26.849958   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:26.882856   61989 cri.go:89] found id: ""
	I0924 01:06:26.882878   61989 logs.go:276] 0 containers: []
	W0924 01:06:26.882885   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:26.882891   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:26.882936   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:26.921063   61989 cri.go:89] found id: ""
	I0924 01:06:26.921085   61989 logs.go:276] 0 containers: []
	W0924 01:06:26.921094   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:26.921100   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:26.921156   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:26.961154   61989 cri.go:89] found id: ""
	I0924 01:06:26.961182   61989 logs.go:276] 0 containers: []
	W0924 01:06:26.961194   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:26.961200   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:26.961257   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:26.994560   61989 cri.go:89] found id: ""
	I0924 01:06:26.994593   61989 logs.go:276] 0 containers: []
	W0924 01:06:26.994603   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:26.994612   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:26.994673   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:27.027967   61989 cri.go:89] found id: ""
	I0924 01:06:27.028013   61989 logs.go:276] 0 containers: []
	W0924 01:06:27.028026   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:27.028033   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:27.028096   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:27.063099   61989 cri.go:89] found id: ""
	I0924 01:06:27.063130   61989 logs.go:276] 0 containers: []
	W0924 01:06:27.063142   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:27.063153   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:27.063166   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:27.116237   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:27.116279   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:27.130785   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:27.130815   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:27.201931   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:27.201954   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:27.201970   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:27.282182   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:27.282217   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:29.825403   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:29.838890   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:29.838989   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:29.873651   61989 cri.go:89] found id: ""
	I0924 01:06:29.873678   61989 logs.go:276] 0 containers: []
	W0924 01:06:29.873690   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:29.873698   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:29.873758   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:29.909894   61989 cri.go:89] found id: ""
	I0924 01:06:29.909916   61989 logs.go:276] 0 containers: []
	W0924 01:06:29.909923   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:29.909929   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:29.909978   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:29.944850   61989 cri.go:89] found id: ""
	I0924 01:06:29.944878   61989 logs.go:276] 0 containers: []
	W0924 01:06:29.944886   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:29.944892   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:29.944945   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:29.981486   61989 cri.go:89] found id: ""
	I0924 01:06:29.981515   61989 logs.go:276] 0 containers: []
	W0924 01:06:29.981524   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:29.981532   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:29.981592   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:30.015138   61989 cri.go:89] found id: ""
	I0924 01:06:30.015165   61989 logs.go:276] 0 containers: []
	W0924 01:06:30.015176   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:30.015184   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:30.015256   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:30.051777   61989 cri.go:89] found id: ""
	I0924 01:06:30.051814   61989 logs.go:276] 0 containers: []
	W0924 01:06:30.051825   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:30.051834   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:30.051898   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:30.085573   61989 cri.go:89] found id: ""
	I0924 01:06:30.085598   61989 logs.go:276] 0 containers: []
	W0924 01:06:30.085607   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:30.085612   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:30.085661   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:30.122518   61989 cri.go:89] found id: ""
	I0924 01:06:30.122551   61989 logs.go:276] 0 containers: []
	W0924 01:06:30.122561   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:30.122570   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:30.122585   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:30.199075   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:30.199118   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:30.238259   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:30.238293   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:30.292145   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:30.292185   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:30.306404   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:30.306431   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:30.373959   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:29.729379   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:32.228691   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:29.527089   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:31.527523   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:34.027357   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:30.907575   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:33.407615   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:32.875041   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:32.888358   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:32.888435   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:32.924466   61989 cri.go:89] found id: ""
	I0924 01:06:32.924499   61989 logs.go:276] 0 containers: []
	W0924 01:06:32.924519   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:32.924528   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:32.924584   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:32.960188   61989 cri.go:89] found id: ""
	I0924 01:06:32.960216   61989 logs.go:276] 0 containers: []
	W0924 01:06:32.960224   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:32.960231   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:32.960282   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:32.997612   61989 cri.go:89] found id: ""
	I0924 01:06:32.997641   61989 logs.go:276] 0 containers: []
	W0924 01:06:32.997649   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:32.997655   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:32.997704   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:33.034282   61989 cri.go:89] found id: ""
	I0924 01:06:33.034310   61989 logs.go:276] 0 containers: []
	W0924 01:06:33.034317   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:33.034325   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:33.034381   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:33.073832   61989 cri.go:89] found id: ""
	I0924 01:06:33.073861   61989 logs.go:276] 0 containers: []
	W0924 01:06:33.073870   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:33.073875   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:33.073959   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:33.107276   61989 cri.go:89] found id: ""
	I0924 01:06:33.107303   61989 logs.go:276] 0 containers: []
	W0924 01:06:33.107314   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:33.107323   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:33.107373   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:33.141062   61989 cri.go:89] found id: ""
	I0924 01:06:33.141091   61989 logs.go:276] 0 containers: []
	W0924 01:06:33.141104   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:33.141112   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:33.141174   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:33.177874   61989 cri.go:89] found id: ""
	I0924 01:06:33.177899   61989 logs.go:276] 0 containers: []
	W0924 01:06:33.177908   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:33.177916   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:33.177927   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:33.228324   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:33.228373   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:33.241324   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:33.241350   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:33.313115   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:33.313139   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:33.313151   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:33.392458   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:33.392512   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:35.932822   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:35.945918   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:35.945987   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:34.727948   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:36.728560   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:36.028536   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:38.527308   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:35.906501   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:37.907165   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:35.984400   61989 cri.go:89] found id: ""
	I0924 01:06:35.984438   61989 logs.go:276] 0 containers: []
	W0924 01:06:35.984448   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:35.984456   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:35.984528   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:36.022208   61989 cri.go:89] found id: ""
	I0924 01:06:36.022235   61989 logs.go:276] 0 containers: []
	W0924 01:06:36.022244   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:36.022252   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:36.022336   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:36.059153   61989 cri.go:89] found id: ""
	I0924 01:06:36.059176   61989 logs.go:276] 0 containers: []
	W0924 01:06:36.059184   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:36.059190   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:36.059247   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:36.094375   61989 cri.go:89] found id: ""
	I0924 01:06:36.094413   61989 logs.go:276] 0 containers: []
	W0924 01:06:36.094425   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:36.094434   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:36.094490   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:36.128662   61989 cri.go:89] found id: ""
	I0924 01:06:36.128691   61989 logs.go:276] 0 containers: []
	W0924 01:06:36.128702   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:36.128710   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:36.128762   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:36.160898   61989 cri.go:89] found id: ""
	I0924 01:06:36.160925   61989 logs.go:276] 0 containers: []
	W0924 01:06:36.160937   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:36.160945   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:36.161010   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:36.194421   61989 cri.go:89] found id: ""
	I0924 01:06:36.194448   61989 logs.go:276] 0 containers: []
	W0924 01:06:36.194460   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:36.194468   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:36.194537   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:36.230448   61989 cri.go:89] found id: ""
	I0924 01:06:36.230477   61989 logs.go:276] 0 containers: []
	W0924 01:06:36.230487   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:36.230498   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:36.230511   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:36.303029   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:36.303053   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:36.303067   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:36.406305   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:36.406338   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:36.444044   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:36.444084   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:36.494829   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:36.494873   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:39.009579   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:39.023867   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:39.023943   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:39.057426   61989 cri.go:89] found id: ""
	I0924 01:06:39.057458   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.057469   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:39.057477   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:39.057539   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:39.091421   61989 cri.go:89] found id: ""
	I0924 01:06:39.091444   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.091453   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:39.091459   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:39.091518   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:39.125407   61989 cri.go:89] found id: ""
	I0924 01:06:39.125437   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.125448   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:39.125455   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:39.125525   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:39.157146   61989 cri.go:89] found id: ""
	I0924 01:06:39.157170   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.157181   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:39.157189   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:39.157248   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:39.189474   61989 cri.go:89] found id: ""
	I0924 01:06:39.189501   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.189511   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:39.189518   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:39.189577   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:39.228034   61989 cri.go:89] found id: ""
	I0924 01:06:39.228063   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.228084   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:39.228099   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:39.228158   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:39.268289   61989 cri.go:89] found id: ""
	I0924 01:06:39.268317   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.268345   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:39.268354   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:39.268431   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:39.304964   61989 cri.go:89] found id: ""
	I0924 01:06:39.304988   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.304996   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:39.305005   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:39.305017   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:39.356193   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:39.356234   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:39.370782   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:39.370807   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:39.442395   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:39.442418   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:39.442429   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:39.518426   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:39.518466   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:38.729606   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:41.228528   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:40.528236   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:43.028285   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:40.407021   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:42.906884   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:44.907822   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:42.059895   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:42.092776   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:42.092837   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:42.128508   61989 cri.go:89] found id: ""
	I0924 01:06:42.128534   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.128555   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:42.128565   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:42.128623   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:42.160961   61989 cri.go:89] found id: ""
	I0924 01:06:42.160989   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.161000   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:42.161008   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:42.161072   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:42.194212   61989 cri.go:89] found id: ""
	I0924 01:06:42.194260   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.194272   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:42.194280   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:42.194342   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:42.229284   61989 cri.go:89] found id: ""
	I0924 01:06:42.229312   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.229323   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:42.229331   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:42.229378   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:42.261952   61989 cri.go:89] found id: ""
	I0924 01:06:42.261986   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.261997   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:42.262010   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:42.262059   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:42.297096   61989 cri.go:89] found id: ""
	I0924 01:06:42.297125   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.297133   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:42.297139   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:42.297185   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:42.333066   61989 cri.go:89] found id: ""
	I0924 01:06:42.333095   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.333106   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:42.333114   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:42.333176   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:42.366798   61989 cri.go:89] found id: ""
	I0924 01:06:42.366829   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.366840   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:42.366852   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:42.366865   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:42.419424   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:42.419466   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:42.433814   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:42.433846   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:42.503817   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:42.503845   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:42.503860   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:42.583249   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:42.583289   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:45.123746   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:45.136292   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:45.136377   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:45.174390   61989 cri.go:89] found id: ""
	I0924 01:06:45.174420   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.174441   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:45.174449   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:45.174539   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:45.212394   61989 cri.go:89] found id: ""
	I0924 01:06:45.212422   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.212433   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:45.212441   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:45.212503   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:45.245831   61989 cri.go:89] found id: ""
	I0924 01:06:45.245853   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.245861   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:45.245867   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:45.245922   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:45.277587   61989 cri.go:89] found id: ""
	I0924 01:06:45.277615   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.277626   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:45.277634   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:45.277692   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:45.309715   61989 cri.go:89] found id: ""
	I0924 01:06:45.309749   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.309760   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:45.309768   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:45.309827   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:45.342799   61989 cri.go:89] found id: ""
	I0924 01:06:45.342831   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.342844   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:45.342853   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:45.342921   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:45.375377   61989 cri.go:89] found id: ""
	I0924 01:06:45.375404   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.375415   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:45.375423   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:45.375484   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:45.415395   61989 cri.go:89] found id: ""
	I0924 01:06:45.415422   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.415432   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:45.415444   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:45.415459   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:45.464381   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:45.464416   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:45.478142   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:45.478168   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:45.551211   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:45.551234   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:45.551244   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:45.635255   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:45.635297   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:43.728645   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:46.227611   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:48.228320   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:45.028650   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:47.528968   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:47.406822   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:49.407790   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:48.173687   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:48.186635   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:48.186710   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:48.219544   61989 cri.go:89] found id: ""
	I0924 01:06:48.219566   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.219574   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:48.219583   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:48.219654   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:48.253594   61989 cri.go:89] found id: ""
	I0924 01:06:48.253618   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.253627   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:48.253634   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:48.253693   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:48.287991   61989 cri.go:89] found id: ""
	I0924 01:06:48.288019   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.288031   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:48.288041   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:48.288100   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:48.320738   61989 cri.go:89] found id: ""
	I0924 01:06:48.320767   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.320779   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:48.320787   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:48.320847   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:48.352197   61989 cri.go:89] found id: ""
	I0924 01:06:48.352225   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.352233   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:48.352243   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:48.352317   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:48.386157   61989 cri.go:89] found id: ""
	I0924 01:06:48.386187   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.386195   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:48.386202   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:48.386250   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:48.422372   61989 cri.go:89] found id: ""
	I0924 01:06:48.422398   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.422407   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:48.422413   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:48.422463   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:48.464007   61989 cri.go:89] found id: ""
	I0924 01:06:48.464032   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.464043   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:48.464054   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:48.464072   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:48.520533   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:48.520570   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:48.594453   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:48.594489   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:48.607309   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:48.607336   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:48.674078   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:48.674102   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:48.674117   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:50.740093   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:53.228567   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:50.028640   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:52.527656   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:51.906378   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:53.906887   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:51.256855   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:51.270305   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:51.270378   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:51.303450   61989 cri.go:89] found id: ""
	I0924 01:06:51.303487   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.303499   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:51.303508   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:51.303564   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:51.336959   61989 cri.go:89] found id: ""
	I0924 01:06:51.336987   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.337003   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:51.337010   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:51.337072   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:51.369210   61989 cri.go:89] found id: ""
	I0924 01:06:51.369239   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.369249   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:51.369260   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:51.369339   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:51.403595   61989 cri.go:89] found id: ""
	I0924 01:06:51.403645   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.403658   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:51.403666   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:51.403723   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:51.445459   61989 cri.go:89] found id: ""
	I0924 01:06:51.445493   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.445503   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:51.445510   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:51.445574   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:51.477615   61989 cri.go:89] found id: ""
	I0924 01:06:51.477642   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.477653   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:51.477660   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:51.477722   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:51.509737   61989 cri.go:89] found id: ""
	I0924 01:06:51.509766   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.509784   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:51.509792   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:51.509856   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:51.546451   61989 cri.go:89] found id: ""
	I0924 01:06:51.546479   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.546489   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:51.546501   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:51.546515   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:51.600277   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:51.600315   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:51.613403   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:51.613434   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:51.691645   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:51.691669   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:51.691688   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:51.772276   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:51.772312   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:54.313491   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:54.328265   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:54.328374   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:54.368091   61989 cri.go:89] found id: ""
	I0924 01:06:54.368117   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.368126   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:54.368131   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:54.368183   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:54.408272   61989 cri.go:89] found id: ""
	I0924 01:06:54.408300   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.408310   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:54.408318   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:54.408409   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:54.460467   61989 cri.go:89] found id: ""
	I0924 01:06:54.460489   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.460499   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:54.460506   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:54.460564   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:54.493310   61989 cri.go:89] found id: ""
	I0924 01:06:54.493334   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.493343   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:54.493349   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:54.493401   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:54.526772   61989 cri.go:89] found id: ""
	I0924 01:06:54.526799   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.526809   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:54.526817   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:54.526880   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:54.562235   61989 cri.go:89] found id: ""
	I0924 01:06:54.562264   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.562274   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:54.562283   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:54.562345   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:54.597755   61989 cri.go:89] found id: ""
	I0924 01:06:54.597784   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.597794   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:54.597803   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:54.597851   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:54.632225   61989 cri.go:89] found id: ""
	I0924 01:06:54.632282   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.632295   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:54.632305   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:54.632321   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:54.683849   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:54.683887   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:54.697395   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:54.697425   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:54.767577   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:54.767598   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:54.767609   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:54.842619   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:54.842655   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:55.728756   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:58.228520   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:54.528783   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:57.028039   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:59.028234   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:55.907673   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:57.907858   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:57.381394   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:57.394078   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:57.394147   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:57.431241   61989 cri.go:89] found id: ""
	I0924 01:06:57.431266   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.431278   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:57.431284   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:57.431352   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:57.468954   61989 cri.go:89] found id: ""
	I0924 01:06:57.468983   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.468994   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:57.469001   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:57.469060   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:57.503518   61989 cri.go:89] found id: ""
	I0924 01:06:57.503550   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.503562   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:57.503570   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:57.503618   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:57.540432   61989 cri.go:89] found id: ""
	I0924 01:06:57.540464   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.540475   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:57.540483   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:57.540548   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:57.574142   61989 cri.go:89] found id: ""
	I0924 01:06:57.574175   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.574187   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:57.574195   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:57.574264   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:57.608505   61989 cri.go:89] found id: ""
	I0924 01:06:57.608528   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.608537   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:57.608543   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:57.608589   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:57.644273   61989 cri.go:89] found id: ""
	I0924 01:06:57.644305   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.644317   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:57.644344   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:57.644409   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:57.682023   61989 cri.go:89] found id: ""
	I0924 01:06:57.682050   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.682060   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:57.682072   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:57.682086   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:57.732537   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:57.732570   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:57.746632   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:57.746663   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:57.813904   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:57.813927   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:57.813947   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:57.891947   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:57.891992   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:00.432035   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:00.444886   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:00.444966   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:00.482653   61989 cri.go:89] found id: ""
	I0924 01:07:00.482683   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.482694   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:00.482702   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:00.482754   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:00.516404   61989 cri.go:89] found id: ""
	I0924 01:07:00.516441   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.516452   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:00.516463   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:00.516527   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:00.552938   61989 cri.go:89] found id: ""
	I0924 01:07:00.552963   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.552971   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:00.552977   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:00.553043   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:00.589143   61989 cri.go:89] found id: ""
	I0924 01:07:00.589170   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.589178   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:00.589184   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:00.589235   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:00.625023   61989 cri.go:89] found id: ""
	I0924 01:07:00.625047   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.625059   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:00.625066   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:00.625127   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:00.662904   61989 cri.go:89] found id: ""
	I0924 01:07:00.662936   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.662948   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:00.662959   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:00.663022   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:00.702892   61989 cri.go:89] found id: ""
	I0924 01:07:00.702921   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.702932   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:00.702938   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:00.702988   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:00.737010   61989 cri.go:89] found id: ""
	I0924 01:07:00.737039   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.737050   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:00.737061   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:00.737075   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:00.788093   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:00.788132   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:00.801354   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:00.801382   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:00.866830   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:00.866862   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:00.866878   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:00.950034   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:00.950076   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:00.728279   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:03.227980   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:01.527849   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:04.027729   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:00.406445   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:02.407048   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:04.907569   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:03.492773   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:03.506158   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:03.506224   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:03.542369   61989 cri.go:89] found id: ""
	I0924 01:07:03.542397   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.542408   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:03.542416   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:03.542473   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:03.575019   61989 cri.go:89] found id: ""
	I0924 01:07:03.575046   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.575055   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:03.575060   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:03.575103   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:03.608576   61989 cri.go:89] found id: ""
	I0924 01:07:03.608603   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.608612   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:03.608619   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:03.608684   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:03.642359   61989 cri.go:89] found id: ""
	I0924 01:07:03.642389   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.642400   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:03.642407   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:03.642463   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:03.678192   61989 cri.go:89] found id: ""
	I0924 01:07:03.678216   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.678223   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:03.678229   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:03.678285   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:03.711773   61989 cri.go:89] found id: ""
	I0924 01:07:03.711795   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.711803   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:03.711809   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:03.711856   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:03.747792   61989 cri.go:89] found id: ""
	I0924 01:07:03.747819   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.747830   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:03.747838   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:03.747901   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:03.783284   61989 cri.go:89] found id: ""
	I0924 01:07:03.783312   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.783320   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:03.783331   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:03.783349   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:03.838704   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:03.838745   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:03.852650   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:03.852675   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:03.922474   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:03.922499   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:03.922511   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:03.997349   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:03.997388   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:05.228357   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:07.228789   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:06.028604   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:08.527156   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:06.908041   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:09.406803   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:06.537182   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:06.549745   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:06.549833   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:06.587879   61989 cri.go:89] found id: ""
	I0924 01:07:06.587910   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.587922   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:06.587930   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:06.587984   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:06.623419   61989 cri.go:89] found id: ""
	I0924 01:07:06.623447   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.623456   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:06.623462   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:06.623542   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:06.659228   61989 cri.go:89] found id: ""
	I0924 01:07:06.659260   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.659272   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:06.659280   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:06.659341   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:06.693300   61989 cri.go:89] found id: ""
	I0924 01:07:06.693330   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.693341   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:06.693349   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:06.693399   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:06.726237   61989 cri.go:89] found id: ""
	I0924 01:07:06.726267   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.726278   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:06.726286   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:06.726342   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:06.760627   61989 cri.go:89] found id: ""
	I0924 01:07:06.760659   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.760670   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:06.760677   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:06.760745   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:06.796029   61989 cri.go:89] found id: ""
	I0924 01:07:06.796062   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.796073   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:06.796081   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:06.796136   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:06.830197   61989 cri.go:89] found id: ""
	I0924 01:07:06.830230   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.830241   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:06.830251   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:06.830265   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:06.869055   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:06.869087   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:06.923840   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:06.923888   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:06.937510   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:06.937549   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:07.011461   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:07.011482   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:07.011496   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:09.591186   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:09.603900   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:09.603970   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:09.639003   61989 cri.go:89] found id: ""
	I0924 01:07:09.639035   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.639046   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:09.639055   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:09.639111   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:09.676494   61989 cri.go:89] found id: ""
	I0924 01:07:09.676528   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.676539   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:09.676547   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:09.676616   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:09.713080   61989 cri.go:89] found id: ""
	I0924 01:07:09.713103   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.713111   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:09.713117   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:09.713174   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:09.748425   61989 cri.go:89] found id: ""
	I0924 01:07:09.748449   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.748458   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:09.748465   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:09.748521   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:09.782526   61989 cri.go:89] found id: ""
	I0924 01:07:09.782559   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.782576   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:09.782584   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:09.782647   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:09.819137   61989 cri.go:89] found id: ""
	I0924 01:07:09.819159   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.819167   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:09.819173   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:09.819256   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:09.852953   61989 cri.go:89] found id: ""
	I0924 01:07:09.852976   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.852984   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:09.852989   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:09.853083   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:09.887254   61989 cri.go:89] found id: ""
	I0924 01:07:09.887282   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.887293   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:09.887304   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:09.887318   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:09.940029   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:09.940069   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:09.954298   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:09.954331   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:10.028926   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:10.028947   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:10.028957   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:10.116722   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:10.116761   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:09.728996   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:12.228342   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:10.527637   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:12.528324   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:11.410452   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:13.906451   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:12.654245   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:12.668635   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:12.668695   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:12.711575   61989 cri.go:89] found id: ""
	I0924 01:07:12.711601   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.711626   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:12.711632   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:12.711682   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:12.746104   61989 cri.go:89] found id: ""
	I0924 01:07:12.746131   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.746141   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:12.746149   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:12.746210   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:12.780229   61989 cri.go:89] found id: ""
	I0924 01:07:12.780260   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.780295   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:12.780303   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:12.780384   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:12.812968   61989 cri.go:89] found id: ""
	I0924 01:07:12.812998   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.813010   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:12.813024   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:12.813090   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:12.844212   61989 cri.go:89] found id: ""
	I0924 01:07:12.844241   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.844253   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:12.844260   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:12.844343   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:12.878662   61989 cri.go:89] found id: ""
	I0924 01:07:12.878690   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.878700   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:12.878707   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:12.878765   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:12.912782   61989 cri.go:89] found id: ""
	I0924 01:07:12.912805   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.912815   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:12.912822   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:12.912883   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:12.945694   61989 cri.go:89] found id: ""
	I0924 01:07:12.945726   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.945736   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:12.945747   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:12.945761   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:12.994841   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:12.994877   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:13.009582   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:13.009624   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:13.081972   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:13.081999   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:13.082017   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:13.162383   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:13.162420   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:15.704586   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:15.717608   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:15.717677   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:15.751794   61989 cri.go:89] found id: ""
	I0924 01:07:15.751829   61989 logs.go:276] 0 containers: []
	W0924 01:07:15.751840   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:15.751848   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:15.751916   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:15.791691   61989 cri.go:89] found id: ""
	I0924 01:07:15.791723   61989 logs.go:276] 0 containers: []
	W0924 01:07:15.791734   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:15.791742   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:15.791805   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:15.827934   61989 cri.go:89] found id: ""
	I0924 01:07:15.827957   61989 logs.go:276] 0 containers: []
	W0924 01:07:15.827965   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:15.827971   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:15.828017   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:15.862489   61989 cri.go:89] found id: ""
	I0924 01:07:15.862518   61989 logs.go:276] 0 containers: []
	W0924 01:07:15.862527   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:15.862532   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:15.862577   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:15.896754   61989 cri.go:89] found id: ""
	I0924 01:07:15.896786   61989 logs.go:276] 0 containers: []
	W0924 01:07:15.896798   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:15.896804   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:15.896857   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:15.934353   61989 cri.go:89] found id: ""
	I0924 01:07:15.934378   61989 logs.go:276] 0 containers: []
	W0924 01:07:15.934386   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:15.934392   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:15.934436   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:15.969204   61989 cri.go:89] found id: ""
	I0924 01:07:15.969237   61989 logs.go:276] 0 containers: []
	W0924 01:07:15.969246   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:15.969251   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:15.969309   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:14.228949   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:16.728382   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:15.027681   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:17.027847   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:15.907872   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:18.407563   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:16.008733   61989 cri.go:89] found id: ""
	I0924 01:07:16.008767   61989 logs.go:276] 0 containers: []
	W0924 01:07:16.008780   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:16.008792   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:16.008807   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:16.046993   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:16.047024   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:16.098768   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:16.098801   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:16.114429   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:16.114472   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:16.187450   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:16.187469   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:16.187489   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
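
	The cycle above is minikube's control-plane probe: logs.go asks the CRI runtime for each expected container by name and, finding none, falls back to collecting raw logs. A minimal shell sketch of the same probe, assuming a shell inside the affected node (for example via `minikube ssh`); the pgrep and crictl invocations are copied from the log, the loop and echo are only illustrative:

	    # Is any kube-apiserver process running at all?
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	    # Ask CRI-O for each expected container by name; empty output is what
	    # logs.go reports above as "0 containers" / "No container was found".
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      echo "== ${name} =="
	      sudo crictl ps -a --quiet --name="${name}"
	    done
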
	I0924 01:07:18.767042   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:18.779825   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:18.779899   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:18.815410   61989 cri.go:89] found id: ""
	I0924 01:07:18.815436   61989 logs.go:276] 0 containers: []
	W0924 01:07:18.815447   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:18.815454   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:18.815523   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:18.849837   61989 cri.go:89] found id: ""
	I0924 01:07:18.849862   61989 logs.go:276] 0 containers: []
	W0924 01:07:18.849872   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:18.849880   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:18.849952   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:18.885183   61989 cri.go:89] found id: ""
	I0924 01:07:18.885215   61989 logs.go:276] 0 containers: []
	W0924 01:07:18.885227   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:18.885235   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:18.885314   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:18.922263   61989 cri.go:89] found id: ""
	I0924 01:07:18.922293   61989 logs.go:276] 0 containers: []
	W0924 01:07:18.922305   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:18.922312   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:18.922378   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:18.957235   61989 cri.go:89] found id: ""
	I0924 01:07:18.957263   61989 logs.go:276] 0 containers: []
	W0924 01:07:18.957272   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:18.957278   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:18.957331   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:18.989846   61989 cri.go:89] found id: ""
	I0924 01:07:18.989870   61989 logs.go:276] 0 containers: []
	W0924 01:07:18.989878   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:18.989884   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:18.989931   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:19.027264   61989 cri.go:89] found id: ""
	I0924 01:07:19.027298   61989 logs.go:276] 0 containers: []
	W0924 01:07:19.027308   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:19.027315   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:19.027373   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:19.065902   61989 cri.go:89] found id: ""
	I0924 01:07:19.065925   61989 logs.go:276] 0 containers: []
	W0924 01:07:19.065934   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:19.065944   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:19.065959   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:19.115515   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:19.115550   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:19.129761   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:19.129787   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:19.200299   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:19.200319   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:19.200351   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:19.282308   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:19.282360   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:18.732314   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:21.227773   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:23.228957   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:19.528117   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:22.028965   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:20.906860   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:23.407404   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:21.819442   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:21.834106   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:21.834165   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:21.866953   61989 cri.go:89] found id: ""
	I0924 01:07:21.866988   61989 logs.go:276] 0 containers: []
	W0924 01:07:21.866999   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:21.867008   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:21.867085   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:21.902561   61989 cri.go:89] found id: ""
	I0924 01:07:21.902637   61989 logs.go:276] 0 containers: []
	W0924 01:07:21.902654   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:21.902663   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:21.902729   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:21.936883   61989 cri.go:89] found id: ""
	I0924 01:07:21.936926   61989 logs.go:276] 0 containers: []
	W0924 01:07:21.936937   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:21.936943   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:21.936995   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:21.975375   61989 cri.go:89] found id: ""
	I0924 01:07:21.975402   61989 logs.go:276] 0 containers: []
	W0924 01:07:21.975411   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:21.975417   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:21.975465   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:22.012782   61989 cri.go:89] found id: ""
	I0924 01:07:22.012811   61989 logs.go:276] 0 containers: []
	W0924 01:07:22.012822   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:22.012830   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:22.012890   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:22.049344   61989 cri.go:89] found id: ""
	I0924 01:07:22.049370   61989 logs.go:276] 0 containers: []
	W0924 01:07:22.049379   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:22.049385   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:22.049442   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:22.088187   61989 cri.go:89] found id: ""
	I0924 01:07:22.088219   61989 logs.go:276] 0 containers: []
	W0924 01:07:22.088230   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:22.088239   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:22.088324   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:22.123357   61989 cri.go:89] found id: ""
	I0924 01:07:22.123386   61989 logs.go:276] 0 containers: []
	W0924 01:07:22.123397   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:22.123408   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:22.123423   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:22.176794   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:22.176828   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:22.192550   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:22.192591   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:22.263854   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:22.263881   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:22.263898   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:22.341735   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:22.341778   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:24.879834   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:24.892429   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:24.892504   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:24.926600   61989 cri.go:89] found id: ""
	I0924 01:07:24.926629   61989 logs.go:276] 0 containers: []
	W0924 01:07:24.926636   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:24.926642   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:24.926689   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:24.960370   61989 cri.go:89] found id: ""
	I0924 01:07:24.960399   61989 logs.go:276] 0 containers: []
	W0924 01:07:24.960408   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:24.960415   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:24.960471   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:24.993503   61989 cri.go:89] found id: ""
	I0924 01:07:24.993532   61989 logs.go:276] 0 containers: []
	W0924 01:07:24.993542   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:24.993549   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:24.993611   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:25.028027   61989 cri.go:89] found id: ""
	I0924 01:07:25.028055   61989 logs.go:276] 0 containers: []
	W0924 01:07:25.028065   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:25.028073   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:25.028129   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:25.062947   61989 cri.go:89] found id: ""
	I0924 01:07:25.062981   61989 logs.go:276] 0 containers: []
	W0924 01:07:25.062999   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:25.063009   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:25.063077   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:25.098895   61989 cri.go:89] found id: ""
	I0924 01:07:25.098927   61989 logs.go:276] 0 containers: []
	W0924 01:07:25.098939   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:25.098946   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:25.098996   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:25.132786   61989 cri.go:89] found id: ""
	I0924 01:07:25.132814   61989 logs.go:276] 0 containers: []
	W0924 01:07:25.132824   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:25.132830   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:25.132882   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:25.167603   61989 cri.go:89] found id: ""
	I0924 01:07:25.167634   61989 logs.go:276] 0 containers: []
	W0924 01:07:25.167644   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:25.167656   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:25.167671   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:25.220265   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:25.220303   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:25.234840   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:25.234884   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:25.307459   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:25.307485   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:25.307499   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:25.386496   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:25.386537   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:25.229188   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:27.728978   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:24.531829   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:27.027182   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:29.029000   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:25.907018   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:28.406555   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:27.926064   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:27.939398   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:27.939480   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:27.976184   61989 cri.go:89] found id: ""
	I0924 01:07:27.976215   61989 logs.go:276] 0 containers: []
	W0924 01:07:27.976256   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:27.976265   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:27.976348   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:28.009389   61989 cri.go:89] found id: ""
	I0924 01:07:28.009419   61989 logs.go:276] 0 containers: []
	W0924 01:07:28.009431   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:28.009438   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:28.009501   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:28.045562   61989 cri.go:89] found id: ""
	I0924 01:07:28.045594   61989 logs.go:276] 0 containers: []
	W0924 01:07:28.045605   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:28.045613   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:28.045677   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:28.085318   61989 cri.go:89] found id: ""
	I0924 01:07:28.085345   61989 logs.go:276] 0 containers: []
	W0924 01:07:28.085357   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:28.085364   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:28.085419   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:28.119582   61989 cri.go:89] found id: ""
	I0924 01:07:28.119607   61989 logs.go:276] 0 containers: []
	W0924 01:07:28.119617   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:28.119626   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:28.119690   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:28.151445   61989 cri.go:89] found id: ""
	I0924 01:07:28.151493   61989 logs.go:276] 0 containers: []
	W0924 01:07:28.151505   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:28.151513   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:28.151578   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:28.185966   61989 cri.go:89] found id: ""
	I0924 01:07:28.185997   61989 logs.go:276] 0 containers: []
	W0924 01:07:28.186009   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:28.186016   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:28.186078   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:28.219012   61989 cri.go:89] found id: ""
	I0924 01:07:28.219037   61989 logs.go:276] 0 containers: []
	W0924 01:07:28.219044   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:28.219052   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:28.219089   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:28.272186   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:28.272222   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:28.286346   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:28.286383   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:28.370949   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:28.370975   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:28.370985   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:28.453740   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:28.453775   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:30.229141   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:32.728919   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:31.527080   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:34.028315   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:30.407040   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:32.407075   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:34.407711   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
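
	The interleaved pod_ready lines come from three other test processes, each polling a metrics-server pod in kube-system that never reports Ready. The same condition can be checked by hand; a sketch using one pod name taken from the log, assuming kubectl is pointed at the matching profile's context:

	    kubectl -n kube-system get pod metrics-server-6867b74b74-pc28v \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'; echo
	    # or block until the condition flips (what pod_ready.go keeps polling for):
	    kubectl -n kube-system wait --for=condition=Ready \
	      pod/metrics-server-6867b74b74-pc28v --timeout=2m
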
	I0924 01:07:30.993536   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:31.006297   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:31.006369   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:31.042081   61989 cri.go:89] found id: ""
	I0924 01:07:31.042114   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.042123   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:31.042129   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:31.042185   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:31.077119   61989 cri.go:89] found id: ""
	I0924 01:07:31.077144   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.077153   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:31.077159   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:31.077208   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:31.110148   61989 cri.go:89] found id: ""
	I0924 01:07:31.110179   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.110187   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:31.110193   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:31.110246   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:31.143551   61989 cri.go:89] found id: ""
	I0924 01:07:31.143578   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.143585   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:31.143591   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:31.143638   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:31.177212   61989 cri.go:89] found id: ""
	I0924 01:07:31.177262   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.177272   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:31.177279   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:31.177329   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:31.209290   61989 cri.go:89] found id: ""
	I0924 01:07:31.209321   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.209332   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:31.209340   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:31.209398   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:31.247299   61989 cri.go:89] found id: ""
	I0924 01:07:31.247334   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.247346   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:31.247355   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:31.247419   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:31.285010   61989 cri.go:89] found id: ""
	I0924 01:07:31.285047   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.285060   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:31.285072   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:31.285087   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:31.323819   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:31.323855   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:31.378348   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:31.378388   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:31.393944   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:31.393983   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:31.464940   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:31.464966   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:31.464978   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:34.042144   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:34.055183   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:34.055268   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:34.103044   61989 cri.go:89] found id: ""
	I0924 01:07:34.103075   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.103086   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:34.103094   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:34.103162   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:34.141379   61989 cri.go:89] found id: ""
	I0924 01:07:34.141412   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.141424   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:34.141432   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:34.141493   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:34.179545   61989 cri.go:89] found id: ""
	I0924 01:07:34.179574   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.179582   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:34.179588   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:34.179655   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:34.217683   61989 cri.go:89] found id: ""
	I0924 01:07:34.217719   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.217739   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:34.217748   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:34.217806   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:34.257597   61989 cri.go:89] found id: ""
	I0924 01:07:34.257630   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.257642   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:34.257651   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:34.257723   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:34.295410   61989 cri.go:89] found id: ""
	I0924 01:07:34.295440   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.295452   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:34.295460   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:34.295523   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:34.331309   61989 cri.go:89] found id: ""
	I0924 01:07:34.331340   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.331350   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:34.331358   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:34.331460   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:34.367549   61989 cri.go:89] found id: ""
	I0924 01:07:34.367580   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.367590   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:34.367601   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:34.367615   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:34.421785   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:34.421823   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:34.435162   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:34.435198   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:34.504051   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:34.504073   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:34.504090   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:34.582343   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:34.582384   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:35.229391   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:37.229522   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:36.527047   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:38.527472   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:36.906974   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:38.907529   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:37.124727   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:37.139374   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:37.139431   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:37.176474   61989 cri.go:89] found id: ""
	I0924 01:07:37.176500   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.176510   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:37.176515   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:37.176560   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:37.209944   61989 cri.go:89] found id: ""
	I0924 01:07:37.209971   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.209983   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:37.209990   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:37.210055   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:37.242894   61989 cri.go:89] found id: ""
	I0924 01:07:37.242923   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.242933   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:37.242941   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:37.242996   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:37.276517   61989 cri.go:89] found id: ""
	I0924 01:07:37.276547   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.276558   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:37.276566   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:37.276626   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:37.310169   61989 cri.go:89] found id: ""
	I0924 01:07:37.310196   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.310207   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:37.310214   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:37.310282   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:37.342992   61989 cri.go:89] found id: ""
	I0924 01:07:37.343019   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.343027   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:37.343035   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:37.343088   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:37.375024   61989 cri.go:89] found id: ""
	I0924 01:07:37.375051   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.375062   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:37.375069   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:37.375137   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:37.409736   61989 cri.go:89] found id: ""
	I0924 01:07:37.409761   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.409768   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:37.409776   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:37.409787   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:37.474744   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:37.474767   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:37.474783   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:37.551479   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:37.551515   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:37.590597   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:37.590632   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:37.642781   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:37.642820   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:40.156480   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:40.171002   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:40.171079   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:40.207383   61989 cri.go:89] found id: ""
	I0924 01:07:40.207410   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.207418   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:40.207424   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:40.207474   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:40.245535   61989 cri.go:89] found id: ""
	I0924 01:07:40.245560   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.245568   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:40.245574   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:40.245620   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:40.283858   61989 cri.go:89] found id: ""
	I0924 01:07:40.283888   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.283900   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:40.283909   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:40.283982   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:40.320527   61989 cri.go:89] found id: ""
	I0924 01:07:40.320555   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.320566   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:40.320575   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:40.320633   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:40.354364   61989 cri.go:89] found id: ""
	I0924 01:07:40.354390   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.354397   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:40.354403   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:40.354473   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:40.388407   61989 cri.go:89] found id: ""
	I0924 01:07:40.388431   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.388439   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:40.388444   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:40.388512   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:40.423809   61989 cri.go:89] found id: ""
	I0924 01:07:40.423838   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.423847   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:40.423853   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:40.423908   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:40.459160   61989 cri.go:89] found id: ""
	I0924 01:07:40.459188   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.459199   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:40.459210   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:40.459223   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:40.530418   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:40.530456   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:40.551644   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:40.551683   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:40.634564   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:40.634587   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:40.634599   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:40.717897   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:40.717934   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:39.728642   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:41.728725   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:40.528294   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:43.028364   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:41.406835   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:43.907015   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:43.257992   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:43.272134   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:43.272204   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:43.306747   61989 cri.go:89] found id: ""
	I0924 01:07:43.306775   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.306797   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:43.306806   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:43.306923   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:43.342922   61989 cri.go:89] found id: ""
	I0924 01:07:43.342954   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.342963   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:43.342974   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:43.343028   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:43.378666   61989 cri.go:89] found id: ""
	I0924 01:07:43.378694   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.378703   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:43.378709   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:43.378760   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:43.414348   61989 cri.go:89] found id: ""
	I0924 01:07:43.414376   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.414387   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:43.414395   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:43.414457   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:43.447687   61989 cri.go:89] found id: ""
	I0924 01:07:43.447718   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.447728   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:43.447735   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:43.447804   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:43.482166   61989 cri.go:89] found id: ""
	I0924 01:07:43.482195   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.482205   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:43.482211   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:43.482275   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:43.518112   61989 cri.go:89] found id: ""
	I0924 01:07:43.518146   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.518159   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:43.518167   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:43.518231   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:43.553853   61989 cri.go:89] found id: ""
	I0924 01:07:43.553875   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.553883   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:43.553891   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:43.553902   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:43.603410   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:43.603445   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:43.616413   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:43.616438   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:43.685077   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:43.685101   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:43.685113   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:43.760758   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:43.760803   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:43.729237   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:46.228084   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:48.228503   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:45.527095   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:47.529540   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:46.407150   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:48.407253   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:46.300532   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:46.315982   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:46.316050   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:46.356523   61989 cri.go:89] found id: ""
	I0924 01:07:46.356554   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.356565   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:46.356573   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:46.356633   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:46.405399   61989 cri.go:89] found id: ""
	I0924 01:07:46.405429   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.405439   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:46.405447   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:46.405512   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:46.454819   61989 cri.go:89] found id: ""
	I0924 01:07:46.454844   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.454853   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:46.454858   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:46.454918   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:46.499094   61989 cri.go:89] found id: ""
	I0924 01:07:46.499123   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.499134   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:46.499142   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:46.499196   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:46.532976   61989 cri.go:89] found id: ""
	I0924 01:07:46.533006   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.533017   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:46.533025   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:46.533083   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:46.565488   61989 cri.go:89] found id: ""
	I0924 01:07:46.565523   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.565534   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:46.565546   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:46.565610   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:46.598457   61989 cri.go:89] found id: ""
	I0924 01:07:46.598486   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.598496   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:46.598503   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:46.598551   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:46.631892   61989 cri.go:89] found id: ""
	I0924 01:07:46.631920   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.631931   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:46.631941   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:46.631956   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:46.709966   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:46.710013   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:46.749154   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:46.749184   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:46.798192   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:46.798228   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:46.811902   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:46.811951   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:46.885878   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:49.386775   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:49.399324   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:49.399383   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:49.437061   61989 cri.go:89] found id: ""
	I0924 01:07:49.437092   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.437104   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:49.437111   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:49.437160   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:49.470882   61989 cri.go:89] found id: ""
	I0924 01:07:49.470908   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.470919   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:49.470927   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:49.470989   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:49.506894   61989 cri.go:89] found id: ""
	I0924 01:07:49.506926   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.506938   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:49.506947   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:49.507018   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:49.540768   61989 cri.go:89] found id: ""
	I0924 01:07:49.540800   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.540813   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:49.540822   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:49.540888   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:49.576486   61989 cri.go:89] found id: ""
	I0924 01:07:49.576515   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.576523   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:49.576530   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:49.576579   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:49.612456   61989 cri.go:89] found id: ""
	I0924 01:07:49.612479   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.612487   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:49.612495   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:49.612542   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:49.646085   61989 cri.go:89] found id: ""
	I0924 01:07:49.646118   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.646127   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:49.646132   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:49.646178   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:49.682538   61989 cri.go:89] found id: ""
	I0924 01:07:49.682565   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.682574   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:49.682583   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:49.682594   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:49.721791   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:49.721817   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:49.774842   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:49.774889   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:49.789082   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:49.789129   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:49.866437   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:49.866464   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:49.866478   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:50.727581   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:52.729391   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:50.027396   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:52.028176   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:50.407654   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:52.908118   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:52.445166   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:52.459060   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:52.459126   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:52.496521   61989 cri.go:89] found id: ""
	I0924 01:07:52.496550   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.496562   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:52.496571   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:52.496652   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:52.533575   61989 cri.go:89] found id: ""
	I0924 01:07:52.533600   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.533608   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:52.533615   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:52.533693   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:52.571666   61989 cri.go:89] found id: ""
	I0924 01:07:52.571693   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.571703   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:52.571710   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:52.571758   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:52.603929   61989 cri.go:89] found id: ""
	I0924 01:07:52.603957   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.603968   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:52.603976   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:52.604034   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:52.635581   61989 cri.go:89] found id: ""
	I0924 01:07:52.635607   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.635614   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:52.635620   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:52.635669   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:52.673865   61989 cri.go:89] found id: ""
	I0924 01:07:52.673889   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.673897   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:52.673903   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:52.673953   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:52.709885   61989 cri.go:89] found id: ""
	I0924 01:07:52.709910   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.709918   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:52.709925   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:52.709986   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:52.746409   61989 cri.go:89] found id: ""
	I0924 01:07:52.746439   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.746450   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:52.746461   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:52.746475   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:52.798020   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:52.798054   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:52.811940   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:52.811967   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:52.888091   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:52.888114   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:52.888129   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:52.968955   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:52.969000   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:55.507204   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:55.520581   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:55.520657   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:55.555772   61989 cri.go:89] found id: ""
	I0924 01:07:55.555809   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.555821   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:55.555828   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:55.555880   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:55.593765   61989 cri.go:89] found id: ""
	I0924 01:07:55.593791   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.593802   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:55.593808   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:55.593866   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:55.630292   61989 cri.go:89] found id: ""
	I0924 01:07:55.630325   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.630337   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:55.630344   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:55.630408   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:55.665703   61989 cri.go:89] found id: ""
	I0924 01:07:55.665730   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.665741   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:55.665748   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:55.665807   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:55.701911   61989 cri.go:89] found id: ""
	I0924 01:07:55.701938   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.701949   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:55.701957   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:55.702020   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:55.734343   61989 cri.go:89] found id: ""
	I0924 01:07:55.734373   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.734385   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:55.734394   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:55.734460   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:55.768606   61989 cri.go:89] found id: ""
	I0924 01:07:55.768633   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.768645   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:55.768653   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:55.768716   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:55.800720   61989 cri.go:89] found id: ""
	I0924 01:07:55.800747   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.800757   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:55.800768   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:55.800782   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:55.851702   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:55.851737   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:55.865657   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:55.865687   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:55.940175   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:55.940197   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:55.940207   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:55.227954   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:57.228969   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:54.528417   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:56.529326   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:59.027653   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:55.407038   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:57.906886   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:56.015832   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:56.015870   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:58.557571   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:58.572208   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:58.572274   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:58.605081   61989 cri.go:89] found id: ""
	I0924 01:07:58.605109   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.605121   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:58.605128   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:58.605185   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:58.641518   61989 cri.go:89] found id: ""
	I0924 01:07:58.641548   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.641559   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:58.641566   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:58.641617   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:58.680623   61989 cri.go:89] found id: ""
	I0924 01:07:58.680653   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.680664   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:58.680675   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:58.680735   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:58.713658   61989 cri.go:89] found id: ""
	I0924 01:07:58.713684   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.713693   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:58.713700   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:58.713754   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:58.746264   61989 cri.go:89] found id: ""
	I0924 01:07:58.746298   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.746307   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:58.746313   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:58.746358   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:58.779812   61989 cri.go:89] found id: ""
	I0924 01:07:58.779846   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.779912   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:58.779924   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:58.779984   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:58.813203   61989 cri.go:89] found id: ""
	I0924 01:07:58.813236   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.813245   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:58.813252   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:58.813303   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:58.845872   61989 cri.go:89] found id: ""
	I0924 01:07:58.845898   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.845906   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:58.845915   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:58.845925   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:58.897480   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:58.897515   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:58.912904   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:58.912936   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:58.982882   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:58.982908   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:58.982921   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:59.058495   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:59.058535   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:59.729215   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:02.228358   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:01.028678   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:03.527682   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:00.407897   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:02.907608   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:04.907717   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:01.596672   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:01.609550   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:01.609625   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:01.648819   61989 cri.go:89] found id: ""
	I0924 01:08:01.648847   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.648857   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:01.648864   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:01.649000   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:01.685419   61989 cri.go:89] found id: ""
	I0924 01:08:01.685450   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.685458   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:01.685464   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:01.685533   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:01.720426   61989 cri.go:89] found id: ""
	I0924 01:08:01.720455   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.720464   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:01.720473   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:01.720537   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:01.755292   61989 cri.go:89] found id: ""
	I0924 01:08:01.755316   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.755324   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:01.755331   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:01.755398   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:01.788673   61989 cri.go:89] found id: ""
	I0924 01:08:01.788703   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.788713   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:01.788721   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:01.788789   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:01.824724   61989 cri.go:89] found id: ""
	I0924 01:08:01.824761   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.824773   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:01.824781   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:01.824838   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:01.858492   61989 cri.go:89] found id: ""
	I0924 01:08:01.858531   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.858542   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:01.858556   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:01.858623   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:01.892135   61989 cri.go:89] found id: ""
	I0924 01:08:01.892167   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.892177   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:01.892192   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:01.892205   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:01.905820   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:01.905849   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:01.977998   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:01.978026   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:01.978039   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:02.060441   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:02.060480   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:02.100029   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:02.100057   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:04.653124   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:04.665726   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:04.665784   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:04.700755   61989 cri.go:89] found id: ""
	I0924 01:08:04.700785   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.700796   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:04.700804   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:04.700858   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:04.736955   61989 cri.go:89] found id: ""
	I0924 01:08:04.736983   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.736992   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:04.736998   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:04.737051   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:04.770940   61989 cri.go:89] found id: ""
	I0924 01:08:04.770969   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.770977   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:04.770983   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:04.771051   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:04.805376   61989 cri.go:89] found id: ""
	I0924 01:08:04.805403   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.805411   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:04.805417   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:04.805471   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:04.840995   61989 cri.go:89] found id: ""
	I0924 01:08:04.841016   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.841024   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:04.841030   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:04.841077   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:04.875418   61989 cri.go:89] found id: ""
	I0924 01:08:04.875449   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.875460   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:04.875468   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:04.875546   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:04.910675   61989 cri.go:89] found id: ""
	I0924 01:08:04.910696   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.910704   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:04.910710   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:04.910764   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:04.945531   61989 cri.go:89] found id: ""
	I0924 01:08:04.945562   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.945570   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:04.945578   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:04.945589   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:04.997696   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:04.997734   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:05.011296   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:05.011329   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:05.087878   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:05.087905   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:05.087919   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:05.164073   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:05.164111   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:04.228985   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:06.734525   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:06.031377   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:08.528160   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:06.908017   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:09.407255   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:07.713496   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:07.726590   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:07.726649   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:07.760050   61989 cri.go:89] found id: ""
	I0924 01:08:07.760081   61989 logs.go:276] 0 containers: []
	W0924 01:08:07.760092   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:07.760100   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:07.760152   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:07.797709   61989 cri.go:89] found id: ""
	I0924 01:08:07.797736   61989 logs.go:276] 0 containers: []
	W0924 01:08:07.797744   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:07.797749   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:07.797803   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:07.836351   61989 cri.go:89] found id: ""
	I0924 01:08:07.836380   61989 logs.go:276] 0 containers: []
	W0924 01:08:07.836391   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:07.836399   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:07.836471   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:07.871133   61989 cri.go:89] found id: ""
	I0924 01:08:07.871159   61989 logs.go:276] 0 containers: []
	W0924 01:08:07.871170   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:07.871178   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:07.871229   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:07.906640   61989 cri.go:89] found id: ""
	I0924 01:08:07.906663   61989 logs.go:276] 0 containers: []
	W0924 01:08:07.906673   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:07.906682   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:07.906741   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:07.940919   61989 cri.go:89] found id: ""
	I0924 01:08:07.940945   61989 logs.go:276] 0 containers: []
	W0924 01:08:07.940953   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:07.940959   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:07.941018   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:07.975533   61989 cri.go:89] found id: ""
	I0924 01:08:07.975562   61989 logs.go:276] 0 containers: []
	W0924 01:08:07.975570   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:07.975576   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:07.975627   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:08.009137   61989 cri.go:89] found id: ""
	I0924 01:08:08.009163   61989 logs.go:276] 0 containers: []
	W0924 01:08:08.009173   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:08.009183   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:08.009196   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:08.065199   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:08.065252   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:08.080159   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:08.080188   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:08.154003   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:08.154025   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:08.154039   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:08.235522   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:08.235561   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:10.774666   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:10.787704   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:10.787775   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:10.822721   61989 cri.go:89] found id: ""
	I0924 01:08:10.822759   61989 logs.go:276] 0 containers: []
	W0924 01:08:10.822770   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:10.822781   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:10.822852   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:10.857113   61989 cri.go:89] found id: ""
	I0924 01:08:10.857138   61989 logs.go:276] 0 containers: []
	W0924 01:08:10.857146   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:10.857152   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:10.857201   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:10.890974   61989 cri.go:89] found id: ""
	I0924 01:08:10.891001   61989 logs.go:276] 0 containers: []
	W0924 01:08:10.891012   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:10.891020   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:10.891086   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:10.929771   61989 cri.go:89] found id: ""
	I0924 01:08:10.929793   61989 logs.go:276] 0 containers: []
	W0924 01:08:10.929800   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:10.929806   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:10.929851   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:10.961988   61989 cri.go:89] found id: ""
	I0924 01:08:10.962015   61989 logs.go:276] 0 containers: []
	W0924 01:08:10.962027   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:10.962035   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:10.962100   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:09.228600   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:11.729142   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:10.528626   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:13.027656   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:11.906981   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:13.907232   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:10.993591   61989 cri.go:89] found id: ""
	I0924 01:08:10.993622   61989 logs.go:276] 0 containers: []
	W0924 01:08:10.993633   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:10.993639   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:10.993691   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:11.032468   61989 cri.go:89] found id: ""
	I0924 01:08:11.032496   61989 logs.go:276] 0 containers: []
	W0924 01:08:11.032506   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:11.032514   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:11.032576   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:11.066900   61989 cri.go:89] found id: ""
	I0924 01:08:11.066924   61989 logs.go:276] 0 containers: []
	W0924 01:08:11.066931   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:11.066939   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:11.066950   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:11.136412   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:11.136443   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:11.136459   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:11.218326   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:11.218361   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:11.260695   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:11.260728   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:11.310102   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:11.310133   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:13.825540   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:13.838208   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:13.838283   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:13.874539   61989 cri.go:89] found id: ""
	I0924 01:08:13.874567   61989 logs.go:276] 0 containers: []
	W0924 01:08:13.874576   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:13.874581   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:13.874628   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:13.911818   61989 cri.go:89] found id: ""
	I0924 01:08:13.911839   61989 logs.go:276] 0 containers: []
	W0924 01:08:13.911846   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:13.911852   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:13.911897   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:13.944766   61989 cri.go:89] found id: ""
	I0924 01:08:13.944789   61989 logs.go:276] 0 containers: []
	W0924 01:08:13.944797   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:13.944802   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:13.944847   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:13.980712   61989 cri.go:89] found id: ""
	I0924 01:08:13.980742   61989 logs.go:276] 0 containers: []
	W0924 01:08:13.980752   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:13.980758   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:13.980817   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:14.016103   61989 cri.go:89] found id: ""
	I0924 01:08:14.016130   61989 logs.go:276] 0 containers: []
	W0924 01:08:14.016138   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:14.016143   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:14.016192   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:14.051884   61989 cri.go:89] found id: ""
	I0924 01:08:14.051929   61989 logs.go:276] 0 containers: []
	W0924 01:08:14.051943   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:14.051954   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:14.052046   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:14.088928   61989 cri.go:89] found id: ""
	I0924 01:08:14.088954   61989 logs.go:276] 0 containers: []
	W0924 01:08:14.088964   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:14.088970   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:14.089020   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:14.123057   61989 cri.go:89] found id: ""
	I0924 01:08:14.123083   61989 logs.go:276] 0 containers: []
	W0924 01:08:14.123091   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:14.123099   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:14.123112   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:14.174249   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:14.174287   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:14.188409   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:14.188442   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:14.258906   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:14.258932   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:14.258942   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:14.340891   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:14.340928   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:14.229459   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:16.728316   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:15.028158   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:17.527615   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:15.907490   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:17.907845   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:19.901512   61323 pod_ready.go:82] duration metric: took 4m0.001092501s for pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace to be "Ready" ...
	E0924 01:08:19.901552   61323 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace to be "Ready" (will not retry!)
	I0924 01:08:19.901576   61323 pod_ready.go:39] duration metric: took 4m10.04955154s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:08:19.901606   61323 kubeadm.go:597] duration metric: took 4m18.184472182s to restartPrimaryControlPlane
	W0924 01:08:19.901701   61323 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0924 01:08:19.901736   61323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0924 01:08:16.877728   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:16.890548   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:16.890617   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:16.924414   61989 cri.go:89] found id: ""
	I0924 01:08:16.924439   61989 logs.go:276] 0 containers: []
	W0924 01:08:16.924451   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:16.924458   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:16.924510   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:16.960295   61989 cri.go:89] found id: ""
	I0924 01:08:16.960323   61989 logs.go:276] 0 containers: []
	W0924 01:08:16.960344   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:16.960352   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:16.960405   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:16.993171   61989 cri.go:89] found id: ""
	I0924 01:08:16.993204   61989 logs.go:276] 0 containers: []
	W0924 01:08:16.993216   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:16.993224   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:16.993287   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:17.028122   61989 cri.go:89] found id: ""
	I0924 01:08:17.028150   61989 logs.go:276] 0 containers: []
	W0924 01:08:17.028160   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:17.028169   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:17.028261   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:17.068401   61989 cri.go:89] found id: ""
	I0924 01:08:17.068440   61989 logs.go:276] 0 containers: []
	W0924 01:08:17.068451   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:17.068458   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:17.068530   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:17.104250   61989 cri.go:89] found id: ""
	I0924 01:08:17.104275   61989 logs.go:276] 0 containers: []
	W0924 01:08:17.104283   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:17.104299   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:17.104370   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:17.139178   61989 cri.go:89] found id: ""
	I0924 01:08:17.139201   61989 logs.go:276] 0 containers: []
	W0924 01:08:17.139209   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:17.139215   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:17.139288   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:17.172677   61989 cri.go:89] found id: ""
	I0924 01:08:17.172703   61989 logs.go:276] 0 containers: []
	W0924 01:08:17.172712   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:17.172727   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:17.172742   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:17.222039   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:17.222082   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:17.235342   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:17.235371   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:17.300313   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:17.300350   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:17.300366   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:17.382465   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:17.382517   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
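The cycle above is the diagnostic pass minikube repeats while the v1.20.0 control plane stays down: it probes each expected component container via crictl, then collects kubelet, dmesg, describe-nodes, CRI-O, and container-status output. A minimal sketch for reproducing the same checks by hand, using the commands as they appear in this log (assumes shell access to the node, e.g. via minikube ssh):

    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u crio -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a

The recurring "connection to the server localhost:8443 was refused" from describe nodes is consistent with the apiserver container never starting, which is also why every crictl query below returns an empty id list.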
	I0924 01:08:19.924928   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:19.941406   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:19.941496   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:19.976196   61989 cri.go:89] found id: ""
	I0924 01:08:19.976224   61989 logs.go:276] 0 containers: []
	W0924 01:08:19.976238   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:19.976247   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:19.976314   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:20.019652   61989 cri.go:89] found id: ""
	I0924 01:08:20.019680   61989 logs.go:276] 0 containers: []
	W0924 01:08:20.019692   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:20.019699   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:20.019757   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:20.055098   61989 cri.go:89] found id: ""
	I0924 01:08:20.055123   61989 logs.go:276] 0 containers: []
	W0924 01:08:20.055130   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:20.055135   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:20.055183   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:20.091428   61989 cri.go:89] found id: ""
	I0924 01:08:20.091458   61989 logs.go:276] 0 containers: []
	W0924 01:08:20.091469   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:20.091476   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:20.091532   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:20.123608   61989 cri.go:89] found id: ""
	I0924 01:08:20.123642   61989 logs.go:276] 0 containers: []
	W0924 01:08:20.123653   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:20.123678   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:20.123745   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:20.165885   61989 cri.go:89] found id: ""
	I0924 01:08:20.165913   61989 logs.go:276] 0 containers: []
	W0924 01:08:20.165926   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:20.165934   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:20.165985   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:20.199300   61989 cri.go:89] found id: ""
	I0924 01:08:20.199329   61989 logs.go:276] 0 containers: []
	W0924 01:08:20.199341   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:20.199348   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:20.199415   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:20.237201   61989 cri.go:89] found id: ""
	I0924 01:08:20.237253   61989 logs.go:276] 0 containers: []
	W0924 01:08:20.237262   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:20.237271   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:20.237284   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:20.285008   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:20.285049   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:20.298974   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:20.299014   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:20.385765   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:20.385793   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:20.385807   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:20.460715   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:20.460752   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:19.227947   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:21.228448   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:23.229022   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:19.527785   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:21.528095   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:23.528420   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:23.000163   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:23.014755   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:23.014828   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:23.048877   61989 cri.go:89] found id: ""
	I0924 01:08:23.048909   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.048921   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:23.048979   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:23.049049   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:23.085614   61989 cri.go:89] found id: ""
	I0924 01:08:23.085643   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.085650   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:23.085658   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:23.085718   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:23.122027   61989 cri.go:89] found id: ""
	I0924 01:08:23.122060   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.122071   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:23.122078   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:23.122136   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:23.156838   61989 cri.go:89] found id: ""
	I0924 01:08:23.156868   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.156879   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:23.156887   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:23.156947   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:23.191528   61989 cri.go:89] found id: ""
	I0924 01:08:23.191569   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.191579   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:23.191586   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:23.191651   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:23.227627   61989 cri.go:89] found id: ""
	I0924 01:08:23.227651   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.227659   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:23.227665   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:23.227709   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:23.261937   61989 cri.go:89] found id: ""
	I0924 01:08:23.261968   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.261980   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:23.261988   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:23.262039   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:23.297947   61989 cri.go:89] found id: ""
	I0924 01:08:23.297973   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.297986   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:23.297997   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:23.298009   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:23.337783   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:23.337811   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:23.390767   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:23.390808   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:23.404787   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:23.404814   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:23.478768   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:23.478788   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:23.478801   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:25.728154   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:28.227795   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:25.529710   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:28.028153   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:26.060593   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:26.085071   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:26.085137   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:26.121785   61989 cri.go:89] found id: ""
	I0924 01:08:26.121814   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.121826   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:26.121834   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:26.121900   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:26.167942   61989 cri.go:89] found id: ""
	I0924 01:08:26.167971   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.167980   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:26.167991   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:26.168054   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:26.206461   61989 cri.go:89] found id: ""
	I0924 01:08:26.206496   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.206506   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:26.206513   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:26.206582   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:26.243094   61989 cri.go:89] found id: ""
	I0924 01:08:26.243125   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.243136   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:26.243144   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:26.243206   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:26.279303   61989 cri.go:89] found id: ""
	I0924 01:08:26.279331   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.279341   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:26.279348   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:26.279407   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:26.311840   61989 cri.go:89] found id: ""
	I0924 01:08:26.311869   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.311880   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:26.311888   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:26.311954   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:26.345994   61989 cri.go:89] found id: ""
	I0924 01:08:26.346019   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.346027   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:26.346033   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:26.346082   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:26.380570   61989 cri.go:89] found id: ""
	I0924 01:08:26.380601   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.380610   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:26.380619   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:26.380630   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:26.429958   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:26.429993   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:26.443278   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:26.443312   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:26.516353   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:26.516375   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:26.516390   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:26.603310   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:26.603345   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:29.142531   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:29.156548   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:29.156634   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:29.191351   61989 cri.go:89] found id: ""
	I0924 01:08:29.191378   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.191389   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:29.191396   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:29.191451   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:29.232112   61989 cri.go:89] found id: ""
	I0924 01:08:29.232141   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.232152   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:29.232159   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:29.232214   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:29.266082   61989 cri.go:89] found id: ""
	I0924 01:08:29.266104   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.266112   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:29.266118   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:29.266178   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:29.299777   61989 cri.go:89] found id: ""
	I0924 01:08:29.299802   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.299812   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:29.299817   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:29.299883   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:29.342709   61989 cri.go:89] found id: ""
	I0924 01:08:29.342740   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.342749   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:29.342756   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:29.342816   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:29.381255   61989 cri.go:89] found id: ""
	I0924 01:08:29.381303   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.381312   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:29.381318   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:29.381375   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:29.414998   61989 cri.go:89] found id: ""
	I0924 01:08:29.415028   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.415036   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:29.415043   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:29.415101   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:29.448553   61989 cri.go:89] found id: ""
	I0924 01:08:29.448580   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.448589   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:29.448598   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:29.448608   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:29.534936   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:29.535001   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:29.573554   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:29.573584   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:29.623590   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:29.623626   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:29.636141   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:29.636167   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:29.700591   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:30.228993   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:32.229458   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:30.528150   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:33.029011   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:32.201184   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:32.215034   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:32.215102   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:32.250990   61989 cri.go:89] found id: ""
	I0924 01:08:32.251016   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.251026   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:32.251033   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:32.251104   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:32.284448   61989 cri.go:89] found id: ""
	I0924 01:08:32.284483   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.284494   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:32.284504   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:32.284570   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:32.317979   61989 cri.go:89] found id: ""
	I0924 01:08:32.318004   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.318015   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:32.318022   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:32.318078   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:32.352057   61989 cri.go:89] found id: ""
	I0924 01:08:32.352082   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.352093   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:32.352101   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:32.352163   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:32.385459   61989 cri.go:89] found id: ""
	I0924 01:08:32.385482   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.385490   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:32.385496   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:32.385544   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:32.421189   61989 cri.go:89] found id: ""
	I0924 01:08:32.421217   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.421227   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:32.421235   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:32.421307   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:32.464375   61989 cri.go:89] found id: ""
	I0924 01:08:32.464399   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.464406   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:32.464412   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:32.464457   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:32.512716   61989 cri.go:89] found id: ""
	I0924 01:08:32.512742   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.512753   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:32.512763   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:32.512788   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:32.598271   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:32.598293   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:32.598305   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:32.674197   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:32.674233   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:32.715065   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:32.715092   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:32.767522   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:32.767565   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:35.281678   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:35.296302   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:35.296390   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:35.336341   61989 cri.go:89] found id: ""
	I0924 01:08:35.336370   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.336381   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:35.336397   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:35.336454   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:35.373090   61989 cri.go:89] found id: ""
	I0924 01:08:35.373118   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.373127   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:35.373135   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:35.373201   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:35.413628   61989 cri.go:89] found id: ""
	I0924 01:08:35.413660   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.413668   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:35.413674   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:35.413720   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:35.446564   61989 cri.go:89] found id: ""
	I0924 01:08:35.446592   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.446603   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:35.446610   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:35.446669   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:35.478389   61989 cri.go:89] found id: ""
	I0924 01:08:35.478424   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.478435   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:35.478444   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:35.478515   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:35.513992   61989 cri.go:89] found id: ""
	I0924 01:08:35.514015   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.514023   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:35.514029   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:35.514085   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:35.556442   61989 cri.go:89] found id: ""
	I0924 01:08:35.556471   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.556481   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:35.556489   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:35.556571   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:35.594205   61989 cri.go:89] found id: ""
	I0924 01:08:35.594228   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.594236   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:35.594244   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:35.594254   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:35.637601   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:35.637640   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:35.691674   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:35.691711   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:35.705223   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:35.705261   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:35.784000   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:35.784021   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:35.784036   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:34.729064   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:37.227314   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:35.528382   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:38.028508   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:38.370232   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:38.383287   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:38.383358   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:38.417528   61989 cri.go:89] found id: ""
	I0924 01:08:38.417556   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.417564   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:38.417571   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:38.417619   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:38.459788   61989 cri.go:89] found id: ""
	I0924 01:08:38.459814   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.459821   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:38.459828   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:38.459883   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:38.494017   61989 cri.go:89] found id: ""
	I0924 01:08:38.494050   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.494059   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:38.494065   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:38.494135   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:38.526894   61989 cri.go:89] found id: ""
	I0924 01:08:38.526924   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.526935   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:38.526942   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:38.527000   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:38.563831   61989 cri.go:89] found id: ""
	I0924 01:08:38.563859   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.563876   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:38.563884   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:38.563950   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:38.596066   61989 cri.go:89] found id: ""
	I0924 01:08:38.596095   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.596106   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:38.596114   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:38.596172   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:38.630123   61989 cri.go:89] found id: ""
	I0924 01:08:38.630147   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.630157   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:38.630165   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:38.630223   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:38.664714   61989 cri.go:89] found id: ""
	I0924 01:08:38.664743   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.664754   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:38.664765   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:38.664782   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:38.718770   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:38.718802   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:38.732878   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:38.732906   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:38.806441   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:38.806469   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:38.806485   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:38.884416   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:38.884456   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:39.228048   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:41.228574   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:40.527354   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:42.528592   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:41.423782   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:41.436827   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:41.436899   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:41.468283   61989 cri.go:89] found id: ""
	I0924 01:08:41.468316   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.468342   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:41.468353   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:41.468412   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:41.504348   61989 cri.go:89] found id: ""
	I0924 01:08:41.504380   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.504402   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:41.504410   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:41.504470   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:41.544785   61989 cri.go:89] found id: ""
	I0924 01:08:41.544809   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.544818   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:41.544825   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:41.544883   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:41.582924   61989 cri.go:89] found id: ""
	I0924 01:08:41.582954   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.582965   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:41.582973   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:41.583037   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:41.618220   61989 cri.go:89] found id: ""
	I0924 01:08:41.618243   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.618253   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:41.618260   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:41.618329   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:41.653369   61989 cri.go:89] found id: ""
	I0924 01:08:41.653392   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.653400   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:41.653416   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:41.653477   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:41.687036   61989 cri.go:89] found id: ""
	I0924 01:08:41.687058   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.687069   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:41.687077   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:41.687133   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:41.720701   61989 cri.go:89] found id: ""
	I0924 01:08:41.720732   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.720744   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:41.720756   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:41.720776   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:41.798436   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:41.798486   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:41.842639   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:41.842674   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:41.893053   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:41.893086   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:41.907757   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:41.907784   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:41.973466   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:44.474071   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:44.487057   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:44.487119   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:44.521772   61989 cri.go:89] found id: ""
	I0924 01:08:44.521813   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.521835   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:44.521843   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:44.521905   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:44.554928   61989 cri.go:89] found id: ""
	I0924 01:08:44.554956   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.554966   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:44.554977   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:44.555042   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:44.594246   61989 cri.go:89] found id: ""
	I0924 01:08:44.594279   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.594292   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:44.594298   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:44.594344   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:44.629779   61989 cri.go:89] found id: ""
	I0924 01:08:44.629807   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.629819   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:44.629827   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:44.629884   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:44.671671   61989 cri.go:89] found id: ""
	I0924 01:08:44.671694   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.671701   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:44.671707   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:44.671772   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:44.710875   61989 cri.go:89] found id: ""
	I0924 01:08:44.710910   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.710922   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:44.710931   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:44.711000   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:44.744345   61989 cri.go:89] found id: ""
	I0924 01:08:44.744381   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.744389   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:44.744395   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:44.744442   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:44.780771   61989 cri.go:89] found id: ""
	I0924 01:08:44.780797   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.780804   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:44.780812   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:44.780824   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:44.834902   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:44.834958   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:44.848503   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:44.848540   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:44.923117   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:44.923138   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:44.923150   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:45.003806   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:45.003840   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:46.184585   61323 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.282824063s)
	I0924 01:08:46.184659   61323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 01:08:46.201715   61323 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 01:08:46.215637   61323 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 01:08:46.228701   61323 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 01:08:46.228726   61323 kubeadm.go:157] found existing configuration files:
	
	I0924 01:08:46.228769   61323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 01:08:46.239005   61323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 01:08:46.239065   61323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 01:08:46.250336   61323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 01:08:46.259889   61323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 01:08:46.259961   61323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 01:08:46.271773   61323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 01:08:46.283106   61323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 01:08:46.283175   61323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 01:08:46.293325   61323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 01:08:46.306026   61323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 01:08:46.306111   61323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
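The status-2 failures above are expected at this point: kubeadm reset removed the kubeconfig files, so the stale-config check finds nothing and minikube deletes each file whose grep for the control-plane endpoint fails before re-running kubeadm init. A condensed sketch of that check (minikube runs the four greps individually; paths and endpoint taken from the log):

    sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f.conf \
        || sudo rm -f /etc/kubernetes/$f.conf
    done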
	I0924 01:08:46.318859   61323 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 01:08:46.373819   61323 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0924 01:08:46.373973   61323 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 01:08:46.487006   61323 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 01:08:46.487146   61323 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 01:08:46.487299   61323 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0924 01:08:46.495557   61323 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 01:08:46.497537   61323 out.go:235]   - Generating certificates and keys ...
	I0924 01:08:46.497645   61323 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 01:08:46.497732   61323 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 01:08:46.497853   61323 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 01:08:46.497946   61323 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 01:08:46.498041   61323 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 01:08:46.498116   61323 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 01:08:46.498197   61323 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 01:08:46.498280   61323 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 01:08:46.498389   61323 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 01:08:46.498490   61323 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 01:08:46.498547   61323 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 01:08:46.498623   61323 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 01:08:46.714556   61323 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 01:08:46.815030   61323 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0924 01:08:47.011082   61323 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 01:08:47.227052   61323 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 01:08:47.488776   61323 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 01:08:47.489403   61323 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 01:08:47.491864   61323 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 01:08:43.728646   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:46.234412   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:45.029064   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:45.029109   61699 pod_ready.go:82] duration metric: took 4m0.007887151s for pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace to be "Ready" ...
	E0924 01:08:45.029124   61699 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0924 01:08:45.029133   61699 pod_ready.go:39] duration metric: took 4m5.860472412s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:08:45.029153   61699 api_server.go:52] waiting for apiserver process to appear ...
	I0924 01:08:45.029189   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:45.029267   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:45.084875   61699 cri.go:89] found id: "306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7"
	I0924 01:08:45.084899   61699 cri.go:89] found id: ""
	I0924 01:08:45.084907   61699 logs.go:276] 1 containers: [306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7]
	I0924 01:08:45.084955   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:45.089534   61699 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:45.089601   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:45.133457   61699 cri.go:89] found id: "2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2"
	I0924 01:08:45.133479   61699 cri.go:89] found id: ""
	I0924 01:08:45.133486   61699 logs.go:276] 1 containers: [2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2]
	I0924 01:08:45.133544   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:45.137523   61699 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:45.137586   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:45.173989   61699 cri.go:89] found id: "ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f"
	I0924 01:08:45.174014   61699 cri.go:89] found id: ""
	I0924 01:08:45.174023   61699 logs.go:276] 1 containers: [ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f]
	I0924 01:08:45.174083   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:45.178084   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:45.178168   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:45.215763   61699 cri.go:89] found id: "58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f"
	I0924 01:08:45.215790   61699 cri.go:89] found id: ""
	I0924 01:08:45.215799   61699 logs.go:276] 1 containers: [58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f]
	I0924 01:08:45.215851   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:45.220052   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:45.220137   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:45.258186   61699 cri.go:89] found id: "f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc"
	I0924 01:08:45.258206   61699 cri.go:89] found id: ""
	I0924 01:08:45.258213   61699 logs.go:276] 1 containers: [f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc]
	I0924 01:08:45.258272   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:45.262402   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:45.262481   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:45.299355   61699 cri.go:89] found id: "55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba"
	I0924 01:08:45.299385   61699 cri.go:89] found id: ""
	I0924 01:08:45.299397   61699 logs.go:276] 1 containers: [55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba]
	I0924 01:08:45.299452   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:45.303404   61699 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:45.303505   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:45.341412   61699 cri.go:89] found id: ""
	I0924 01:08:45.341438   61699 logs.go:276] 0 containers: []
	W0924 01:08:45.341446   61699 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:45.341452   61699 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0924 01:08:45.341508   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0924 01:08:45.377419   61699 cri.go:89] found id: "7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47"
	I0924 01:08:45.377450   61699 cri.go:89] found id: "e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559"
	I0924 01:08:45.377457   61699 cri.go:89] found id: ""
	I0924 01:08:45.377471   61699 logs.go:276] 2 containers: [7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47 e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559]
	I0924 01:08:45.377539   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:45.381497   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:45.385182   61699 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:45.385201   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:45.455618   61699 logs.go:123] Gathering logs for coredns [ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f] ...
	I0924 01:08:45.455661   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f"
	I0924 01:08:45.495007   61699 logs.go:123] Gathering logs for kube-proxy [f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc] ...
	I0924 01:08:45.495037   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc"
	I0924 01:08:45.530196   61699 logs.go:123] Gathering logs for kube-controller-manager [55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba] ...
	I0924 01:08:45.530230   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba"
	I0924 01:08:45.581671   61699 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:45.581709   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:46.122674   61699 logs.go:123] Gathering logs for container status ...
	I0924 01:08:46.122717   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:46.169928   61699 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:46.169965   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:46.184617   61699 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:46.184645   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 01:08:46.330482   61699 logs.go:123] Gathering logs for kube-apiserver [306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7] ...
	I0924 01:08:46.330512   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7"
	I0924 01:08:46.382927   61699 logs.go:123] Gathering logs for etcd [2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2] ...
	I0924 01:08:46.382965   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2"
	I0924 01:08:46.441408   61699 logs.go:123] Gathering logs for kube-scheduler [58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f] ...
	I0924 01:08:46.441442   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f"
	I0924 01:08:46.484956   61699 logs.go:123] Gathering logs for storage-provisioner [7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47] ...
	I0924 01:08:46.484985   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47"
	I0924 01:08:46.522559   61699 logs.go:123] Gathering logs for storage-provisioner [e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559] ...
	I0924 01:08:46.522595   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559"
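The log-gathering pass above follows a two-step pattern: resolve container IDs with `crictl ps -a --quiet --name=<component>`, then tail each container's logs with `crictl logs --tail 400 <id>`. The sketch below reproduces that pattern locally as an illustration; minikube itself drives these commands over SSH via its ssh_runner, and the tailComponentLogs helper here is hypothetical.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // tailComponentLogs lists the container IDs for a component, then tails each
    // container's logs, matching the two-step trace above. Illustrative sketch only.
    func tailComponentLogs(name string, tail int) error {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return err
        }
        for _, id := range strings.Fields(string(out)) {
            logs, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(tail), id).CombinedOutput()
            if err != nil {
                return err
            }
            fmt.Printf("==> %s [%s]\n%s\n", name, id, logs)
        }
        return nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager", "storage-provisioner"} {
            _ = tailComponentLogs(c, 400)
        }
    }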
	I0924 01:08:49.064954   61699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:49.086621   61699 api_server.go:72] duration metric: took 4m15.650065328s to wait for apiserver process to appear ...
	I0924 01:08:49.086648   61699 api_server.go:88] waiting for apiserver healthz status ...
	I0924 01:08:49.086695   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:49.086760   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:47.541843   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:47.555428   61989 kubeadm.go:597] duration metric: took 4m2.297219084s to restartPrimaryControlPlane
	W0924 01:08:47.555528   61989 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0924 01:08:47.555560   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0924 01:08:49.123410   61989 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.567825503s)
	I0924 01:08:49.123501   61989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 01:08:49.142686   61989 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 01:08:49.154484   61989 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 01:08:49.166734   61989 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 01:08:49.166759   61989 kubeadm.go:157] found existing configuration files:
	
	I0924 01:08:49.166813   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 01:08:49.178374   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 01:08:49.178517   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 01:08:49.188871   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 01:08:49.200190   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 01:08:49.200258   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 01:08:49.212895   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 01:08:49.225205   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 01:08:49.225276   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 01:08:49.237828   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 01:08:49.249686   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 01:08:49.249751   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 01:08:49.262505   61989 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 01:08:49.338624   61989 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0924 01:08:49.338712   61989 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 01:08:49.509271   61989 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 01:08:49.509489   61989 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 01:08:49.509636   61989 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	(sic: "in beforehand" is kubeadm v1.20.0's own wording; later releases print "beforehand".)
	I0924 01:08:49.724434   61989 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 01:08:47.494323   61323 out.go:235]   - Booting up control plane ...
	I0924 01:08:47.494449   61323 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 01:08:47.494527   61323 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 01:08:47.494904   61323 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 01:08:47.511889   61323 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 01:08:47.518272   61323 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 01:08:47.518343   61323 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 01:08:47.654121   61323 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0924 01:08:47.654273   61323 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0924 01:08:48.156008   61323 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.075879ms
	I0924 01:08:48.156089   61323 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0924 01:08:49.726458   61989 out.go:235]   - Generating certificates and keys ...
	I0924 01:08:49.726563   61989 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 01:08:49.726639   61989 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 01:08:49.726737   61989 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 01:08:49.726812   61989 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 01:08:49.727078   61989 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 01:08:49.727375   61989 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 01:08:49.728123   61989 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 01:08:49.729254   61989 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 01:08:49.730178   61989 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 01:08:49.732548   61989 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 01:08:49.732604   61989 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 01:08:49.732676   61989 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 01:08:49.938623   61989 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 01:08:50.774207   61989 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 01:08:51.022535   61989 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 01:08:51.148690   61989 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 01:08:51.168786   61989 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 01:08:51.170070   61989 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 01:08:51.170150   61989 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 01:08:51.342671   61989 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 01:08:48.729168   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:50.729197   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:52.729615   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:53.660805   61323 kubeadm.go:310] [api-check] The API server is healthy after 5.502700892s
	I0924 01:08:53.678006   61323 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0924 01:08:53.693676   61323 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0924 01:08:53.736910   61323 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0924 01:08:53.737186   61323 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-650507 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0924 01:08:53.750738   61323 kubeadm.go:310] [bootstrap-token] Using token: 62empn.zvptxpa69xtioeo1
	I0924 01:08:49.139835   61699 cri.go:89] found id: "306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7"
	I0924 01:08:49.139859   61699 cri.go:89] found id: ""
	I0924 01:08:49.139869   61699 logs.go:276] 1 containers: [306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7]
	I0924 01:08:49.139920   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:49.144770   61699 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:49.144896   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:49.193710   61699 cri.go:89] found id: "2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2"
	I0924 01:08:49.193733   61699 cri.go:89] found id: ""
	I0924 01:08:49.193743   61699 logs.go:276] 1 containers: [2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2]
	I0924 01:08:49.193798   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:49.198090   61699 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:49.198178   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:49.240236   61699 cri.go:89] found id: "ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f"
	I0924 01:08:49.240309   61699 cri.go:89] found id: ""
	I0924 01:08:49.240344   61699 logs.go:276] 1 containers: [ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f]
	I0924 01:08:49.240401   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:49.244573   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:49.244646   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:49.290954   61699 cri.go:89] found id: "58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f"
	I0924 01:08:49.290998   61699 cri.go:89] found id: ""
	I0924 01:08:49.291008   61699 logs.go:276] 1 containers: [58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f]
	I0924 01:08:49.291083   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:49.295602   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:49.295664   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:49.340871   61699 cri.go:89] found id: "f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc"
	I0924 01:08:49.340894   61699 cri.go:89] found id: ""
	I0924 01:08:49.340904   61699 logs.go:276] 1 containers: [f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc]
	I0924 01:08:49.340964   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:49.345362   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:49.345433   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:49.387382   61699 cri.go:89] found id: "55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba"
	I0924 01:08:49.387408   61699 cri.go:89] found id: ""
	I0924 01:08:49.387418   61699 logs.go:276] 1 containers: [55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba]
	I0924 01:08:49.387472   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:49.393388   61699 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:49.393468   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:49.436082   61699 cri.go:89] found id: ""
	I0924 01:08:49.436107   61699 logs.go:276] 0 containers: []
	W0924 01:08:49.436119   61699 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:49.436126   61699 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0924 01:08:49.436187   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0924 01:08:49.490172   61699 cri.go:89] found id: "7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47"
	I0924 01:08:49.490197   61699 cri.go:89] found id: "e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559"
	I0924 01:08:49.490203   61699 cri.go:89] found id: ""
	I0924 01:08:49.490213   61699 logs.go:276] 2 containers: [7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47 e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559]
	I0924 01:08:49.490273   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:49.495438   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:49.500506   61699 logs.go:123] Gathering logs for kube-apiserver [306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7] ...
	I0924 01:08:49.500537   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7"
	I0924 01:08:49.561240   61699 logs.go:123] Gathering logs for kube-proxy [f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc] ...
	I0924 01:08:49.561277   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc"
	I0924 01:08:49.611765   61699 logs.go:123] Gathering logs for kube-controller-manager [55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba] ...
	I0924 01:08:49.611807   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba"
	I0924 01:08:49.689366   61699 logs.go:123] Gathering logs for container status ...
	I0924 01:08:49.689413   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:49.747233   61699 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:49.747271   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:49.852723   61699 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:49.852771   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 01:08:50.006274   61699 logs.go:123] Gathering logs for etcd [2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2] ...
	I0924 01:08:50.006322   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2"
	I0924 01:08:50.064786   61699 logs.go:123] Gathering logs for coredns [ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f] ...
	I0924 01:08:50.064828   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f"
	I0924 01:08:50.104831   61699 logs.go:123] Gathering logs for kube-scheduler [58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f] ...
	I0924 01:08:50.104865   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f"
	I0924 01:08:50.144962   61699 logs.go:123] Gathering logs for storage-provisioner [7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47] ...
	I0924 01:08:50.144990   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47"
	I0924 01:08:50.183923   61699 logs.go:123] Gathering logs for storage-provisioner [e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559] ...
	I0924 01:08:50.183956   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559"
	I0924 01:08:50.222382   61699 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:50.222414   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:50.671849   61699 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:50.671890   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:53.187450   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:08:53.193094   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 200:
	ok
	I0924 01:08:53.194414   61699 api_server.go:141] control plane version: v1.31.1
	I0924 01:08:53.194439   61699 api_server.go:131] duration metric: took 4.107783011s to wait for apiserver health ...
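The health wait above boils down to polling the apiserver's /healthz endpoint until it answers 200 with "ok" (here at https://192.168.61.186:8444/healthz). Below is an illustrative sketch of such a poll, assuming a fixed deadline and a TLS client that skips certificate verification purely to stay self-contained; it is not minikube's implementation, which handles TLS and retries with its own machinery.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls /healthz until it returns 200 or the deadline passes.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Skipping verification keeps the sketch self-contained; do not do this in production.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
                    return nil
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
    }

    func main() {
        _ = waitForHealthz("https://192.168.61.186:8444/healthz", 4*time.Minute)
    }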
	I0924 01:08:53.194449   61699 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 01:08:53.194479   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:53.194546   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:53.232560   61699 cri.go:89] found id: "306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7"
	I0924 01:08:53.232584   61699 cri.go:89] found id: ""
	I0924 01:08:53.232594   61699 logs.go:276] 1 containers: [306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7]
	I0924 01:08:53.232649   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:53.236611   61699 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:53.236671   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:53.278194   61699 cri.go:89] found id: "2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2"
	I0924 01:08:53.278223   61699 cri.go:89] found id: ""
	I0924 01:08:53.278233   61699 logs.go:276] 1 containers: [2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2]
	I0924 01:08:53.278291   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:53.283330   61699 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:53.283391   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:53.322371   61699 cri.go:89] found id: "ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f"
	I0924 01:08:53.322399   61699 cri.go:89] found id: ""
	I0924 01:08:53.322408   61699 logs.go:276] 1 containers: [ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f]
	I0924 01:08:53.322459   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:53.326794   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:53.326869   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:53.364035   61699 cri.go:89] found id: "58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f"
	I0924 01:08:53.364064   61699 cri.go:89] found id: ""
	I0924 01:08:53.364075   61699 logs.go:276] 1 containers: [58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f]
	I0924 01:08:53.364140   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:53.368065   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:53.368127   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:53.405651   61699 cri.go:89] found id: "f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc"
	I0924 01:08:53.405679   61699 cri.go:89] found id: ""
	I0924 01:08:53.405687   61699 logs.go:276] 1 containers: [f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc]
	I0924 01:08:53.405754   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:53.410451   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:53.410537   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:53.451079   61699 cri.go:89] found id: "55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba"
	I0924 01:08:53.451111   61699 cri.go:89] found id: ""
	I0924 01:08:53.451121   61699 logs.go:276] 1 containers: [55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba]
	I0924 01:08:53.451183   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:53.456272   61699 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:53.456367   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:53.497323   61699 cri.go:89] found id: ""
	I0924 01:08:53.497360   61699 logs.go:276] 0 containers: []
	W0924 01:08:53.497373   61699 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:53.497387   61699 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0924 01:08:53.497461   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0924 01:08:53.536017   61699 cri.go:89] found id: "7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47"
	I0924 01:08:53.536040   61699 cri.go:89] found id: "e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559"
	I0924 01:08:53.536046   61699 cri.go:89] found id: ""
	I0924 01:08:53.536055   61699 logs.go:276] 2 containers: [7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47 e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559]
	I0924 01:08:53.536114   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:53.542413   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:53.546559   61699 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:53.546592   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:53.560292   61699 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:53.560323   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 01:08:53.684820   61699 logs.go:123] Gathering logs for etcd [2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2] ...
	I0924 01:08:53.684849   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2"
	I0924 01:08:53.734483   61699 logs.go:123] Gathering logs for coredns [ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f] ...
	I0924 01:08:53.734519   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f"
	I0924 01:08:53.780676   61699 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:53.780705   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:53.855917   61699 logs.go:123] Gathering logs for kube-scheduler [58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f] ...
	I0924 01:08:53.855960   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f"
	I0924 01:08:53.906926   61699 logs.go:123] Gathering logs for kube-proxy [f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc] ...
	I0924 01:08:53.906962   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc"
	I0924 01:08:53.953992   61699 logs.go:123] Gathering logs for kube-controller-manager [55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba] ...
	I0924 01:08:53.954019   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba"
	I0924 01:08:54.031302   61699 logs.go:123] Gathering logs for storage-provisioner [7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47] ...
	I0924 01:08:54.031350   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47"
	I0924 01:08:54.073918   61699 logs.go:123] Gathering logs for storage-provisioner [e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559] ...
	I0924 01:08:54.073958   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559"
	I0924 01:08:54.108724   61699 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:54.108765   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:53.752460   61323 out.go:235]   - Configuring RBAC rules ...
	I0924 01:08:53.752626   61323 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0924 01:08:53.758889   61323 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0924 01:08:53.767101   61323 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0924 01:08:53.770943   61323 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0924 01:08:53.775335   61323 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0924 01:08:53.792963   61323 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0924 01:08:54.070193   61323 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0924 01:08:54.526226   61323 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0924 01:08:55.069668   61323 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0924 01:08:55.070678   61323 kubeadm.go:310] 
	I0924 01:08:55.070751   61323 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0924 01:08:55.070761   61323 kubeadm.go:310] 
	I0924 01:08:55.070844   61323 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0924 01:08:55.070860   61323 kubeadm.go:310] 
	I0924 01:08:55.070910   61323 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0924 01:08:55.070998   61323 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0924 01:08:55.071064   61323 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0924 01:08:55.071074   61323 kubeadm.go:310] 
	I0924 01:08:55.071138   61323 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0924 01:08:55.071159   61323 kubeadm.go:310] 
	I0924 01:08:55.071210   61323 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0924 01:08:55.071217   61323 kubeadm.go:310] 
	I0924 01:08:55.071298   61323 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0924 01:08:55.071428   61323 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0924 01:08:55.071525   61323 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0924 01:08:55.071535   61323 kubeadm.go:310] 
	I0924 01:08:55.071640   61323 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0924 01:08:55.071721   61323 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0924 01:08:55.071738   61323 kubeadm.go:310] 
	I0924 01:08:55.071815   61323 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 62empn.zvptxpa69xtioeo1 \
	I0924 01:08:55.071941   61323 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2d4dd9f37d73db6513005d0e16c5a35989d3b102eed8cac0bf67b360d3396aa8 \
	I0924 01:08:55.071971   61323 kubeadm.go:310] 	--control-plane 
	I0924 01:08:55.071984   61323 kubeadm.go:310] 
	I0924 01:08:55.072089   61323 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0924 01:08:55.072098   61323 kubeadm.go:310] 
	I0924 01:08:55.072198   61323 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 62empn.zvptxpa69xtioeo1 \
	I0924 01:08:55.072324   61323 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2d4dd9f37d73db6513005d0e16c5a35989d3b102eed8cac0bf67b360d3396aa8 
	I0924 01:08:55.073807   61323 kubeadm.go:310] W0924 01:08:46.350959    2551 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 01:08:55.074118   61323 kubeadm.go:310] W0924 01:08:46.352529    2551 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 01:08:55.074256   61323 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 01:08:55.074295   61323 cni.go:84] Creating CNI manager for ""
	I0924 01:08:55.074312   61323 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:08:55.076241   61323 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 01:08:55.077630   61323 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 01:08:55.088658   61323 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0924 01:08:55.106396   61323 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 01:08:55.106491   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:55.106579   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-650507 minikube.k8s.io/updated_at=2024_09_24T01_08_55_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c minikube.k8s.io/name=embed-certs-650507 minikube.k8s.io/primary=true
	I0924 01:08:55.138376   61323 ops.go:34] apiserver oom_adj: -16
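The oom_adj check just above finds the kube-apiserver PID and reads /proc/<pid>/oom_adj; the value -16 means the kernel's OOM killer strongly deprioritizes the apiserver. A small sketch of the same read, with a hypothetical apiserverOOMAdj helper:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // apiserverOOMAdj finds the newest kube-apiserver PID and reads its oom_adj score.
    func apiserverOOMAdj() (string, error) {
        out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
        if err != nil {
            return "", err
        }
        pid := strings.TrimSpace(string(out))
        adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(adj)), nil
    }

    func main() {
        if adj, err := apiserverOOMAdj(); err == nil {
            fmt.Println("apiserver oom_adj:", adj)
        }
    }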
	I0924 01:08:51.344458   61989 out.go:235]   - Booting up control plane ...
	I0924 01:08:51.344607   61989 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 01:08:51.353468   61989 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 01:08:51.356949   61989 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 01:08:51.358082   61989 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 01:08:51.364468   61989 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0924 01:08:54.501805   61699 logs.go:123] Gathering logs for container status ...
	I0924 01:08:54.501847   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:54.548768   61699 logs.go:123] Gathering logs for kube-apiserver [306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7] ...
	I0924 01:08:54.548800   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7"
	I0924 01:08:57.105661   61699 system_pods.go:59] 8 kube-system pods found
	I0924 01:08:57.105688   61699 system_pods.go:61] "coredns-7c65d6cfc9-xxdh2" [297fe292-94bf-468d-9e34-089c4a87429b] Running
	I0924 01:08:57.105693   61699 system_pods.go:61] "etcd-default-k8s-diff-port-465341" [3bd68a1c-e928-40f0-927f-3cde2198cace] Running
	I0924 01:08:57.105697   61699 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-465341" [0a195b76-82ba-4d99-b5a3-ba918ab0b83d] Running
	I0924 01:08:57.105703   61699 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-465341" [9d445611-60f3-4113-bc92-ea8df37ca2f5] Running
	I0924 01:08:57.105706   61699 system_pods.go:61] "kube-proxy-nf8mp" [cdef3aea-b1a8-438b-994f-c3212def9aea] Running
	I0924 01:08:57.105709   61699 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-465341" [4ff703b1-44cd-421a-891c-9f1e5d799026] Running
	I0924 01:08:57.105715   61699 system_pods.go:61] "metrics-server-6867b74b74-jtx6r" [d83599a7-f77d-4fbb-b76f-67d33c60b4a6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:08:57.105722   61699 system_pods.go:61] "storage-provisioner" [b09ad6ef-7517-4de2-a70c-83876efd804e] Running
	I0924 01:08:57.105729   61699 system_pods.go:74] duration metric: took 3.911274774s to wait for pod list to return data ...
	I0924 01:08:57.105736   61699 default_sa.go:34] waiting for default service account to be created ...
	I0924 01:08:57.108031   61699 default_sa.go:45] found service account: "default"
	I0924 01:08:57.108051   61699 default_sa.go:55] duration metric: took 2.307712ms for default service account to be created ...
	I0924 01:08:57.108059   61699 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 01:08:57.112551   61699 system_pods.go:86] 8 kube-system pods found
	I0924 01:08:57.112578   61699 system_pods.go:89] "coredns-7c65d6cfc9-xxdh2" [297fe292-94bf-468d-9e34-089c4a87429b] Running
	I0924 01:08:57.112584   61699 system_pods.go:89] "etcd-default-k8s-diff-port-465341" [3bd68a1c-e928-40f0-927f-3cde2198cace] Running
	I0924 01:08:57.112589   61699 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-465341" [0a195b76-82ba-4d99-b5a3-ba918ab0b83d] Running
	I0924 01:08:57.112593   61699 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-465341" [9d445611-60f3-4113-bc92-ea8df37ca2f5] Running
	I0924 01:08:57.112597   61699 system_pods.go:89] "kube-proxy-nf8mp" [cdef3aea-b1a8-438b-994f-c3212def9aea] Running
	I0924 01:08:57.112600   61699 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-465341" [4ff703b1-44cd-421a-891c-9f1e5d799026] Running
	I0924 01:08:57.112608   61699 system_pods.go:89] "metrics-server-6867b74b74-jtx6r" [d83599a7-f77d-4fbb-b76f-67d33c60b4a6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:08:57.112613   61699 system_pods.go:89] "storage-provisioner" [b09ad6ef-7517-4de2-a70c-83876efd804e] Running
	I0924 01:08:57.112619   61699 system_pods.go:126] duration metric: took 4.555185ms to wait for k8s-apps to be running ...
	I0924 01:08:57.112625   61699 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 01:08:57.112665   61699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 01:08:57.127805   61699 system_svc.go:56] duration metric: took 15.170368ms WaitForService to wait for kubelet
	I0924 01:08:57.127839   61699 kubeadm.go:582] duration metric: took 4m23.691287248s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 01:08:57.127865   61699 node_conditions.go:102] verifying NodePressure condition ...
	I0924 01:08:57.130964   61699 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 01:08:57.130994   61699 node_conditions.go:123] node cpu capacity is 2
	I0924 01:08:57.131008   61699 node_conditions.go:105] duration metric: took 3.13793ms to run NodePressure ...
	I0924 01:08:57.131021   61699 start.go:241] waiting for startup goroutines ...
	I0924 01:08:57.131029   61699 start.go:246] waiting for cluster config update ...
	I0924 01:08:57.131043   61699 start.go:255] writing updated cluster config ...
	I0924 01:08:57.131388   61699 ssh_runner.go:195] Run: rm -f paused
	I0924 01:08:57.182238   61699 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0924 01:08:57.185023   61699 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-465341" cluster and "default" namespace by default
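Just before the "Done!" line, the profile reports the skew between the local kubectl and the cluster version, here 1.31.1 on both sides for a minor skew of 0. A sketch of how such a skew figure can be computed from two version strings; the minorSkew helper is hypothetical and pre-release suffixes are ignored.

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minorSkew returns the absolute difference between the minor components of
    // two "major.minor.patch" version strings.
    func minorSkew(kubectlVer, clusterVer string) (int, error) {
        minor := func(v string) (int, error) {
            parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
            if len(parts) < 2 {
                return 0, fmt.Errorf("unexpected version %q", v)
            }
            return strconv.Atoi(parts[1])
        }
        a, err := minor(kubectlVer)
        if err != nil {
            return 0, err
        }
        b, err := minor(clusterVer)
        if err != nil {
            return 0, err
        }
        if a > b {
            return a - b, nil
        }
        return b - a, nil
    }

    func main() {
        skew, _ := minorSkew("1.31.1", "1.31.1")
        fmt.Printf("kubectl: 1.31.1, cluster: 1.31.1 (minor skew: %d)\n", skew)
    }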
	I0924 01:08:55.229370   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:57.729448   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:55.285390   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:55.785813   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:56.285570   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:56.785779   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:57.285599   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:57.786401   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:58.285583   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:58.786037   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:59.286404   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:59.447075   61323 kubeadm.go:1113] duration metric: took 4.340646509s to wait for elevateKubeSystemPrivileges
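The burst of `kubectl get sa default` calls above is a poll: the command is retried roughly every 500ms until the "default" service account exists, which only happens once kube-controller-manager's service-account controller has run. An illustrative sketch of that wait loop, with a hypothetical waitForDefaultServiceAccount helper:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultServiceAccount retries `kubectl get sa default` until it
    // succeeds or the timeout elapses, matching the polling visible in the log.
    func waitForDefaultServiceAccount(kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
            if err := cmd.Run(); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default service account not created within %s", timeout)
    }

    func main() {
        if err := waitForDefaultServiceAccount("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }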
	I0924 01:08:59.447119   61323 kubeadm.go:394] duration metric: took 4m57.777127509s to StartCluster
	I0924 01:08:59.447141   61323 settings.go:142] acquiring lock: {Name:mk2498060bfcc1414033a486e76a223de88ed325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:08:59.447229   61323 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 01:08:59.449766   61323 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/kubeconfig: {Name:mk3b29105ccc2e600682c419fcfd45ce5d22b5fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:08:59.450091   61323 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 01:08:59.450191   61323 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0924 01:08:59.450308   61323 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-650507"
	I0924 01:08:59.450330   61323 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-650507"
	W0924 01:08:59.450343   61323 addons.go:243] addon storage-provisioner should already be in state true
	I0924 01:08:59.450346   61323 addons.go:69] Setting metrics-server=true in profile "embed-certs-650507"
	I0924 01:08:59.450349   61323 addons.go:69] Setting default-storageclass=true in profile "embed-certs-650507"
	I0924 01:08:59.450366   61323 addons.go:234] Setting addon metrics-server=true in "embed-certs-650507"
	W0924 01:08:59.450374   61323 addons.go:243] addon metrics-server should already be in state true
	I0924 01:08:59.450328   61323 config.go:182] Loaded profile config "embed-certs-650507": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 01:08:59.450381   61323 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-650507"
	I0924 01:08:59.450404   61323 host.go:66] Checking if "embed-certs-650507" exists ...
	I0924 01:08:59.450375   61323 host.go:66] Checking if "embed-certs-650507" exists ...
	I0924 01:08:59.450718   61323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:08:59.450770   61323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:08:59.450805   61323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:08:59.450808   61323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:08:59.450895   61323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:08:59.450842   61323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:08:59.451862   61323 out.go:177] * Verifying Kubernetes components...
	I0924 01:08:59.453214   61323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:08:59.471878   61323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34433
	I0924 01:08:59.472083   61323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46551
	I0924 01:08:59.472239   61323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38089
	I0924 01:08:59.472586   61323 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:08:59.472704   61323 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:08:59.472988   61323 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:08:59.473187   61323 main.go:141] libmachine: Using API Version  1
	I0924 01:08:59.473205   61323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:08:59.473226   61323 main.go:141] libmachine: Using API Version  1
	I0924 01:08:59.473242   61323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:08:59.473418   61323 main.go:141] libmachine: Using API Version  1
	I0924 01:08:59.473433   61323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:08:59.473784   61323 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:08:59.473784   61323 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:08:59.474003   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetState
	I0924 01:08:59.474116   61323 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:08:59.474383   61323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:08:59.474422   61323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:08:59.474591   61323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:08:59.474628   61323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:08:59.478726   61323 addons.go:234] Setting addon default-storageclass=true in "embed-certs-650507"
	W0924 01:08:59.478753   61323 addons.go:243] addon default-storageclass should already be in state true
	I0924 01:08:59.478784   61323 host.go:66] Checking if "embed-certs-650507" exists ...
	I0924 01:08:59.479137   61323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:08:59.479186   61323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:08:59.495021   61323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43089
	I0924 01:08:59.495527   61323 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:08:59.496068   61323 main.go:141] libmachine: Using API Version  1
	I0924 01:08:59.496090   61323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:08:59.496519   61323 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:08:59.496734   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetState
	I0924 01:08:59.498472   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:08:59.498528   61323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39135
	I0924 01:08:59.498971   61323 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:08:59.499485   61323 main.go:141] libmachine: Using API Version  1
	I0924 01:08:59.499498   61323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:08:59.499794   61323 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:08:59.499918   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetState
	I0924 01:08:59.500899   61323 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0924 01:08:59.501731   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:08:59.502154   61323 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0924 01:08:59.502172   61323 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0924 01:08:59.502186   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:08:59.503238   61323 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:08:59.504765   61323 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 01:08:59.504783   61323 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 01:08:59.504801   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:08:59.505483   61323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34577
	I0924 01:08:59.505882   61323 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:08:59.506386   61323 main.go:141] libmachine: Using API Version  1
	I0924 01:08:59.506408   61323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:08:59.506841   61323 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:08:59.507463   61323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:08:59.507505   61323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:08:59.511098   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:08:59.511611   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:08:59.511645   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:08:59.511944   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:08:59.512127   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:08:59.512296   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:08:59.512493   61323 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa Username:docker}
	I0924 01:08:59.514595   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:08:59.515144   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:08:59.515173   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:08:59.515481   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:08:59.515749   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:08:59.515963   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:08:59.516100   61323 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa Username:docker}
	I0924 01:08:59.529920   61323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36769
	I0924 01:08:59.530565   61323 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:08:59.531102   61323 main.go:141] libmachine: Using API Version  1
	I0924 01:08:59.531125   61323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:08:59.531629   61323 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:08:59.531918   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetState
	I0924 01:08:59.533741   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:08:59.533992   61323 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 01:08:59.534007   61323 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 01:08:59.534026   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:08:59.537032   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:08:59.537488   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:08:59.537515   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:08:59.537713   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:08:59.537919   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:08:59.538074   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:08:59.538198   61323 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa Username:docker}
	I0924 01:08:59.680683   61323 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 01:08:59.711414   61323 node_ready.go:35] waiting up to 6m0s for node "embed-certs-650507" to be "Ready" ...
	I0924 01:08:59.721234   61323 node_ready.go:49] node "embed-certs-650507" has status "Ready":"True"
	I0924 01:08:59.721264   61323 node_ready.go:38] duration metric: took 9.820004ms for node "embed-certs-650507" to be "Ready" ...
	I0924 01:08:59.721275   61323 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:08:59.736353   61323 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7295k" in "kube-system" namespace to be "Ready" ...
	I0924 01:08:59.831004   61323 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0924 01:08:59.831041   61323 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0924 01:08:59.871681   61323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 01:08:59.873844   61323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 01:08:59.902662   61323 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0924 01:08:59.902691   61323 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0924 01:08:59.956425   61323 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 01:08:59.956454   61323 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0924 01:08:59.997902   61323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 01:09:01.146340   61323 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.27245536s)
	I0924 01:09:01.146470   61323 main.go:141] libmachine: Making call to close driver server
	I0924 01:09:01.146505   61323 main.go:141] libmachine: (embed-certs-650507) Calling .Close
	I0924 01:09:01.146403   61323 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.274685832s)
	I0924 01:09:01.146582   61323 main.go:141] libmachine: Making call to close driver server
	I0924 01:09:01.146602   61323 main.go:141] libmachine: (embed-certs-650507) Calling .Close
	I0924 01:09:01.146819   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Closing plugin on server side
	I0924 01:09:01.146848   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Closing plugin on server side
	I0924 01:09:01.146868   61323 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:09:01.146875   61323 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:09:01.146882   61323 main.go:141] libmachine: Making call to close driver server
	I0924 01:09:01.146892   61323 main.go:141] libmachine: (embed-certs-650507) Calling .Close
	I0924 01:09:01.146967   61323 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:09:01.146990   61323 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:09:01.147007   61323 main.go:141] libmachine: Making call to close driver server
	I0924 01:09:01.147023   61323 main.go:141] libmachine: (embed-certs-650507) Calling .Close
	I0924 01:09:01.147084   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Closing plugin on server side
	I0924 01:09:01.147117   61323 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:09:01.147133   61323 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:09:01.147370   61323 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:09:01.147392   61323 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:09:01.147378   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Closing plugin on server side
	I0924 01:09:01.180574   61323 main.go:141] libmachine: Making call to close driver server
	I0924 01:09:01.180604   61323 main.go:141] libmachine: (embed-certs-650507) Calling .Close
	I0924 01:09:01.180929   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Closing plugin on server side
	I0924 01:09:01.180977   61323 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:09:01.180986   61323 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:09:01.207538   61323 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.209569759s)
	I0924 01:09:01.207600   61323 main.go:141] libmachine: Making call to close driver server
	I0924 01:09:01.207616   61323 main.go:141] libmachine: (embed-certs-650507) Calling .Close
	I0924 01:09:01.207959   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Closing plugin on server side
	I0924 01:09:01.208002   61323 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:09:01.208011   61323 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:09:01.208019   61323 main.go:141] libmachine: Making call to close driver server
	I0924 01:09:01.208028   61323 main.go:141] libmachine: (embed-certs-650507) Calling .Close
	I0924 01:09:01.208377   61323 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:09:01.208392   61323 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:09:01.208402   61323 addons.go:475] Verifying addon metrics-server=true in "embed-certs-650507"
	I0924 01:09:01.208411   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Closing plugin on server side
	I0924 01:09:01.210500   61323 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0924 01:08:59.731184   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:02.229737   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:01.211900   61323 addons.go:510] duration metric: took 1.761718139s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0924 01:09:01.751463   61323 pod_ready.go:103] pod "coredns-7c65d6cfc9-7295k" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:04.242260   61323 pod_ready.go:103] pod "coredns-7c65d6cfc9-7295k" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:04.728708   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:06.728878   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:06.243002   61323 pod_ready.go:93] pod "coredns-7c65d6cfc9-7295k" in "kube-system" namespace has status "Ready":"True"
	I0924 01:09:06.243030   61323 pod_ready.go:82] duration metric: took 6.506649267s for pod "coredns-7c65d6cfc9-7295k" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:06.243039   61323 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-r6tcj" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:08.249949   61323 pod_ready.go:103] pod "coredns-7c65d6cfc9-r6tcj" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:09.750009   61323 pod_ready.go:93] pod "coredns-7c65d6cfc9-r6tcj" in "kube-system" namespace has status "Ready":"True"
	I0924 01:09:09.750037   61323 pod_ready.go:82] duration metric: took 3.506990291s for pod "coredns-7c65d6cfc9-r6tcj" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:09.750049   61323 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:09.756600   61323 pod_ready.go:93] pod "etcd-embed-certs-650507" in "kube-system" namespace has status "Ready":"True"
	I0924 01:09:09.756626   61323 pod_ready.go:82] duration metric: took 6.570047ms for pod "etcd-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:09.756635   61323 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:09.762209   61323 pod_ready.go:93] pod "kube-apiserver-embed-certs-650507" in "kube-system" namespace has status "Ready":"True"
	I0924 01:09:09.762235   61323 pod_ready.go:82] duration metric: took 5.593257ms for pod "kube-apiserver-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:09.762248   61323 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:09.772052   61323 pod_ready.go:93] pod "kube-controller-manager-embed-certs-650507" in "kube-system" namespace has status "Ready":"True"
	I0924 01:09:09.772075   61323 pod_ready.go:82] duration metric: took 9.818627ms for pod "kube-controller-manager-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:09.772088   61323 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mwtkg" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:09.777733   61323 pod_ready.go:93] pod "kube-proxy-mwtkg" in "kube-system" namespace has status "Ready":"True"
	I0924 01:09:09.777765   61323 pod_ready.go:82] duration metric: took 5.669609ms for pod "kube-proxy-mwtkg" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:09.777778   61323 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:10.146804   61323 pod_ready.go:93] pod "kube-scheduler-embed-certs-650507" in "kube-system" namespace has status "Ready":"True"
	I0924 01:09:10.146833   61323 pod_ready.go:82] duration metric: took 369.046066ms for pod "kube-scheduler-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:10.146844   61323 pod_ready.go:39] duration metric: took 10.425557831s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:09:10.146861   61323 api_server.go:52] waiting for apiserver process to appear ...
	I0924 01:09:10.146918   61323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:09:10.162335   61323 api_server.go:72] duration metric: took 10.712204486s to wait for apiserver process to appear ...
	I0924 01:09:10.162360   61323 api_server.go:88] waiting for apiserver healthz status ...
	I0924 01:09:10.162381   61323 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0924 01:09:10.166693   61323 api_server.go:279] https://192.168.39.104:8443/healthz returned 200:
	ok
	I0924 01:09:10.167700   61323 api_server.go:141] control plane version: v1.31.1
	I0924 01:09:10.167723   61323 api_server.go:131] duration metric: took 5.355716ms to wait for apiserver health ...
	I0924 01:09:10.167734   61323 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 01:09:10.351584   61323 system_pods.go:59] 9 kube-system pods found
	I0924 01:09:10.351621   61323 system_pods.go:61] "coredns-7c65d6cfc9-7295k" [3261d435-8cb5-4712-8459-26ba766e88e0] Running
	I0924 01:09:10.351629   61323 system_pods.go:61] "coredns-7c65d6cfc9-r6tcj" [df80e9b5-4b43-4b8f-992e-8813ceca39fe] Running
	I0924 01:09:10.351634   61323 system_pods.go:61] "etcd-embed-certs-650507" [1d21c395-ebec-4895-a1b6-11e35c799698] Running
	I0924 01:09:10.351640   61323 system_pods.go:61] "kube-apiserver-embed-certs-650507" [f7f13b75-3ed1-4e04-857f-27e71258ffd7] Running
	I0924 01:09:10.351645   61323 system_pods.go:61] "kube-controller-manager-embed-certs-650507" [4e68c892-06b6-49f1-adab-25c569f95a9a] Running
	I0924 01:09:10.351650   61323 system_pods.go:61] "kube-proxy-mwtkg" [6a893121-8161-4fbc-bb59-1e08483e82b8] Running
	I0924 01:09:10.351655   61323 system_pods.go:61] "kube-scheduler-embed-certs-650507" [bacd126d-7f4f-460b-85c5-17721247d5a5] Running
	I0924 01:09:10.351669   61323 system_pods.go:61] "metrics-server-6867b74b74-lbm9h" [fa504c09-2e16-4a5f-b4b3-a47f0733333d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:09:10.351678   61323 system_pods.go:61] "storage-provisioner" [364a4d4a-7316-48d0-a3e1-1dedff564dfb] Running
	I0924 01:09:10.351692   61323 system_pods.go:74] duration metric: took 183.950994ms to wait for pod list to return data ...
	I0924 01:09:10.351704   61323 default_sa.go:34] waiting for default service account to be created ...
	I0924 01:09:10.547564   61323 default_sa.go:45] found service account: "default"
	I0924 01:09:10.547595   61323 default_sa.go:55] duration metric: took 195.882659ms for default service account to be created ...
	I0924 01:09:10.547605   61323 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 01:09:10.750290   61323 system_pods.go:86] 9 kube-system pods found
	I0924 01:09:10.750327   61323 system_pods.go:89] "coredns-7c65d6cfc9-7295k" [3261d435-8cb5-4712-8459-26ba766e88e0] Running
	I0924 01:09:10.750336   61323 system_pods.go:89] "coredns-7c65d6cfc9-r6tcj" [df80e9b5-4b43-4b8f-992e-8813ceca39fe] Running
	I0924 01:09:10.750344   61323 system_pods.go:89] "etcd-embed-certs-650507" [1d21c395-ebec-4895-a1b6-11e35c799698] Running
	I0924 01:09:10.750352   61323 system_pods.go:89] "kube-apiserver-embed-certs-650507" [f7f13b75-3ed1-4e04-857f-27e71258ffd7] Running
	I0924 01:09:10.750359   61323 system_pods.go:89] "kube-controller-manager-embed-certs-650507" [4e68c892-06b6-49f1-adab-25c569f95a9a] Running
	I0924 01:09:10.750366   61323 system_pods.go:89] "kube-proxy-mwtkg" [6a893121-8161-4fbc-bb59-1e08483e82b8] Running
	I0924 01:09:10.750372   61323 system_pods.go:89] "kube-scheduler-embed-certs-650507" [bacd126d-7f4f-460b-85c5-17721247d5a5] Running
	I0924 01:09:10.750382   61323 system_pods.go:89] "metrics-server-6867b74b74-lbm9h" [fa504c09-2e16-4a5f-b4b3-a47f0733333d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:09:10.750391   61323 system_pods.go:89] "storage-provisioner" [364a4d4a-7316-48d0-a3e1-1dedff564dfb] Running
	I0924 01:09:10.750407   61323 system_pods.go:126] duration metric: took 202.795975ms to wait for k8s-apps to be running ...
	I0924 01:09:10.750416   61323 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 01:09:10.750476   61323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 01:09:10.765539   61323 system_svc.go:56] duration metric: took 15.112281ms WaitForService to wait for kubelet
	I0924 01:09:10.765569   61323 kubeadm.go:582] duration metric: took 11.31544538s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 01:09:10.765588   61323 node_conditions.go:102] verifying NodePressure condition ...
	I0924 01:09:10.947628   61323 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 01:09:10.947654   61323 node_conditions.go:123] node cpu capacity is 2
	I0924 01:09:10.947664   61323 node_conditions.go:105] duration metric: took 182.072269ms to run NodePressure ...
	I0924 01:09:10.947674   61323 start.go:241] waiting for startup goroutines ...
	I0924 01:09:10.947681   61323 start.go:246] waiting for cluster config update ...
	I0924 01:09:10.947691   61323 start.go:255] writing updated cluster config ...
	I0924 01:09:10.947955   61323 ssh_runner.go:195] Run: rm -f paused
	I0924 01:09:10.999208   61323 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0924 01:09:11.001392   61323 out.go:177] * Done! kubectl is now configured to use "embed-certs-650507" cluster and "default" namespace by default
	I0924 01:09:08.729391   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:11.229036   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:13.727852   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:16.229362   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:18.727643   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:20.729183   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:22.731323   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:25.228514   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:27.728747   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:29.729150   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:32.228197   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:31.365725   61989 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0924 01:09:31.366444   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:09:31.366704   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:09:34.729441   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:37.228766   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:36.367209   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:09:36.367654   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:09:39.728035   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:41.729148   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:43.729240   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:46.228006   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:48.228134   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:46.367945   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:09:46.368128   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:09:50.228455   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:52.228646   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:54.229158   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:56.727712   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:58.728522   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:10:00.728964   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:10:02.729909   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:10:05.227781   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:10:07.228729   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:10:06.368912   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:10:06.369182   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:10:09.728977   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:10:10.222284   61070 pod_ready.go:82] duration metric: took 4m0.000274516s for pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace to be "Ready" ...
	E0924 01:10:10.222354   61070 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace to be "Ready" (will not retry!)
	I0924 01:10:10.222381   61070 pod_ready.go:39] duration metric: took 4m12.043944079s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:10:10.222412   61070 kubeadm.go:597] duration metric: took 4m56.454037737s to restartPrimaryControlPlane
	W0924 01:10:10.222488   61070 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0924 01:10:10.222536   61070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0924 01:10:36.533302   61070 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.310734731s)
	I0924 01:10:36.533377   61070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 01:10:36.556961   61070 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 01:10:36.568298   61070 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 01:10:36.584098   61070 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 01:10:36.584121   61070 kubeadm.go:157] found existing configuration files:
	
	I0924 01:10:36.584178   61070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 01:10:36.594153   61070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 01:10:36.594218   61070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 01:10:36.612646   61070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 01:10:36.626433   61070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 01:10:36.626506   61070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 01:10:36.636161   61070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 01:10:36.654017   61070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 01:10:36.654075   61070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 01:10:36.663760   61070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 01:10:36.673737   61070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 01:10:36.673799   61070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 01:10:36.684005   61070 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 01:10:36.731568   61070 kubeadm.go:310] W0924 01:10:36.713557    3094 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 01:10:36.733592   61070 kubeadm.go:310] W0924 01:10:36.715675    3094 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 01:10:36.850767   61070 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 01:10:45.349295   61070 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0924 01:10:45.349386   61070 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 01:10:45.349486   61070 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 01:10:45.349600   61070 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 01:10:45.349733   61070 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0924 01:10:45.349836   61070 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 01:10:45.351746   61070 out.go:235]   - Generating certificates and keys ...
	I0924 01:10:45.351843   61070 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 01:10:45.351939   61070 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 01:10:45.352055   61070 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 01:10:45.352160   61070 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 01:10:45.352228   61070 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 01:10:45.352297   61070 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 01:10:45.352392   61070 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 01:10:45.352477   61070 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 01:10:45.352551   61070 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 01:10:45.352664   61070 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 01:10:45.352734   61070 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 01:10:45.352904   61070 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 01:10:45.352956   61070 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 01:10:45.353038   61070 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0924 01:10:45.353127   61070 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 01:10:45.353209   61070 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 01:10:45.353300   61070 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 01:10:45.353372   61070 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 01:10:45.353446   61070 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 01:10:45.354948   61070 out.go:235]   - Booting up control plane ...
	I0924 01:10:45.355023   61070 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 01:10:45.355090   61070 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 01:10:45.355144   61070 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 01:10:45.355226   61070 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 01:10:45.355310   61070 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 01:10:45.355356   61070 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 01:10:45.355476   61070 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0924 01:10:45.355585   61070 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0924 01:10:45.355658   61070 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001537437s
	I0924 01:10:45.355728   61070 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0924 01:10:45.355807   61070 kubeadm.go:310] [api-check] The API server is healthy after 5.002387582s
	I0924 01:10:45.355955   61070 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0924 01:10:45.356129   61070 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0924 01:10:45.356230   61070 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0924 01:10:45.356516   61070 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-674057 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0924 01:10:45.356571   61070 kubeadm.go:310] [bootstrap-token] Using token: g2v97n.iz49hjb4wh5cfkiq
	I0924 01:10:45.358203   61070 out.go:235]   - Configuring RBAC rules ...
	I0924 01:10:45.358333   61070 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0924 01:10:45.358439   61070 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0924 01:10:45.358562   61070 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0924 01:10:45.358667   61070 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0924 01:10:45.358773   61070 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0924 01:10:45.358851   61070 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0924 01:10:45.358997   61070 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0924 01:10:45.359059   61070 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0924 01:10:45.359101   61070 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0924 01:10:45.359111   61070 kubeadm.go:310] 
	I0924 01:10:45.359164   61070 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0924 01:10:45.359171   61070 kubeadm.go:310] 
	I0924 01:10:45.359263   61070 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0924 01:10:45.359280   61070 kubeadm.go:310] 
	I0924 01:10:45.359309   61070 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0924 01:10:45.359387   61070 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0924 01:10:45.359458   61070 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0924 01:10:45.359471   61070 kubeadm.go:310] 
	I0924 01:10:45.359559   61070 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0924 01:10:45.359568   61070 kubeadm.go:310] 
	I0924 01:10:45.359613   61070 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0924 01:10:45.359619   61070 kubeadm.go:310] 
	I0924 01:10:45.359665   61070 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0924 01:10:45.359728   61070 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0924 01:10:45.359800   61070 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0924 01:10:45.359813   61070 kubeadm.go:310] 
	I0924 01:10:45.359879   61070 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0924 01:10:45.359978   61070 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0924 01:10:45.359996   61070 kubeadm.go:310] 
	I0924 01:10:45.360101   61070 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token g2v97n.iz49hjb4wh5cfkiq \
	I0924 01:10:45.360218   61070 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2d4dd9f37d73db6513005d0e16c5a35989d3b102eed8cac0bf67b360d3396aa8 \
	I0924 01:10:45.360251   61070 kubeadm.go:310] 	--control-plane 
	I0924 01:10:45.360258   61070 kubeadm.go:310] 
	I0924 01:10:45.360453   61070 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0924 01:10:45.360481   61070 kubeadm.go:310] 
	I0924 01:10:45.360595   61070 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token g2v97n.iz49hjb4wh5cfkiq \
	I0924 01:10:45.360693   61070 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2d4dd9f37d73db6513005d0e16c5a35989d3b102eed8cac0bf67b360d3396aa8 
	I0924 01:10:45.360706   61070 cni.go:84] Creating CNI manager for ""
	I0924 01:10:45.360713   61070 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:10:45.362153   61070 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 01:10:46.371109   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:10:46.371309   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:10:46.371318   61989 kubeadm.go:310] 
	I0924 01:10:46.371352   61989 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0924 01:10:46.371455   61989 kubeadm.go:310] 		timed out waiting for the condition
	I0924 01:10:46.371490   61989 kubeadm.go:310] 
	I0924 01:10:46.371546   61989 kubeadm.go:310] 	This error is likely caused by:
	I0924 01:10:46.371592   61989 kubeadm.go:310] 		- The kubelet is not running
	I0924 01:10:46.371734   61989 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0924 01:10:46.371751   61989 kubeadm.go:310] 
	I0924 01:10:46.371888   61989 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0924 01:10:46.371936   61989 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0924 01:10:46.371978   61989 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0924 01:10:46.371988   61989 kubeadm.go:310] 
	I0924 01:10:46.372124   61989 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0924 01:10:46.372253   61989 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0924 01:10:46.372262   61989 kubeadm.go:310] 
	I0924 01:10:46.372442   61989 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0924 01:10:46.372578   61989 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0924 01:10:46.372680   61989 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0924 01:10:46.372756   61989 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0924 01:10:46.372765   61989 kubeadm.go:310] 
	I0924 01:10:46.373578   61989 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 01:10:46.373675   61989 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0924 01:10:46.373790   61989 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0924 01:10:46.373938   61989 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0924 01:10:46.373987   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0924 01:10:46.834432   61989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 01:10:46.851214   61989 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 01:10:46.862648   61989 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 01:10:46.862675   61989 kubeadm.go:157] found existing configuration files:
	
	I0924 01:10:46.862733   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 01:10:46.873005   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 01:10:46.873073   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 01:10:46.884007   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 01:10:46.893944   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 01:10:46.894016   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 01:10:46.905036   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 01:10:46.914953   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 01:10:46.915024   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 01:10:46.924881   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 01:10:46.935132   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 01:10:46.935192   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
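The grep/rm sequence above is minikube's stale-kubeconfig cleanup: each /etc/kubernetes/*.conf is checked for the expected control-plane endpoint and removed (rm -f semantics) if the file is missing or points elsewhere. A rough local Go sketch of the same check-then-remove loop, under the assumption that it runs on the node itself rather than over SSH as minikube does:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, path := range confs {
		data, err := os.ReadFile(path)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // config already references the expected endpoint
		}
		// Missing file or wrong endpoint: remove it, mirroring the
		// grep-then-`rm -f` sequence in the log above.
		if rmErr := os.Remove(path); rmErr == nil {
			fmt.Println("removed stale config:", path)
		}
	}
}
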
	I0924 01:10:46.945837   61989 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 01:10:47.018713   61989 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0924 01:10:47.018861   61989 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 01:10:47.159920   61989 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 01:10:47.160042   61989 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 01:10:47.160168   61989 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0924 01:10:47.349360   61989 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 01:10:47.351645   61989 out.go:235]   - Generating certificates and keys ...
	I0924 01:10:47.351763   61989 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 01:10:47.351918   61989 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 01:10:47.352040   61989 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 01:10:47.352118   61989 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 01:10:47.352205   61989 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 01:10:47.352298   61989 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 01:10:47.352401   61989 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 01:10:47.352481   61989 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 01:10:47.352574   61989 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 01:10:47.352662   61989 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 01:10:47.352705   61989 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 01:10:47.352767   61989 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 01:10:47.467301   61989 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 01:10:47.622085   61989 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 01:10:47.726807   61989 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 01:10:47.951249   61989 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 01:10:47.973392   61989 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 01:10:47.974396   61989 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 01:10:47.974440   61989 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 01:10:48.127629   61989 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 01:10:45.363348   61070 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 01:10:45.374505   61070 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0924 01:10:45.391838   61070 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 01:10:45.391947   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:45.391999   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-674057 minikube.k8s.io/updated_at=2024_09_24T01_10_45_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c minikube.k8s.io/name=no-preload-674057 minikube.k8s.io/primary=true
	I0924 01:10:45.583482   61070 ops.go:34] apiserver oom_adj: -16
	I0924 01:10:45.583498   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:46.083831   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:46.583990   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:47.084184   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:47.583925   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:48.083775   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:48.583654   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:49.084305   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:49.584636   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:50.084620   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:50.226320   61070 kubeadm.go:1113] duration metric: took 4.834429832s to wait for elevateKubeSystemPrivileges
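The repeated `kubectl get sa default` calls above poll until the "default" service account exists before the cluster-admin binding for kube-system takes effect (elevateKubeSystemPrivileges). A sketch of that retry loop, reusing the kubectl path and kubeconfig shown in the log; retry count and interval are assumptions:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for i := 0; i < 60; i++ {
		err := exec.Command("/var/lib/minikube/binaries/v1.31.1/kubectl",
			"get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
		if err == nil {
			fmt.Println("default service account exists")
			return
		}
		time.Sleep(500 * time.Millisecond) // the log shows roughly 500ms between attempts
	}
	fmt.Println("timed out waiting for default service account")
}
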
	I0924 01:10:50.226363   61070 kubeadm.go:394] duration metric: took 5m36.514145334s to StartCluster
	I0924 01:10:50.226386   61070 settings.go:142] acquiring lock: {Name:mk2498060bfcc1414033a486e76a223de88ed325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:10:50.226480   61070 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 01:10:50.229196   61070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/kubeconfig: {Name:mk3b29105ccc2e600682c419fcfd45ce5d22b5fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:10:50.229530   61070 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.161 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 01:10:50.229600   61070 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0924 01:10:50.229703   61070 addons.go:69] Setting storage-provisioner=true in profile "no-preload-674057"
	I0924 01:10:50.229725   61070 addons.go:234] Setting addon storage-provisioner=true in "no-preload-674057"
	W0924 01:10:50.229733   61070 addons.go:243] addon storage-provisioner should already be in state true
	I0924 01:10:50.229735   61070 addons.go:69] Setting default-storageclass=true in profile "no-preload-674057"
	I0924 01:10:50.229756   61070 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-674057"
	I0924 01:10:50.229764   61070 host.go:66] Checking if "no-preload-674057" exists ...
	I0924 01:10:50.229789   61070 config.go:182] Loaded profile config "no-preload-674057": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 01:10:50.229781   61070 addons.go:69] Setting metrics-server=true in profile "no-preload-674057"
	I0924 01:10:50.229847   61070 addons.go:234] Setting addon metrics-server=true in "no-preload-674057"
	W0924 01:10:50.229855   61070 addons.go:243] addon metrics-server should already be in state true
	I0924 01:10:50.229871   61070 host.go:66] Checking if "no-preload-674057" exists ...
	I0924 01:10:50.230228   61070 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:10:50.230268   61070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:10:50.230320   61070 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:10:50.230351   61070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:10:50.230355   61070 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:10:50.230389   61070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:10:50.231531   61070 out.go:177] * Verifying Kubernetes components...
	I0924 01:10:50.233506   61070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:10:50.252485   61070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36253
	I0924 01:10:50.252844   61070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34399
	I0924 01:10:50.253068   61070 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:10:50.253217   61070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45253
	I0924 01:10:50.253653   61070 main.go:141] libmachine: Using API Version  1
	I0924 01:10:50.253676   61070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:10:50.253705   61070 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:10:50.254050   61070 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:10:50.254203   61070 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:10:50.254236   61070 main.go:141] libmachine: Using API Version  1
	I0924 01:10:50.254250   61070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:10:50.254591   61070 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:10:50.254814   61070 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:10:50.254829   61070 main.go:141] libmachine: Using API Version  1
	I0924 01:10:50.254851   61070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:10:50.254864   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetState
	I0924 01:10:50.254984   61070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:10:50.255389   61070 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:10:50.255983   61070 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:10:50.256028   61070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:10:50.258757   61070 addons.go:234] Setting addon default-storageclass=true in "no-preload-674057"
	W0924 01:10:50.258781   61070 addons.go:243] addon default-storageclass should already be in state true
	I0924 01:10:50.258861   61070 host.go:66] Checking if "no-preload-674057" exists ...
	I0924 01:10:50.259186   61070 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:10:50.259237   61070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:10:50.276636   61070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44681
	I0924 01:10:50.276806   61070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45851
	I0924 01:10:50.277196   61070 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:10:50.277312   61070 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:10:50.277771   61070 main.go:141] libmachine: Using API Version  1
	I0924 01:10:50.277795   61070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:10:50.278022   61070 main.go:141] libmachine: Using API Version  1
	I0924 01:10:50.278044   61070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:10:50.278213   61070 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:10:50.278380   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetState
	I0924 01:10:50.278485   61070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39655
	I0924 01:10:50.278806   61070 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:10:50.278877   61070 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:10:50.279106   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetState
	I0924 01:10:50.279244   61070 main.go:141] libmachine: Using API Version  1
	I0924 01:10:50.279260   61070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:10:50.279668   61070 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:10:50.280215   61070 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:10:50.280263   61070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:10:50.280315   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:10:50.281796   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:10:50.282123   61070 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:10:50.283561   61070 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0924 01:10:48.129312   61989 out.go:235]   - Booting up control plane ...
	I0924 01:10:48.129446   61989 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 01:10:48.139821   61989 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 01:10:48.143120   61989 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 01:10:48.144038   61989 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 01:10:48.146275   61989 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0924 01:10:50.283656   61070 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 01:10:50.283674   61070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 01:10:50.283688   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:10:50.284778   61070 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0924 01:10:50.284793   61070 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0924 01:10:50.284811   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:10:50.288106   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:10:50.288477   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:10:50.288498   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:10:50.288524   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:10:50.288679   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:10:50.288867   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:10:50.289019   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:10:50.289185   61070 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/no-preload-674057/id_rsa Username:docker}
	I0924 01:10:50.289309   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:10:50.289338   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:10:50.289613   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:10:50.289773   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:10:50.289938   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:10:50.290073   61070 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/no-preload-674057/id_rsa Username:docker}
	I0924 01:10:50.323722   61070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38397
	I0924 01:10:50.324221   61070 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:10:50.324873   61070 main.go:141] libmachine: Using API Version  1
	I0924 01:10:50.324901   61070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:10:50.325334   61070 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:10:50.325572   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetState
	I0924 01:10:50.327779   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:10:50.328071   61070 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 01:10:50.328092   61070 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 01:10:50.328119   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:10:50.331721   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:10:50.331988   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:10:50.332022   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:10:50.332209   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:10:50.332455   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:10:50.332658   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:10:50.332837   61070 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/no-preload-674057/id_rsa Username:docker}
	I0924 01:10:50.471507   61070 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 01:10:50.502289   61070 node_ready.go:35] waiting up to 6m0s for node "no-preload-674057" to be "Ready" ...
	I0924 01:10:50.522752   61070 node_ready.go:49] node "no-preload-674057" has status "Ready":"True"
	I0924 01:10:50.522784   61070 node_ready.go:38] duration metric: took 20.46398ms for node "no-preload-674057" to be "Ready" ...
	I0924 01:10:50.522797   61070 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:10:50.537297   61070 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nqwzr" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:50.576703   61070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 01:10:50.638655   61070 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0924 01:10:50.638679   61070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0924 01:10:50.673535   61070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 01:10:50.691443   61070 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0924 01:10:50.691475   61070 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0924 01:10:50.791572   61070 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 01:10:50.791596   61070 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0924 01:10:50.887143   61070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 01:10:51.535179   61070 main.go:141] libmachine: Making call to close driver server
	I0924 01:10:51.535211   61070 main.go:141] libmachine: (no-preload-674057) Calling .Close
	I0924 01:10:51.535247   61070 main.go:141] libmachine: Making call to close driver server
	I0924 01:10:51.535269   61070 main.go:141] libmachine: (no-preload-674057) Calling .Close
	I0924 01:10:51.535531   61070 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:10:51.535553   61070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:10:51.535563   61070 main.go:141] libmachine: Making call to close driver server
	I0924 01:10:51.535571   61070 main.go:141] libmachine: (no-preload-674057) Calling .Close
	I0924 01:10:51.535572   61070 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:10:51.535584   61070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:10:51.535591   61070 main.go:141] libmachine: Making call to close driver server
	I0924 01:10:51.535598   61070 main.go:141] libmachine: (no-preload-674057) Calling .Close
	I0924 01:10:51.535809   61070 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:10:51.535830   61070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:10:51.536069   61070 main.go:141] libmachine: (no-preload-674057) DBG | Closing plugin on server side
	I0924 01:10:51.536078   61070 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:10:51.536088   61070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:10:51.563511   61070 main.go:141] libmachine: Making call to close driver server
	I0924 01:10:51.563537   61070 main.go:141] libmachine: (no-preload-674057) Calling .Close
	I0924 01:10:51.563856   61070 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:10:51.563880   61070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:10:51.800860   61070 main.go:141] libmachine: Making call to close driver server
	I0924 01:10:51.800889   61070 main.go:141] libmachine: (no-preload-674057) Calling .Close
	I0924 01:10:51.801192   61070 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:10:51.801211   61070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:10:51.801224   61070 main.go:141] libmachine: Making call to close driver server
	I0924 01:10:51.801233   61070 main.go:141] libmachine: (no-preload-674057) Calling .Close
	I0924 01:10:51.801527   61070 main.go:141] libmachine: (no-preload-674057) DBG | Closing plugin on server side
	I0924 01:10:51.801559   61070 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:10:51.801567   61070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:10:51.801577   61070 addons.go:475] Verifying addon metrics-server=true in "no-preload-674057"
	I0924 01:10:51.803735   61070 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0924 01:10:51.805581   61070 addons.go:510] duration metric: took 1.575985263s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
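The addon enablement above copies the manifests into /etc/kubernetes/addons/ on the node and applies them with the in-VM kubectl. A standalone Go sketch of that apply step, using the paths and binary location taken from the log (minikube passes the kubeconfig via the KUBECONFIG environment variable; a flag is used here for brevity); illustrative only, not minikube's implementation:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/storageclass.yaml",
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	args := []string{"apply", "--kubeconfig=/var/lib/minikube/kubeconfig"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	out, err := exec.Command("/var/lib/minikube/binaries/v1.31.1/kubectl", args...).CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("kubectl apply failed:", err)
	}
}
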
	I0924 01:10:52.544028   61070 pod_ready.go:103] pod "coredns-7c65d6cfc9-nqwzr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:10:53.564056   61070 pod_ready.go:93] pod "coredns-7c65d6cfc9-nqwzr" in "kube-system" namespace has status "Ready":"True"
	I0924 01:10:53.564089   61070 pod_ready.go:82] duration metric: took 3.026767371s for pod "coredns-7c65d6cfc9-nqwzr" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:53.564102   61070 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-x7cv6" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:53.573039   61070 pod_ready.go:93] pod "coredns-7c65d6cfc9-x7cv6" in "kube-system" namespace has status "Ready":"True"
	I0924 01:10:53.573076   61070 pod_ready.go:82] duration metric: took 8.965144ms for pod "coredns-7c65d6cfc9-x7cv6" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:53.573090   61070 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.081080   61070 pod_ready.go:93] pod "etcd-no-preload-674057" in "kube-system" namespace has status "Ready":"True"
	I0924 01:10:54.081105   61070 pod_ready.go:82] duration metric: took 508.007072ms for pod "etcd-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.081115   61070 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.087054   61070 pod_ready.go:93] pod "kube-apiserver-no-preload-674057" in "kube-system" namespace has status "Ready":"True"
	I0924 01:10:54.087079   61070 pod_ready.go:82] duration metric: took 5.957569ms for pod "kube-apiserver-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.087091   61070 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.094018   61070 pod_ready.go:93] pod "kube-controller-manager-no-preload-674057" in "kube-system" namespace has status "Ready":"True"
	I0924 01:10:54.094043   61070 pod_ready.go:82] duration metric: took 6.944048ms for pod "kube-controller-manager-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.094053   61070 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-k54d7" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.341307   61070 pod_ready.go:93] pod "kube-proxy-k54d7" in "kube-system" namespace has status "Ready":"True"
	I0924 01:10:54.341326   61070 pod_ready.go:82] duration metric: took 247.267987ms for pod "kube-proxy-k54d7" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.341335   61070 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.741702   61070 pod_ready.go:93] pod "kube-scheduler-no-preload-674057" in "kube-system" namespace has status "Ready":"True"
	I0924 01:10:54.741732   61070 pod_ready.go:82] duration metric: took 400.389532ms for pod "kube-scheduler-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.741742   61070 pod_ready.go:39] duration metric: took 4.218931841s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
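Each pod_ready wait above checks whether the pod reports the Ready condition as "True". An equivalent one-shot check via kubectl's jsonpath output, using a pod name and namespace from the log and assuming kubectl is on PATH with a configured kubeconfig:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "get", "pod",
		"-n", "kube-system", "coredns-7c65d6cfc9-nqwzr",
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		fmt.Println("kubectl get pod failed:", err)
		return
	}
	ready := strings.TrimSpace(string(out)) == "True"
	fmt.Println("pod Ready:", ready) // corresponds to has status "Ready":"True" in the log
}
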
	I0924 01:10:54.741759   61070 api_server.go:52] waiting for apiserver process to appear ...
	I0924 01:10:54.741827   61070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:10:54.758692   61070 api_server.go:72] duration metric: took 4.529120201s to wait for apiserver process to appear ...
	I0924 01:10:54.758723   61070 api_server.go:88] waiting for apiserver healthz status ...
	I0924 01:10:54.758744   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:10:54.764587   61070 api_server.go:279] https://192.168.50.161:8443/healthz returned 200:
	ok
	I0924 01:10:54.765620   61070 api_server.go:141] control plane version: v1.31.1
	I0924 01:10:54.765639   61070 api_server.go:131] duration metric: took 6.909845ms to wait for apiserver health ...
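The apiserver health wait above polls https://192.168.50.161:8443/healthz until it returns 200 with body "ok". A minimal Go sketch of that poll; certificate verification is skipped here purely for illustration (minikube trusts the cluster CA instead), and the attempt cap is an assumption:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < 60; i++ {
		resp, err := client.Get("https://192.168.50.161:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthz:", string(body)) // prints "ok"
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for apiserver healthz")
}
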
	I0924 01:10:54.765646   61070 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 01:10:54.945080   61070 system_pods.go:59] 9 kube-system pods found
	I0924 01:10:54.945121   61070 system_pods.go:61] "coredns-7c65d6cfc9-nqwzr" [9773e4bf-9848-47d8-b87b-897fbdd22d42] Running
	I0924 01:10:54.945128   61070 system_pods.go:61] "coredns-7c65d6cfc9-x7cv6" [9e96941a-b045-48e2-be06-50cc29f8ec25] Running
	I0924 01:10:54.945134   61070 system_pods.go:61] "etcd-no-preload-674057" [3ed2a57d-06a2-4ee2-9bc0-9042c1a88d09] Running
	I0924 01:10:54.945140   61070 system_pods.go:61] "kube-apiserver-no-preload-674057" [e915c4f9-a44e-4d36-9bf4-033de2a512f2] Running
	I0924 01:10:54.945145   61070 system_pods.go:61] "kube-controller-manager-no-preload-674057" [71492ec7-1fd8-49a3-973d-b62141c7b768] Running
	I0924 01:10:54.945150   61070 system_pods.go:61] "kube-proxy-k54d7" [b67ac411-52b5-4d58-9db3-d2d92b63a21f] Running
	I0924 01:10:54.945161   61070 system_pods.go:61] "kube-scheduler-no-preload-674057" [927b2a09-4fb1-499c-a2e6-6185a88facdd] Running
	I0924 01:10:54.945172   61070 system_pods.go:61] "metrics-server-6867b74b74-w5j2x" [57fd868f-ab5c-495a-869a-45e8f81f4014] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:10:54.945180   61070 system_pods.go:61] "storage-provisioner" [341fd764-a3bd-4d28-bc6a-6ec9fa8a5347] Running
	I0924 01:10:54.945191   61070 system_pods.go:74] duration metric: took 179.539019ms to wait for pod list to return data ...
	I0924 01:10:54.945205   61070 default_sa.go:34] waiting for default service account to be created ...
	I0924 01:10:55.141944   61070 default_sa.go:45] found service account: "default"
	I0924 01:10:55.141973   61070 default_sa.go:55] duration metric: took 196.760922ms for default service account to be created ...
	I0924 01:10:55.141984   61070 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 01:10:55.344235   61070 system_pods.go:86] 9 kube-system pods found
	I0924 01:10:55.344273   61070 system_pods.go:89] "coredns-7c65d6cfc9-nqwzr" [9773e4bf-9848-47d8-b87b-897fbdd22d42] Running
	I0924 01:10:55.344282   61070 system_pods.go:89] "coredns-7c65d6cfc9-x7cv6" [9e96941a-b045-48e2-be06-50cc29f8ec25] Running
	I0924 01:10:55.344288   61070 system_pods.go:89] "etcd-no-preload-674057" [3ed2a57d-06a2-4ee2-9bc0-9042c1a88d09] Running
	I0924 01:10:55.344293   61070 system_pods.go:89] "kube-apiserver-no-preload-674057" [e915c4f9-a44e-4d36-9bf4-033de2a512f2] Running
	I0924 01:10:55.344297   61070 system_pods.go:89] "kube-controller-manager-no-preload-674057" [71492ec7-1fd8-49a3-973d-b62141c7b768] Running
	I0924 01:10:55.344301   61070 system_pods.go:89] "kube-proxy-k54d7" [b67ac411-52b5-4d58-9db3-d2d92b63a21f] Running
	I0924 01:10:55.344304   61070 system_pods.go:89] "kube-scheduler-no-preload-674057" [927b2a09-4fb1-499c-a2e6-6185a88facdd] Running
	I0924 01:10:55.344310   61070 system_pods.go:89] "metrics-server-6867b74b74-w5j2x" [57fd868f-ab5c-495a-869a-45e8f81f4014] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:10:55.344315   61070 system_pods.go:89] "storage-provisioner" [341fd764-a3bd-4d28-bc6a-6ec9fa8a5347] Running
	I0924 01:10:55.344324   61070 system_pods.go:126] duration metric: took 202.334823ms to wait for k8s-apps to be running ...
	I0924 01:10:55.344361   61070 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 01:10:55.344406   61070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 01:10:55.361050   61070 system_svc.go:56] duration metric: took 16.6812ms WaitForService to wait for kubelet
	I0924 01:10:55.361082   61070 kubeadm.go:582] duration metric: took 5.13151522s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 01:10:55.361104   61070 node_conditions.go:102] verifying NodePressure condition ...
	I0924 01:10:55.541766   61070 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 01:10:55.541799   61070 node_conditions.go:123] node cpu capacity is 2
	I0924 01:10:55.541812   61070 node_conditions.go:105] duration metric: took 180.702708ms to run NodePressure ...
	I0924 01:10:55.541826   61070 start.go:241] waiting for startup goroutines ...
	I0924 01:10:55.541837   61070 start.go:246] waiting for cluster config update ...
	I0924 01:10:55.541850   61070 start.go:255] writing updated cluster config ...
	I0924 01:10:55.542100   61070 ssh_runner.go:195] Run: rm -f paused
	I0924 01:10:55.590629   61070 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0924 01:10:55.592850   61070 out.go:177] * Done! kubectl is now configured to use "no-preload-674057" cluster and "default" namespace by default
	I0924 01:11:28.148929   61989 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0924 01:11:28.149086   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:11:28.149360   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:11:33.150102   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:11:33.150283   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:11:43.151281   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:11:43.151540   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:12:03.152338   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:12:03.152562   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:12:43.151221   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:12:43.151503   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:12:43.151532   61989 kubeadm.go:310] 
	I0924 01:12:43.151585   61989 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0924 01:12:43.151645   61989 kubeadm.go:310] 		timed out waiting for the condition
	I0924 01:12:43.151655   61989 kubeadm.go:310] 
	I0924 01:12:43.151729   61989 kubeadm.go:310] 	This error is likely caused by:
	I0924 01:12:43.151779   61989 kubeadm.go:310] 		- The kubelet is not running
	I0924 01:12:43.151940   61989 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0924 01:12:43.151954   61989 kubeadm.go:310] 
	I0924 01:12:43.152095   61989 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0924 01:12:43.152154   61989 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0924 01:12:43.152201   61989 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0924 01:12:43.152207   61989 kubeadm.go:310] 
	I0924 01:12:43.152294   61989 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0924 01:12:43.152411   61989 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0924 01:12:43.152424   61989 kubeadm.go:310] 
	I0924 01:12:43.152565   61989 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0924 01:12:43.152653   61989 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0924 01:12:43.152718   61989 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0924 01:12:43.152794   61989 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0924 01:12:43.152802   61989 kubeadm.go:310] 
	I0924 01:12:43.153600   61989 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 01:12:43.153699   61989 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0924 01:12:43.153757   61989 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0924 01:12:43.153808   61989 kubeadm.go:394] duration metric: took 7m57.944266289s to StartCluster
	I0924 01:12:43.153845   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:12:43.153894   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:12:43.199866   61989 cri.go:89] found id: ""
	I0924 01:12:43.199896   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.199908   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:12:43.199916   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:12:43.199975   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:12:43.235387   61989 cri.go:89] found id: ""
	I0924 01:12:43.235420   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.235432   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:12:43.235441   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:12:43.235513   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:12:43.271255   61989 cri.go:89] found id: ""
	I0924 01:12:43.271290   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.271303   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:12:43.271312   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:12:43.271380   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:12:43.305842   61989 cri.go:89] found id: ""
	I0924 01:12:43.305870   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.305882   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:12:43.305891   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:12:43.305952   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:12:43.341956   61989 cri.go:89] found id: ""
	I0924 01:12:43.341983   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.342005   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:12:43.342013   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:12:43.342093   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:12:43.376362   61989 cri.go:89] found id: ""
	I0924 01:12:43.376399   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.376421   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:12:43.376431   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:12:43.376487   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:12:43.409351   61989 cri.go:89] found id: ""
	I0924 01:12:43.409378   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.409387   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:12:43.409392   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:12:43.409459   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:12:43.442446   61989 cri.go:89] found id: ""
	I0924 01:12:43.442479   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.442487   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:12:43.442497   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:12:43.442507   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:12:43.498980   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:12:43.499020   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:12:43.520090   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:12:43.520120   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:12:43.612212   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:12:43.612242   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:12:43.612255   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:12:43.727355   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:12:43.727395   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
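When no control-plane containers are found, the diagnostics above fall back to listing everything the runtime knows about. A Go sketch of the crictl check that the kubeadm output itself recommends ('crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'); it assumes crictl is installed and the cri-o socket path from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("crictl",
		"--runtime-endpoint", "/var/run/crio/crio.sock", "ps", "-a").CombinedOutput()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	// Keep only kube-* containers and drop pause sandboxes, mirroring
	// `... | grep kube | grep -v pause` from the advice above.
	for _, line := range strings.Split(string(out), "\n") {
		if strings.Contains(line, "kube") && !strings.Contains(line, "pause") {
			fmt.Println(line)
		}
	}
}
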
	W0924 01:12:43.770163   61989 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0924 01:12:43.770217   61989 out.go:270] * 
	W0924 01:12:43.770282   61989 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0924 01:12:43.770297   61989 out.go:270] * 
	W0924 01:12:43.771298   61989 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 01:12:43.775708   61989 out.go:201] 
	W0924 01:12:43.777139   61989 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0924 01:12:43.777186   61989 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0924 01:12:43.777214   61989 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0924 01:12:43.779580   61989 out.go:201] 
	
	
	==> CRI-O <==
	Sep 24 01:21:49 old-k8s-version-171598 crio[631]: time="2024-09-24 01:21:49.268165333Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140909268137548,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=14dc796f-1179-4747-8cf8-49386eafe3bc name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:21:49 old-k8s-version-171598 crio[631]: time="2024-09-24 01:21:49.268661588Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ac6d3b8a-bce8-416e-b532-091495b896d2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:21:49 old-k8s-version-171598 crio[631]: time="2024-09-24 01:21:49.268714337Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ac6d3b8a-bce8-416e-b532-091495b896d2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:21:49 old-k8s-version-171598 crio[631]: time="2024-09-24 01:21:49.268746359Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ac6d3b8a-bce8-416e-b532-091495b896d2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:21:49 old-k8s-version-171598 crio[631]: time="2024-09-24 01:21:49.315868122Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=40e78b28-8b33-4dad-bb07-80fb3ea15d1c name=/runtime.v1.RuntimeService/Version
	Sep 24 01:21:49 old-k8s-version-171598 crio[631]: time="2024-09-24 01:21:49.315954906Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=40e78b28-8b33-4dad-bb07-80fb3ea15d1c name=/runtime.v1.RuntimeService/Version
	Sep 24 01:21:49 old-k8s-version-171598 crio[631]: time="2024-09-24 01:21:49.317262040Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5b1c05fe-1eaf-4953-8f96-fc44948ae30b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:21:49 old-k8s-version-171598 crio[631]: time="2024-09-24 01:21:49.317888276Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140909317850013,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5b1c05fe-1eaf-4953-8f96-fc44948ae30b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:21:49 old-k8s-version-171598 crio[631]: time="2024-09-24 01:21:49.319032266Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e03e14fc-ee74-494d-8fcd-d3b6c032f2cc name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:21:49 old-k8s-version-171598 crio[631]: time="2024-09-24 01:21:49.319083476Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e03e14fc-ee74-494d-8fcd-d3b6c032f2cc name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:21:49 old-k8s-version-171598 crio[631]: time="2024-09-24 01:21:49.319124496Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e03e14fc-ee74-494d-8fcd-d3b6c032f2cc name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:21:49 old-k8s-version-171598 crio[631]: time="2024-09-24 01:21:49.352567726Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=341c7311-1856-4065-91a2-637fcf02a1df name=/runtime.v1.RuntimeService/Version
	Sep 24 01:21:49 old-k8s-version-171598 crio[631]: time="2024-09-24 01:21:49.352668172Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=341c7311-1856-4065-91a2-637fcf02a1df name=/runtime.v1.RuntimeService/Version
	Sep 24 01:21:49 old-k8s-version-171598 crio[631]: time="2024-09-24 01:21:49.353742734Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cc19047f-3e5e-47e7-9cf4-d0b983d90bc1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:21:49 old-k8s-version-171598 crio[631]: time="2024-09-24 01:21:49.354228647Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140909354192430,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cc19047f-3e5e-47e7-9cf4-d0b983d90bc1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:21:49 old-k8s-version-171598 crio[631]: time="2024-09-24 01:21:49.354886945Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8bee712b-fac4-4133-83cb-c0f91183eada name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:21:49 old-k8s-version-171598 crio[631]: time="2024-09-24 01:21:49.354933487Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8bee712b-fac4-4133-83cb-c0f91183eada name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:21:49 old-k8s-version-171598 crio[631]: time="2024-09-24 01:21:49.354972788Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=8bee712b-fac4-4133-83cb-c0f91183eada name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:21:49 old-k8s-version-171598 crio[631]: time="2024-09-24 01:21:49.390552880Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d05d9f54-bf9f-4d58-88ec-f0ad8026ac09 name=/runtime.v1.RuntimeService/Version
	Sep 24 01:21:49 old-k8s-version-171598 crio[631]: time="2024-09-24 01:21:49.390639877Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d05d9f54-bf9f-4d58-88ec-f0ad8026ac09 name=/runtime.v1.RuntimeService/Version
	Sep 24 01:21:49 old-k8s-version-171598 crio[631]: time="2024-09-24 01:21:49.392177022Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d3a5c55d-5067-4efe-bf4a-235f2d758497 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:21:49 old-k8s-version-171598 crio[631]: time="2024-09-24 01:21:49.392654904Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727140909392628821,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d3a5c55d-5067-4efe-bf4a-235f2d758497 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:21:49 old-k8s-version-171598 crio[631]: time="2024-09-24 01:21:49.393216773Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cc16c099-73d0-44e7-b32d-83fe6a635e08 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:21:49 old-k8s-version-171598 crio[631]: time="2024-09-24 01:21:49.393293406Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cc16c099-73d0-44e7-b32d-83fe6a635e08 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:21:49 old-k8s-version-171598 crio[631]: time="2024-09-24 01:21:49.393380339Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=cc16c099-73d0-44e7-b32d-83fe6a635e08 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Sep24 01:04] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051965] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.048547] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.882363] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.935977] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.544938] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.695614] systemd-fstab-generator[557]: Ignoring "noauto" option for root device
	[  +0.066394] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068035] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.210501] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.125361] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.257875] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +6.688915] systemd-fstab-generator[877]: Ignoring "noauto" option for root device
	[  +0.058357] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.792508] systemd-fstab-generator[998]: Ignoring "noauto" option for root device
	[ +11.354084] kauditd_printk_skb: 46 callbacks suppressed
	[Sep24 01:08] systemd-fstab-generator[5046]: Ignoring "noauto" option for root device
	[Sep24 01:10] systemd-fstab-generator[5322]: Ignoring "noauto" option for root device
	[  +0.074932] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 01:21:49 up 17 min,  0 users,  load average: 0.00, 0.02, 0.04
	Linux old-k8s-version-171598 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 24 01:21:49 old-k8s-version-171598 kubelet[6496]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Sep 24 01:21:49 old-k8s-version-171598 kubelet[6496]: net/http.(*Transport).dial(0xc00010a000, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000be99b0, 0x24, 0x0, 0x0, 0x0, ...)
	Sep 24 01:21:49 old-k8s-version-171598 kubelet[6496]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Sep 24 01:21:49 old-k8s-version-171598 kubelet[6496]: net/http.(*Transport).dialConn(0xc00010a000, 0x4f7fe00, 0xc000052030, 0x0, 0xc0009ec180, 0x5, 0xc000be99b0, 0x24, 0x0, 0xc000bd5200, ...)
	Sep 24 01:21:49 old-k8s-version-171598 kubelet[6496]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Sep 24 01:21:49 old-k8s-version-171598 kubelet[6496]: net/http.(*Transport).dialConnFor(0xc00010a000, 0xc0005ccf20)
	Sep 24 01:21:49 old-k8s-version-171598 kubelet[6496]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Sep 24 01:21:49 old-k8s-version-171598 kubelet[6496]: created by net/http.(*Transport).queueForDial
	Sep 24 01:21:49 old-k8s-version-171598 kubelet[6496]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Sep 24 01:21:49 old-k8s-version-171598 kubelet[6496]: goroutine 155 [select]:
	Sep 24 01:21:49 old-k8s-version-171598 kubelet[6496]: net.(*netFD).connect.func2(0x4f7fe40, 0xc000c1b5c0, 0xc000ba2300, 0xc000243440, 0xc0002433e0)
	Sep 24 01:21:49 old-k8s-version-171598 kubelet[6496]:         /usr/local/go/src/net/fd_unix.go:118 +0xc5
	Sep 24 01:21:49 old-k8s-version-171598 kubelet[6496]: created by net.(*netFD).connect
	Sep 24 01:21:49 old-k8s-version-171598 kubelet[6496]:         /usr/local/go/src/net/fd_unix.go:117 +0x234
	Sep 24 01:21:49 old-k8s-version-171598 kubelet[6496]: goroutine 119 [runnable]:
	Sep 24 01:21:49 old-k8s-version-171598 kubelet[6496]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc000a149b0, 0x1, 0x0, 0x0, 0x0, 0x0)
	Sep 24 01:21:49 old-k8s-version-171598 kubelet[6496]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Sep 24 01:21:49 old-k8s-version-171598 kubelet[6496]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000cf6f00, 0x0, 0x0)
	Sep 24 01:21:49 old-k8s-version-171598 kubelet[6496]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Sep 24 01:21:49 old-k8s-version-171598 kubelet[6496]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc0007c08c0)
	Sep 24 01:21:49 old-k8s-version-171598 kubelet[6496]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Sep 24 01:21:49 old-k8s-version-171598 kubelet[6496]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Sep 24 01:21:49 old-k8s-version-171598 kubelet[6496]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Sep 24 01:21:49 old-k8s-version-171598 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Sep 24 01:21:49 old-k8s-version-171598 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-171598 -n old-k8s-version-171598
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-171598 -n old-k8s-version-171598: exit status 2 (233.617149ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-171598" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.59s)
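The kubeadm output captured above identifies the kubelet as the component that never came up, quotes the commands to inspect it, and ends with minikube's suggestion to pass --extra-config=kubelet.cgroup-driver=systemd. A minimal way to run those same checks against this profile from the host, using only commands quoted in the log (the profile name old-k8s-version-171598 comes from the output above; whether systemd is actually the right cgroup driver for this guest image is an assumption, not something the log confirms):

    # Kubelet health on the node, as suggested by kubeadm in the log above
    minikube -p old-k8s-version-171598 ssh "sudo systemctl status kubelet"
    minikube -p old-k8s-version-171598 ssh "sudo journalctl -xeu kubelet"
    # Any control-plane containers CRI-O managed to start
    minikube -p old-k8s-version-171598 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a"
    # Retry the start with the cgroup driver the error text suggests (assumption: systemd matches this image)
    minikube start -p old-k8s-version-171598 --extra-config=kubelet.cgroup-driver=systemd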

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (511.58s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-465341 -n default-k8s-diff-port-465341
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-09-24 01:26:31.15879566 +0000 UTC m=+6532.589878117
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-465341 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-465341 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.084µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-465341 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
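The assertion that failed here waits up to 9m0s for pods labelled k8s-app=kubernetes-dashboard and then describes deploy/dashboard-metrics-scraper, expecting its image to contain registry.k8s.io/echoserver:1.4. A rough manual equivalent of that check, run with the same kubectl context the test uses (the jsonpath query is illustrative and not part of the test itself):

    # Pods the test waits for
    kubectl --context default-k8s-diff-port-465341 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
    # Deployment the test describes, plus the image it actually runs
    kubectl --context default-k8s-diff-port-465341 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper
    kubectl --context default-k8s-diff-port-465341 -n kubernetes-dashboard get deploy/dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'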
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-465341 -n default-k8s-diff-port-465341
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-465341 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-465341 logs -n 25: (1.421461089s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p embed-certs-650507            | embed-certs-650507           | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC | 24 Sep 24 00:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-650507                                  | embed-certs-650507           | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-465341  | default-k8s-diff-port-465341 | jenkins | v1.34.0 | 24 Sep 24 00:57 UTC | 24 Sep 24 00:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-465341 | jenkins | v1.34.0 | 24 Sep 24 00:57 UTC |                     |
	|         | default-k8s-diff-port-465341                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-674057                  | no-preload-674057            | jenkins | v1.34.0 | 24 Sep 24 00:58 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-674057                                   | no-preload-674057            | jenkins | v1.34.0 | 24 Sep 24 00:58 UTC | 24 Sep 24 01:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-650507                 | embed-certs-650507           | jenkins | v1.34.0 | 24 Sep 24 00:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-171598        | old-k8s-version-171598       | jenkins | v1.34.0 | 24 Sep 24 00:59 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-650507                                  | embed-certs-650507           | jenkins | v1.34.0 | 24 Sep 24 00:59 UTC | 24 Sep 24 01:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-465341       | default-k8s-diff-port-465341 | jenkins | v1.34.0 | 24 Sep 24 00:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-465341 | jenkins | v1.34.0 | 24 Sep 24 01:00 UTC | 24 Sep 24 01:08 UTC |
	|         | default-k8s-diff-port-465341                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-171598                              | old-k8s-version-171598       | jenkins | v1.34.0 | 24 Sep 24 01:00 UTC | 24 Sep 24 01:00 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-171598             | old-k8s-version-171598       | jenkins | v1.34.0 | 24 Sep 24 01:00 UTC | 24 Sep 24 01:00 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-171598                              | old-k8s-version-171598       | jenkins | v1.34.0 | 24 Sep 24 01:00 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-171598                              | old-k8s-version-171598       | jenkins | v1.34.0 | 24 Sep 24 01:24 UTC | 24 Sep 24 01:24 UTC |
	| start   | -p newest-cni-185978 --memory=2200 --alsologtostderr   | newest-cni-185978            | jenkins | v1.34.0 | 24 Sep 24 01:24 UTC | 24 Sep 24 01:25 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-674057                                   | no-preload-674057            | jenkins | v1.34.0 | 24 Sep 24 01:25 UTC | 24 Sep 24 01:25 UTC |
	| start   | -p auto-447054 --memory=3072                           | auto-447054                  | jenkins | v1.34.0 | 24 Sep 24 01:25 UTC | 24 Sep 24 01:26 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p embed-certs-650507                                  | embed-certs-650507           | jenkins | v1.34.0 | 24 Sep 24 01:25 UTC | 24 Sep 24 01:25 UTC |
	| start   | -p kindnet-447054                                      | kindnet-447054               | jenkins | v1.34.0 | 24 Sep 24 01:25 UTC | 24 Sep 24 01:26 UTC |
	|         | --memory=3072                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-185978             | newest-cni-185978            | jenkins | v1.34.0 | 24 Sep 24 01:25 UTC | 24 Sep 24 01:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-185978                                   | newest-cni-185978            | jenkins | v1.34.0 | 24 Sep 24 01:25 UTC | 24 Sep 24 01:25 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-185978                  | newest-cni-185978            | jenkins | v1.34.0 | 24 Sep 24 01:25 UTC | 24 Sep 24 01:25 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-185978 --memory=2200 --alsologtostderr   | newest-cni-185978            | jenkins | v1.34.0 | 24 Sep 24 01:25 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| ssh     | -p auto-447054 pgrep -a                                | auto-447054                  | jenkins | v1.34.0 | 24 Sep 24 01:26 UTC | 24 Sep 24 01:26 UTC |
	|         | kubelet                                                |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 01:25:39
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 01:25:39.759468   70464 out.go:345] Setting OutFile to fd 1 ...
	I0924 01:25:39.759646   70464 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 01:25:39.759658   70464 out.go:358] Setting ErrFile to fd 2...
	I0924 01:25:39.759666   70464 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 01:25:39.759987   70464 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7623/.minikube/bin
	I0924 01:25:39.760706   70464 out.go:352] Setting JSON to false
	I0924 01:25:39.761993   70464 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7684,"bootTime":1727133456,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0924 01:25:39.762088   70464 start.go:139] virtualization: kvm guest
	I0924 01:25:39.765367   70464 out.go:177] * [newest-cni-185978] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0924 01:25:39.766969   70464 out.go:177]   - MINIKUBE_LOCATION=19696
	I0924 01:25:39.766998   70464 notify.go:220] Checking for updates...
	I0924 01:25:39.769614   70464 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 01:25:39.770974   70464 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 01:25:39.772279   70464 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7623/.minikube
	I0924 01:25:39.773620   70464 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0924 01:25:39.775198   70464 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 01:25:39.777319   70464 config.go:182] Loaded profile config "newest-cni-185978": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 01:25:39.777921   70464 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:25:39.778003   70464 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:25:39.799220   70464 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38385
	I0924 01:25:39.799816   70464 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:25:39.800526   70464 main.go:141] libmachine: Using API Version  1
	I0924 01:25:39.800551   70464 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:25:39.800944   70464 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:25:39.801124   70464 main.go:141] libmachine: (newest-cni-185978) Calling .DriverName
	I0924 01:25:39.801378   70464 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 01:25:39.801800   70464 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:25:39.801841   70464 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:25:39.822917   70464 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33103
	I0924 01:25:39.823463   70464 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:25:39.824060   70464 main.go:141] libmachine: Using API Version  1
	I0924 01:25:39.824089   70464 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:25:39.824530   70464 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:25:39.824702   70464 main.go:141] libmachine: (newest-cni-185978) Calling .DriverName
	I0924 01:25:39.867658   70464 out.go:177] * Using the kvm2 driver based on existing profile
	I0924 01:25:39.869188   70464 start.go:297] selected driver: kvm2
	I0924 01:25:39.869206   70464 start.go:901] validating driver "kvm2" against &{Name:newest-cni-185978 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:newest-cni-185978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.50 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] Sta
rtHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 01:25:39.869355   70464 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 01:25:39.870331   70464 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 01:25:39.870419   70464 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19696-7623/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0924 01:25:39.891724   70464 install.go:137] /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0924 01:25:39.892266   70464 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0924 01:25:39.892303   70464 cni.go:84] Creating CNI manager for ""
	I0924 01:25:39.892377   70464 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:25:39.892436   70464 start.go:340] cluster config:
	{Name:newest-cni-185978 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-185978 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.50 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:
Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 01:25:39.892564   70464 iso.go:125] acquiring lock: {Name:mk8a983e49e920eac32e0f79c34143c3d478115e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 01:25:39.894646   70464 out.go:177] * Starting "newest-cni-185978" primary control-plane node in "newest-cni-185978" cluster
	I0924 01:25:40.731195   70020 main.go:141] libmachine: (kindnet-447054) DBG | domain kindnet-447054 has defined MAC address 52:54:00:ab:ee:c9 in network mk-kindnet-447054
	I0924 01:25:40.731678   70020 main.go:141] libmachine: (kindnet-447054) DBG | unable to find current IP address of domain kindnet-447054 in network mk-kindnet-447054
	I0924 01:25:40.731704   70020 main.go:141] libmachine: (kindnet-447054) DBG | I0924 01:25:40.731636   70176 retry.go:31] will retry after 3.004005343s: waiting for machine to come up
	I0924 01:25:43.111428   69667 kubeadm.go:310] [api-check] The API server is healthy after 6.001628874s
	I0924 01:25:43.127980   69667 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0924 01:25:43.147464   69667 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0924 01:25:43.195715   69667 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0924 01:25:43.195991   69667 kubeadm.go:310] [mark-control-plane] Marking the node auto-447054 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0924 01:25:43.213332   69667 kubeadm.go:310] [bootstrap-token] Using token: yka0qy.3xtltx860lll1y1s
	I0924 01:25:43.214880   69667 out.go:235]   - Configuring RBAC rules ...
	I0924 01:25:43.215007   69667 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0924 01:25:43.223346   69667 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0924 01:25:43.241138   69667 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0924 01:25:43.246018   69667 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0924 01:25:43.251593   69667 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0924 01:25:43.257797   69667 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0924 01:25:43.524962   69667 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0924 01:25:43.943744   69667 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0924 01:25:44.518995   69667 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0924 01:25:44.519902   69667 kubeadm.go:310] 
	I0924 01:25:44.519957   69667 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0924 01:25:44.519962   69667 kubeadm.go:310] 
	I0924 01:25:44.520053   69667 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0924 01:25:44.520065   69667 kubeadm.go:310] 
	I0924 01:25:44.520096   69667 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0924 01:25:44.520152   69667 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0924 01:25:44.520230   69667 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0924 01:25:44.520259   69667 kubeadm.go:310] 
	I0924 01:25:44.520353   69667 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0924 01:25:44.520364   69667 kubeadm.go:310] 
	I0924 01:25:44.520404   69667 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0924 01:25:44.520411   69667 kubeadm.go:310] 
	I0924 01:25:44.520453   69667 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0924 01:25:44.520520   69667 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0924 01:25:44.520577   69667 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0924 01:25:44.520583   69667 kubeadm.go:310] 
	I0924 01:25:44.520651   69667 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0924 01:25:44.520756   69667 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0924 01:25:44.520775   69667 kubeadm.go:310] 
	I0924 01:25:44.520897   69667 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token yka0qy.3xtltx860lll1y1s \
	I0924 01:25:44.520985   69667 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2d4dd9f37d73db6513005d0e16c5a35989d3b102eed8cac0bf67b360d3396aa8 \
	I0924 01:25:44.521004   69667 kubeadm.go:310] 	--control-plane 
	I0924 01:25:44.521010   69667 kubeadm.go:310] 
	I0924 01:25:44.521083   69667 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0924 01:25:44.521089   69667 kubeadm.go:310] 
	I0924 01:25:44.521166   69667 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token yka0qy.3xtltx860lll1y1s \
	I0924 01:25:44.521282   69667 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2d4dd9f37d73db6513005d0e16c5a35989d3b102eed8cac0bf67b360d3396aa8 
	I0924 01:25:44.522646   69667 kubeadm.go:310] W0924 01:25:33.809735     824 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 01:25:44.523022   69667 kubeadm.go:310] W0924 01:25:33.810621     824 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 01:25:44.523173   69667 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 01:25:44.523214   69667 cni.go:84] Creating CNI manager for ""
	I0924 01:25:44.523225   69667 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:25:44.525339   69667 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 01:25:39.896036   70464 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 01:25:39.896092   70464 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0924 01:25:39.896122   70464 cache.go:56] Caching tarball of preloaded images
	I0924 01:25:39.896225   70464 preload.go:172] Found /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0924 01:25:39.896241   70464 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0924 01:25:39.896416   70464 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/newest-cni-185978/config.json ...
	I0924 01:25:39.896639   70464 start.go:360] acquireMachinesLock for newest-cni-185978: {Name:mkdd0eb053efc5a47fd01c8410d7f603dccb8d0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 01:25:43.738398   70020 main.go:141] libmachine: (kindnet-447054) DBG | domain kindnet-447054 has defined MAC address 52:54:00:ab:ee:c9 in network mk-kindnet-447054
	I0924 01:25:43.738954   70020 main.go:141] libmachine: (kindnet-447054) DBG | unable to find current IP address of domain kindnet-447054 in network mk-kindnet-447054
	I0924 01:25:43.738978   70020 main.go:141] libmachine: (kindnet-447054) DBG | I0924 01:25:43.738851   70176 retry.go:31] will retry after 3.283258159s: waiting for machine to come up
	I0924 01:25:47.024111   70020 main.go:141] libmachine: (kindnet-447054) DBG | domain kindnet-447054 has defined MAC address 52:54:00:ab:ee:c9 in network mk-kindnet-447054
	I0924 01:25:47.024657   70020 main.go:141] libmachine: (kindnet-447054) DBG | unable to find current IP address of domain kindnet-447054 in network mk-kindnet-447054
	I0924 01:25:47.024686   70020 main.go:141] libmachine: (kindnet-447054) DBG | I0924 01:25:47.024595   70176 retry.go:31] will retry after 4.228023611s: waiting for machine to come up
	I0924 01:25:44.526888   69667 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 01:25:44.539333   69667 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0924 01:25:44.556015   69667 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 01:25:44.556074   69667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:25:44.556121   69667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-447054 minikube.k8s.io/updated_at=2024_09_24T01_25_44_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c minikube.k8s.io/name=auto-447054 minikube.k8s.io/primary=true
	I0924 01:25:44.583066   69667 ops.go:34] apiserver oom_adj: -16
	I0924 01:25:44.716578   69667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:25:45.216949   69667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:25:45.717480   69667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:25:46.217350   69667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:25:46.717227   69667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:25:47.217535   69667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:25:47.717023   69667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:25:48.216868   69667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:25:48.717380   69667 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:25:48.798533   69667 kubeadm.go:1113] duration metric: took 4.242520718s to wait for elevateKubeSystemPrivileges
	I0924 01:25:48.798573   69667 kubeadm.go:394] duration metric: took 15.150608794s to StartCluster
	I0924 01:25:48.798595   69667 settings.go:142] acquiring lock: {Name:mk2498060bfcc1414033a486e76a223de88ed325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:25:48.798671   69667 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 01:25:48.799700   69667 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/kubeconfig: {Name:mk3b29105ccc2e600682c419fcfd45ce5d22b5fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:25:48.799928   69667 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.50.23 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 01:25:48.799968   69667 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0924 01:25:48.800045   69667 addons.go:69] Setting storage-provisioner=true in profile "auto-447054"
	I0924 01:25:48.799969   69667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0924 01:25:48.800071   69667 addons.go:234] Setting addon storage-provisioner=true in "auto-447054"
	I0924 01:25:48.800098   69667 host.go:66] Checking if "auto-447054" exists ...
	I0924 01:25:48.800127   69667 addons.go:69] Setting default-storageclass=true in profile "auto-447054"
	I0924 01:25:48.800164   69667 config.go:182] Loaded profile config "auto-447054": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 01:25:48.800167   69667 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-447054"
	I0924 01:25:48.800535   69667 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:25:48.800584   69667 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:25:48.800733   69667 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:25:48.800779   69667 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:25:48.801822   69667 out.go:177] * Verifying Kubernetes components...
	I0924 01:25:48.803477   69667 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:25:48.817029   69667 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43721
	I0924 01:25:48.817029   69667 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39941
	I0924 01:25:48.817633   69667 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:25:48.817641   69667 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:25:48.818147   69667 main.go:141] libmachine: Using API Version  1
	I0924 01:25:48.818171   69667 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:25:48.818283   69667 main.go:141] libmachine: Using API Version  1
	I0924 01:25:48.818308   69667 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:25:48.818523   69667 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:25:48.818642   69667 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:25:48.818703   69667 main.go:141] libmachine: (auto-447054) Calling .GetState
	I0924 01:25:48.819250   69667 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:25:48.819315   69667 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:25:48.822443   69667 addons.go:234] Setting addon default-storageclass=true in "auto-447054"
	I0924 01:25:48.822483   69667 host.go:66] Checking if "auto-447054" exists ...
	I0924 01:25:48.822832   69667 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:25:48.822873   69667 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:25:48.836247   69667 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45503
	I0924 01:25:48.836928   69667 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:25:48.837612   69667 main.go:141] libmachine: Using API Version  1
	I0924 01:25:48.837640   69667 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:25:48.838013   69667 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:25:48.838219   69667 main.go:141] libmachine: (auto-447054) Calling .GetState
	I0924 01:25:48.839120   69667 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45127
	I0924 01:25:48.839481   69667 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:25:48.839999   69667 main.go:141] libmachine: Using API Version  1
	I0924 01:25:48.840050   69667 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:25:48.840113   69667 main.go:141] libmachine: (auto-447054) Calling .DriverName
	I0924 01:25:48.840479   69667 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:25:48.841059   69667 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:25:48.841102   69667 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:25:48.842727   69667 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:25:48.844218   69667 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 01:25:48.844241   69667 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 01:25:48.844264   69667 main.go:141] libmachine: (auto-447054) Calling .GetSSHHostname
	I0924 01:25:48.847848   69667 main.go:141] libmachine: (auto-447054) DBG | domain auto-447054 has defined MAC address 52:54:00:38:9e:05 in network mk-auto-447054
	I0924 01:25:48.848361   69667 main.go:141] libmachine: (auto-447054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:9e:05", ip: ""} in network mk-auto-447054: {Iface:virbr4 ExpiryTime:2024-09-24 02:25:19 +0000 UTC Type:0 Mac:52:54:00:38:9e:05 Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:auto-447054 Clientid:01:52:54:00:38:9e:05}
	I0924 01:25:48.848388   69667 main.go:141] libmachine: (auto-447054) DBG | domain auto-447054 has defined IP address 192.168.50.23 and MAC address 52:54:00:38:9e:05 in network mk-auto-447054
	I0924 01:25:48.848619   69667 main.go:141] libmachine: (auto-447054) Calling .GetSSHPort
	I0924 01:25:48.848827   69667 main.go:141] libmachine: (auto-447054) Calling .GetSSHKeyPath
	I0924 01:25:48.849048   69667 main.go:141] libmachine: (auto-447054) Calling .GetSSHUsername
	I0924 01:25:48.849218   69667 sshutil.go:53] new ssh client: &{IP:192.168.50.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/auto-447054/id_rsa Username:docker}
	I0924 01:25:48.858191   69667 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37435
	I0924 01:25:48.858613   69667 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:25:48.859103   69667 main.go:141] libmachine: Using API Version  1
	I0924 01:25:48.859132   69667 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:25:48.859475   69667 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:25:48.859664   69667 main.go:141] libmachine: (auto-447054) Calling .GetState
	I0924 01:25:48.861385   69667 main.go:141] libmachine: (auto-447054) Calling .DriverName
	I0924 01:25:48.861586   69667 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 01:25:48.861603   69667 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 01:25:48.861620   69667 main.go:141] libmachine: (auto-447054) Calling .GetSSHHostname
	I0924 01:25:48.864445   69667 main.go:141] libmachine: (auto-447054) DBG | domain auto-447054 has defined MAC address 52:54:00:38:9e:05 in network mk-auto-447054
	I0924 01:25:48.864896   69667 main.go:141] libmachine: (auto-447054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:9e:05", ip: ""} in network mk-auto-447054: {Iface:virbr4 ExpiryTime:2024-09-24 02:25:19 +0000 UTC Type:0 Mac:52:54:00:38:9e:05 Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:auto-447054 Clientid:01:52:54:00:38:9e:05}
	I0924 01:25:48.864927   69667 main.go:141] libmachine: (auto-447054) DBG | domain auto-447054 has defined IP address 192.168.50.23 and MAC address 52:54:00:38:9e:05 in network mk-auto-447054
	I0924 01:25:48.865136   69667 main.go:141] libmachine: (auto-447054) Calling .GetSSHPort
	I0924 01:25:48.865306   69667 main.go:141] libmachine: (auto-447054) Calling .GetSSHKeyPath
	I0924 01:25:48.865492   69667 main.go:141] libmachine: (auto-447054) Calling .GetSSHUsername
	I0924 01:25:48.865629   69667 sshutil.go:53] new ssh client: &{IP:192.168.50.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/auto-447054/id_rsa Username:docker}
	I0924 01:25:49.023919   69667 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 01:25:49.023949   69667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0924 01:25:49.087761   69667 node_ready.go:35] waiting up to 15m0s for node "auto-447054" to be "Ready" ...
	I0924 01:25:49.116862   69667 node_ready.go:49] node "auto-447054" has status "Ready":"True"
	I0924 01:25:49.116890   69667 node_ready.go:38] duration metric: took 29.079256ms for node "auto-447054" to be "Ready" ...
	I0924 01:25:49.116903   69667 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:25:49.146364   69667 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-4kxwr" in "kube-system" namespace to be "Ready" ...
	I0924 01:25:49.162644   69667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 01:25:49.172204   69667 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 01:25:49.634115   69667 start.go:971] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0924 01:25:49.634262   69667 main.go:141] libmachine: Making call to close driver server
	I0924 01:25:49.634296   69667 main.go:141] libmachine: (auto-447054) Calling .Close
	I0924 01:25:49.634595   69667 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:25:49.634614   69667 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:25:49.634624   69667 main.go:141] libmachine: Making call to close driver server
	I0924 01:25:49.634631   69667 main.go:141] libmachine: (auto-447054) Calling .Close
	I0924 01:25:49.634896   69667 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:25:49.634924   69667 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:25:49.634931   69667 main.go:141] libmachine: (auto-447054) DBG | Closing plugin on server side
	I0924 01:25:49.656752   69667 main.go:141] libmachine: Making call to close driver server
	I0924 01:25:49.656784   69667 main.go:141] libmachine: (auto-447054) Calling .Close
	I0924 01:25:49.657111   69667 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:25:49.657130   69667 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:25:49.657159   69667 main.go:141] libmachine: (auto-447054) DBG | Closing plugin on server side
	I0924 01:25:50.141467   69667 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-447054" context rescaled to 1 replicas
	I0924 01:25:50.233398   69667 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.061154791s)
	I0924 01:25:50.233454   69667 main.go:141] libmachine: Making call to close driver server
	I0924 01:25:50.233466   69667 main.go:141] libmachine: (auto-447054) Calling .Close
	I0924 01:25:50.233774   69667 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:25:50.233796   69667 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:25:50.233801   69667 main.go:141] libmachine: (auto-447054) DBG | Closing plugin on server side
	I0924 01:25:50.233810   69667 main.go:141] libmachine: Making call to close driver server
	I0924 01:25:50.233819   69667 main.go:141] libmachine: (auto-447054) Calling .Close
	I0924 01:25:50.234052   69667 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:25:50.234121   69667 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:25:50.234087   69667 main.go:141] libmachine: (auto-447054) DBG | Closing plugin on server side
	I0924 01:25:50.236834   69667 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0924 01:25:52.862537   70464 start.go:364] duration metric: took 12.965847341s to acquireMachinesLock for "newest-cni-185978"
	I0924 01:25:52.862597   70464 start.go:96] Skipping create...Using existing machine configuration
	I0924 01:25:52.862606   70464 fix.go:54] fixHost starting: 
	I0924 01:25:52.863093   70464 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:25:52.863144   70464 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:25:52.882085   70464 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42743
	I0924 01:25:52.882561   70464 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:25:52.883039   70464 main.go:141] libmachine: Using API Version  1
	I0924 01:25:52.883065   70464 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:25:52.883493   70464 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:25:52.883725   70464 main.go:141] libmachine: (newest-cni-185978) Calling .DriverName
	I0924 01:25:52.883883   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetState
	I0924 01:25:52.885538   70464 fix.go:112] recreateIfNeeded on newest-cni-185978: state=Stopped err=<nil>
	I0924 01:25:52.885564   70464 main.go:141] libmachine: (newest-cni-185978) Calling .DriverName
	W0924 01:25:52.885739   70464 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 01:25:52.887984   70464 out.go:177] * Restarting existing kvm2 VM for "newest-cni-185978" ...
	I0924 01:25:51.254500   70020 main.go:141] libmachine: (kindnet-447054) DBG | domain kindnet-447054 has defined MAC address 52:54:00:ab:ee:c9 in network mk-kindnet-447054
	I0924 01:25:51.254897   70020 main.go:141] libmachine: (kindnet-447054) Found IP for machine: 192.168.39.50
	I0924 01:25:51.254921   70020 main.go:141] libmachine: (kindnet-447054) DBG | domain kindnet-447054 has current primary IP address 192.168.39.50 and MAC address 52:54:00:ab:ee:c9 in network mk-kindnet-447054
	I0924 01:25:51.254929   70020 main.go:141] libmachine: (kindnet-447054) Reserving static IP address...
	I0924 01:25:51.255238   70020 main.go:141] libmachine: (kindnet-447054) DBG | unable to find host DHCP lease matching {name: "kindnet-447054", mac: "52:54:00:ab:ee:c9", ip: "192.168.39.50"} in network mk-kindnet-447054
	I0924 01:25:51.334471   70020 main.go:141] libmachine: (kindnet-447054) DBG | Getting to WaitForSSH function...
	I0924 01:25:51.334500   70020 main.go:141] libmachine: (kindnet-447054) Reserved static IP address: 192.168.39.50
	I0924 01:25:51.334511   70020 main.go:141] libmachine: (kindnet-447054) Waiting for SSH to be available...
	I0924 01:25:51.337608   70020 main.go:141] libmachine: (kindnet-447054) DBG | domain kindnet-447054 has defined MAC address 52:54:00:ab:ee:c9 in network mk-kindnet-447054
	I0924 01:25:51.338072   70020 main.go:141] libmachine: (kindnet-447054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:ee:c9", ip: ""} in network mk-kindnet-447054: {Iface:virbr1 ExpiryTime:2024-09-24 02:25:42 +0000 UTC Type:0 Mac:52:54:00:ab:ee:c9 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ab:ee:c9}
	I0924 01:25:51.338115   70020 main.go:141] libmachine: (kindnet-447054) DBG | domain kindnet-447054 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:ee:c9 in network mk-kindnet-447054
	I0924 01:25:51.338289   70020 main.go:141] libmachine: (kindnet-447054) DBG | Using SSH client type: external
	I0924 01:25:51.338316   70020 main.go:141] libmachine: (kindnet-447054) DBG | Using SSH private key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/kindnet-447054/id_rsa (-rw-------)
	I0924 01:25:51.338345   70020 main.go:141] libmachine: (kindnet-447054) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.50 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19696-7623/.minikube/machines/kindnet-447054/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 01:25:51.338386   70020 main.go:141] libmachine: (kindnet-447054) DBG | About to run SSH command:
	I0924 01:25:51.338411   70020 main.go:141] libmachine: (kindnet-447054) DBG | exit 0
	I0924 01:25:51.460592   70020 main.go:141] libmachine: (kindnet-447054) DBG | SSH cmd err, output: <nil>: 
	I0924 01:25:51.460919   70020 main.go:141] libmachine: (kindnet-447054) KVM machine creation complete!
	I0924 01:25:51.461250   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetConfigRaw
	I0924 01:25:51.461792   70020 main.go:141] libmachine: (kindnet-447054) Calling .DriverName
	I0924 01:25:51.462002   70020 main.go:141] libmachine: (kindnet-447054) Calling .DriverName
	I0924 01:25:51.462129   70020 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0924 01:25:51.462152   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetState
	I0924 01:25:51.463762   70020 main.go:141] libmachine: Detecting operating system of created instance...
	I0924 01:25:51.463775   70020 main.go:141] libmachine: Waiting for SSH to be available...
	I0924 01:25:51.463780   70020 main.go:141] libmachine: Getting to WaitForSSH function...
	I0924 01:25:51.463785   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetSSHHostname
	I0924 01:25:51.466007   70020 main.go:141] libmachine: (kindnet-447054) DBG | domain kindnet-447054 has defined MAC address 52:54:00:ab:ee:c9 in network mk-kindnet-447054
	I0924 01:25:51.466390   70020 main.go:141] libmachine: (kindnet-447054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:ee:c9", ip: ""} in network mk-kindnet-447054: {Iface:virbr1 ExpiryTime:2024-09-24 02:25:42 +0000 UTC Type:0 Mac:52:54:00:ab:ee:c9 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:kindnet-447054 Clientid:01:52:54:00:ab:ee:c9}
	I0924 01:25:51.466412   70020 main.go:141] libmachine: (kindnet-447054) DBG | domain kindnet-447054 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:ee:c9 in network mk-kindnet-447054
	I0924 01:25:51.466573   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetSSHPort
	I0924 01:25:51.466741   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetSSHKeyPath
	I0924 01:25:51.466887   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetSSHKeyPath
	I0924 01:25:51.466997   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetSSHUsername
	I0924 01:25:51.467136   70020 main.go:141] libmachine: Using SSH client type: native
	I0924 01:25:51.467323   70020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0924 01:25:51.467335   70020 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0924 01:25:51.571871   70020 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 01:25:51.571905   70020 main.go:141] libmachine: Detecting the provisioner...
	I0924 01:25:51.571917   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetSSHHostname
	I0924 01:25:51.574940   70020 main.go:141] libmachine: (kindnet-447054) DBG | domain kindnet-447054 has defined MAC address 52:54:00:ab:ee:c9 in network mk-kindnet-447054
	I0924 01:25:51.575337   70020 main.go:141] libmachine: (kindnet-447054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:ee:c9", ip: ""} in network mk-kindnet-447054: {Iface:virbr1 ExpiryTime:2024-09-24 02:25:42 +0000 UTC Type:0 Mac:52:54:00:ab:ee:c9 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:kindnet-447054 Clientid:01:52:54:00:ab:ee:c9}
	I0924 01:25:51.575365   70020 main.go:141] libmachine: (kindnet-447054) DBG | domain kindnet-447054 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:ee:c9 in network mk-kindnet-447054
	I0924 01:25:51.575556   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetSSHPort
	I0924 01:25:51.575753   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetSSHKeyPath
	I0924 01:25:51.575967   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetSSHKeyPath
	I0924 01:25:51.576134   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetSSHUsername
	I0924 01:25:51.576315   70020 main.go:141] libmachine: Using SSH client type: native
	I0924 01:25:51.576498   70020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0924 01:25:51.576510   70020 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0924 01:25:51.680735   70020 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0924 01:25:51.680823   70020 main.go:141] libmachine: found compatible host: buildroot
	I0924 01:25:51.680834   70020 main.go:141] libmachine: Provisioning with buildroot...
	I0924 01:25:51.680842   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetMachineName
	I0924 01:25:51.681121   70020 buildroot.go:166] provisioning hostname "kindnet-447054"
	I0924 01:25:51.681147   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetMachineName
	I0924 01:25:51.681332   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetSSHHostname
	I0924 01:25:51.684026   70020 main.go:141] libmachine: (kindnet-447054) DBG | domain kindnet-447054 has defined MAC address 52:54:00:ab:ee:c9 in network mk-kindnet-447054
	I0924 01:25:51.684469   70020 main.go:141] libmachine: (kindnet-447054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:ee:c9", ip: ""} in network mk-kindnet-447054: {Iface:virbr1 ExpiryTime:2024-09-24 02:25:42 +0000 UTC Type:0 Mac:52:54:00:ab:ee:c9 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:kindnet-447054 Clientid:01:52:54:00:ab:ee:c9}
	I0924 01:25:51.684493   70020 main.go:141] libmachine: (kindnet-447054) DBG | domain kindnet-447054 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:ee:c9 in network mk-kindnet-447054
	I0924 01:25:51.684674   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetSSHPort
	I0924 01:25:51.684861   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetSSHKeyPath
	I0924 01:25:51.685014   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetSSHKeyPath
	I0924 01:25:51.685152   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetSSHUsername
	I0924 01:25:51.685296   70020 main.go:141] libmachine: Using SSH client type: native
	I0924 01:25:51.685468   70020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0924 01:25:51.685480   70020 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-447054 && echo "kindnet-447054" | sudo tee /etc/hostname
	I0924 01:25:51.802413   70020 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-447054
	
	I0924 01:25:51.802462   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetSSHHostname
	I0924 01:25:51.805564   70020 main.go:141] libmachine: (kindnet-447054) DBG | domain kindnet-447054 has defined MAC address 52:54:00:ab:ee:c9 in network mk-kindnet-447054
	I0924 01:25:51.805985   70020 main.go:141] libmachine: (kindnet-447054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:ee:c9", ip: ""} in network mk-kindnet-447054: {Iface:virbr1 ExpiryTime:2024-09-24 02:25:42 +0000 UTC Type:0 Mac:52:54:00:ab:ee:c9 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:kindnet-447054 Clientid:01:52:54:00:ab:ee:c9}
	I0924 01:25:51.806018   70020 main.go:141] libmachine: (kindnet-447054) DBG | domain kindnet-447054 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:ee:c9 in network mk-kindnet-447054
	I0924 01:25:51.806214   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetSSHPort
	I0924 01:25:51.806464   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetSSHKeyPath
	I0924 01:25:51.806662   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetSSHKeyPath
	I0924 01:25:51.806844   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetSSHUsername
	I0924 01:25:51.807037   70020 main.go:141] libmachine: Using SSH client type: native
	I0924 01:25:51.807241   70020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0924 01:25:51.807259   70020 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-447054' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-447054/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-447054' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 01:25:51.916699   70020 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 01:25:51.916728   70020 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19696-7623/.minikube CaCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19696-7623/.minikube}
	I0924 01:25:51.916786   70020 buildroot.go:174] setting up certificates
	I0924 01:25:51.916803   70020 provision.go:84] configureAuth start
	I0924 01:25:51.916820   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetMachineName
	I0924 01:25:51.917092   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetIP
	I0924 01:25:51.919532   70020 main.go:141] libmachine: (kindnet-447054) DBG | domain kindnet-447054 has defined MAC address 52:54:00:ab:ee:c9 in network mk-kindnet-447054
	I0924 01:25:51.919924   70020 main.go:141] libmachine: (kindnet-447054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:ee:c9", ip: ""} in network mk-kindnet-447054: {Iface:virbr1 ExpiryTime:2024-09-24 02:25:42 +0000 UTC Type:0 Mac:52:54:00:ab:ee:c9 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:kindnet-447054 Clientid:01:52:54:00:ab:ee:c9}
	I0924 01:25:51.919951   70020 main.go:141] libmachine: (kindnet-447054) DBG | domain kindnet-447054 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:ee:c9 in network mk-kindnet-447054
	I0924 01:25:51.920074   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetSSHHostname
	I0924 01:25:51.922383   70020 main.go:141] libmachine: (kindnet-447054) DBG | domain kindnet-447054 has defined MAC address 52:54:00:ab:ee:c9 in network mk-kindnet-447054
	I0924 01:25:51.922728   70020 main.go:141] libmachine: (kindnet-447054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:ee:c9", ip: ""} in network mk-kindnet-447054: {Iface:virbr1 ExpiryTime:2024-09-24 02:25:42 +0000 UTC Type:0 Mac:52:54:00:ab:ee:c9 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:kindnet-447054 Clientid:01:52:54:00:ab:ee:c9}
	I0924 01:25:51.922765   70020 main.go:141] libmachine: (kindnet-447054) DBG | domain kindnet-447054 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:ee:c9 in network mk-kindnet-447054
	I0924 01:25:51.923037   70020 provision.go:143] copyHostCerts
	I0924 01:25:51.923099   70020 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem, removing ...
	I0924 01:25:51.923118   70020 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0924 01:25:51.923184   70020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem (1082 bytes)
	I0924 01:25:51.923313   70020 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem, removing ...
	I0924 01:25:51.923323   70020 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0924 01:25:51.923349   70020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem (1123 bytes)
	I0924 01:25:51.923422   70020 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem, removing ...
	I0924 01:25:51.923435   70020 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0924 01:25:51.923464   70020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem (1679 bytes)
	I0924 01:25:51.923564   70020 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem org=jenkins.kindnet-447054 san=[127.0.0.1 192.168.39.50 kindnet-447054 localhost minikube]
	I0924 01:25:52.235028   70020 provision.go:177] copyRemoteCerts
	I0924 01:25:52.235114   70020 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 01:25:52.235143   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetSSHHostname
	I0924 01:25:52.237933   70020 main.go:141] libmachine: (kindnet-447054) DBG | domain kindnet-447054 has defined MAC address 52:54:00:ab:ee:c9 in network mk-kindnet-447054
	I0924 01:25:52.238310   70020 main.go:141] libmachine: (kindnet-447054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:ee:c9", ip: ""} in network mk-kindnet-447054: {Iface:virbr1 ExpiryTime:2024-09-24 02:25:42 +0000 UTC Type:0 Mac:52:54:00:ab:ee:c9 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:kindnet-447054 Clientid:01:52:54:00:ab:ee:c9}
	I0924 01:25:52.238357   70020 main.go:141] libmachine: (kindnet-447054) DBG | domain kindnet-447054 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:ee:c9 in network mk-kindnet-447054
	I0924 01:25:52.238466   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetSSHPort
	I0924 01:25:52.238702   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetSSHKeyPath
	I0924 01:25:52.238894   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetSSHUsername
	I0924 01:25:52.239032   70020 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/kindnet-447054/id_rsa Username:docker}
	I0924 01:25:52.318850   70020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0924 01:25:52.343853   70020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0924 01:25:52.367784   70020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0924 01:25:52.391664   70020 provision.go:87] duration metric: took 474.844394ms to configureAuth
	I0924 01:25:52.391694   70020 buildroot.go:189] setting minikube options for container-runtime
	I0924 01:25:52.391869   70020 config.go:182] Loaded profile config "kindnet-447054": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 01:25:52.391996   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetSSHHostname
	I0924 01:25:52.394975   70020 main.go:141] libmachine: (kindnet-447054) DBG | domain kindnet-447054 has defined MAC address 52:54:00:ab:ee:c9 in network mk-kindnet-447054
	I0924 01:25:52.395446   70020 main.go:141] libmachine: (kindnet-447054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:ee:c9", ip: ""} in network mk-kindnet-447054: {Iface:virbr1 ExpiryTime:2024-09-24 02:25:42 +0000 UTC Type:0 Mac:52:54:00:ab:ee:c9 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:kindnet-447054 Clientid:01:52:54:00:ab:ee:c9}
	I0924 01:25:52.395480   70020 main.go:141] libmachine: (kindnet-447054) DBG | domain kindnet-447054 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:ee:c9 in network mk-kindnet-447054
	I0924 01:25:52.395727   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetSSHPort
	I0924 01:25:52.395950   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetSSHKeyPath
	I0924 01:25:52.396171   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetSSHKeyPath
	I0924 01:25:52.396363   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetSSHUsername
	I0924 01:25:52.396562   70020 main.go:141] libmachine: Using SSH client type: native
	I0924 01:25:52.396711   70020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0924 01:25:52.396726   70020 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 01:25:52.618156   70020 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 01:25:52.618185   70020 main.go:141] libmachine: Checking connection to Docker...
	I0924 01:25:52.618195   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetURL
	I0924 01:25:52.619564   70020 main.go:141] libmachine: (kindnet-447054) DBG | Using libvirt version 6000000
	I0924 01:25:52.621945   70020 main.go:141] libmachine: (kindnet-447054) DBG | domain kindnet-447054 has defined MAC address 52:54:00:ab:ee:c9 in network mk-kindnet-447054
	I0924 01:25:52.622377   70020 main.go:141] libmachine: (kindnet-447054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:ee:c9", ip: ""} in network mk-kindnet-447054: {Iface:virbr1 ExpiryTime:2024-09-24 02:25:42 +0000 UTC Type:0 Mac:52:54:00:ab:ee:c9 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:kindnet-447054 Clientid:01:52:54:00:ab:ee:c9}
	I0924 01:25:52.622413   70020 main.go:141] libmachine: (kindnet-447054) DBG | domain kindnet-447054 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:ee:c9 in network mk-kindnet-447054
	I0924 01:25:52.622653   70020 main.go:141] libmachine: Docker is up and running!
	I0924 01:25:52.622667   70020 main.go:141] libmachine: Reticulating splines...
	I0924 01:25:52.622674   70020 client.go:171] duration metric: took 25.820903437s to LocalClient.Create
	I0924 01:25:52.622694   70020 start.go:167] duration metric: took 25.820980889s to libmachine.API.Create "kindnet-447054"
	I0924 01:25:52.622700   70020 start.go:293] postStartSetup for "kindnet-447054" (driver="kvm2")
	I0924 01:25:52.622712   70020 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 01:25:52.622727   70020 main.go:141] libmachine: (kindnet-447054) Calling .DriverName
	I0924 01:25:52.623031   70020 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 01:25:52.623060   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetSSHHostname
	I0924 01:25:52.625614   70020 main.go:141] libmachine: (kindnet-447054) DBG | domain kindnet-447054 has defined MAC address 52:54:00:ab:ee:c9 in network mk-kindnet-447054
	I0924 01:25:52.625999   70020 main.go:141] libmachine: (kindnet-447054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:ee:c9", ip: ""} in network mk-kindnet-447054: {Iface:virbr1 ExpiryTime:2024-09-24 02:25:42 +0000 UTC Type:0 Mac:52:54:00:ab:ee:c9 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:kindnet-447054 Clientid:01:52:54:00:ab:ee:c9}
	I0924 01:25:52.626029   70020 main.go:141] libmachine: (kindnet-447054) DBG | domain kindnet-447054 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:ee:c9 in network mk-kindnet-447054
	I0924 01:25:52.626122   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetSSHPort
	I0924 01:25:52.626329   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetSSHKeyPath
	I0924 01:25:52.626484   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetSSHUsername
	I0924 01:25:52.626630   70020 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/kindnet-447054/id_rsa Username:docker}
	I0924 01:25:52.706412   70020 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 01:25:52.710374   70020 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 01:25:52.710424   70020 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/addons for local assets ...
	I0924 01:25:52.710495   70020 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/files for local assets ...
	I0924 01:25:52.710637   70020 filesync.go:149] local asset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> 147932.pem in /etc/ssl/certs
	I0924 01:25:52.710764   70020 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 01:25:52.721031   70020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /etc/ssl/certs/147932.pem (1708 bytes)
	I0924 01:25:52.747499   70020 start.go:296] duration metric: took 124.783836ms for postStartSetup
	I0924 01:25:52.747548   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetConfigRaw
	I0924 01:25:52.748167   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetIP
	I0924 01:25:52.751028   70020 main.go:141] libmachine: (kindnet-447054) DBG | domain kindnet-447054 has defined MAC address 52:54:00:ab:ee:c9 in network mk-kindnet-447054
	I0924 01:25:52.751511   70020 main.go:141] libmachine: (kindnet-447054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:ee:c9", ip: ""} in network mk-kindnet-447054: {Iface:virbr1 ExpiryTime:2024-09-24 02:25:42 +0000 UTC Type:0 Mac:52:54:00:ab:ee:c9 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:kindnet-447054 Clientid:01:52:54:00:ab:ee:c9}
	I0924 01:25:52.751535   70020 main.go:141] libmachine: (kindnet-447054) DBG | domain kindnet-447054 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:ee:c9 in network mk-kindnet-447054
	I0924 01:25:52.751907   70020 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kindnet-447054/config.json ...
	I0924 01:25:52.752132   70020 start.go:128] duration metric: took 25.973331291s to createHost
	I0924 01:25:52.752161   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetSSHHostname
	I0924 01:25:52.754573   70020 main.go:141] libmachine: (kindnet-447054) DBG | domain kindnet-447054 has defined MAC address 52:54:00:ab:ee:c9 in network mk-kindnet-447054
	I0924 01:25:52.754884   70020 main.go:141] libmachine: (kindnet-447054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:ee:c9", ip: ""} in network mk-kindnet-447054: {Iface:virbr1 ExpiryTime:2024-09-24 02:25:42 +0000 UTC Type:0 Mac:52:54:00:ab:ee:c9 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:kindnet-447054 Clientid:01:52:54:00:ab:ee:c9}
	I0924 01:25:52.754921   70020 main.go:141] libmachine: (kindnet-447054) DBG | domain kindnet-447054 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:ee:c9 in network mk-kindnet-447054
	I0924 01:25:52.755024   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetSSHPort
	I0924 01:25:52.755195   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetSSHKeyPath
	I0924 01:25:52.755434   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetSSHKeyPath
	I0924 01:25:52.755627   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetSSHUsername
	I0924 01:25:52.755767   70020 main.go:141] libmachine: Using SSH client type: native
	I0924 01:25:52.755944   70020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0924 01:25:52.755961   70020 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 01:25:52.862323   70020 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727141152.835470285
	
	I0924 01:25:52.862348   70020 fix.go:216] guest clock: 1727141152.835470285
	I0924 01:25:52.862355   70020 fix.go:229] Guest: 2024-09-24 01:25:52.835470285 +0000 UTC Remote: 2024-09-24 01:25:52.752148117 +0000 UTC m=+39.588755123 (delta=83.322168ms)
	I0924 01:25:52.862410   70020 fix.go:200] guest clock delta is within tolerance: 83.322168ms
	I0924 01:25:52.862418   70020 start.go:83] releasing machines lock for "kindnet-447054", held for 26.083790277s
	I0924 01:25:52.862447   70020 main.go:141] libmachine: (kindnet-447054) Calling .DriverName
	I0924 01:25:52.862725   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetIP
	I0924 01:25:52.866056   70020 main.go:141] libmachine: (kindnet-447054) DBG | domain kindnet-447054 has defined MAC address 52:54:00:ab:ee:c9 in network mk-kindnet-447054
	I0924 01:25:52.866449   70020 main.go:141] libmachine: (kindnet-447054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:ee:c9", ip: ""} in network mk-kindnet-447054: {Iface:virbr1 ExpiryTime:2024-09-24 02:25:42 +0000 UTC Type:0 Mac:52:54:00:ab:ee:c9 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:kindnet-447054 Clientid:01:52:54:00:ab:ee:c9}
	I0924 01:25:52.866479   70020 main.go:141] libmachine: (kindnet-447054) DBG | domain kindnet-447054 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:ee:c9 in network mk-kindnet-447054
	I0924 01:25:52.866684   70020 main.go:141] libmachine: (kindnet-447054) Calling .DriverName
	I0924 01:25:52.867460   70020 main.go:141] libmachine: (kindnet-447054) Calling .DriverName
	I0924 01:25:52.867686   70020 main.go:141] libmachine: (kindnet-447054) Calling .DriverName
	I0924 01:25:52.867788   70020 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 01:25:52.867845   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetSSHHostname
	I0924 01:25:52.868169   70020 ssh_runner.go:195] Run: cat /version.json
	I0924 01:25:52.868193   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetSSHHostname
	I0924 01:25:52.871159   70020 main.go:141] libmachine: (kindnet-447054) DBG | domain kindnet-447054 has defined MAC address 52:54:00:ab:ee:c9 in network mk-kindnet-447054
	I0924 01:25:52.871346   70020 main.go:141] libmachine: (kindnet-447054) DBG | domain kindnet-447054 has defined MAC address 52:54:00:ab:ee:c9 in network mk-kindnet-447054
	I0924 01:25:52.871556   70020 main.go:141] libmachine: (kindnet-447054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:ee:c9", ip: ""} in network mk-kindnet-447054: {Iface:virbr1 ExpiryTime:2024-09-24 02:25:42 +0000 UTC Type:0 Mac:52:54:00:ab:ee:c9 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:kindnet-447054 Clientid:01:52:54:00:ab:ee:c9}
	I0924 01:25:52.871580   70020 main.go:141] libmachine: (kindnet-447054) DBG | domain kindnet-447054 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:ee:c9 in network mk-kindnet-447054
	I0924 01:25:52.871644   70020 main.go:141] libmachine: (kindnet-447054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:ee:c9", ip: ""} in network mk-kindnet-447054: {Iface:virbr1 ExpiryTime:2024-09-24 02:25:42 +0000 UTC Type:0 Mac:52:54:00:ab:ee:c9 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:kindnet-447054 Clientid:01:52:54:00:ab:ee:c9}
	I0924 01:25:52.871674   70020 main.go:141] libmachine: (kindnet-447054) DBG | domain kindnet-447054 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:ee:c9 in network mk-kindnet-447054
	I0924 01:25:52.871693   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetSSHPort
	I0924 01:25:52.871823   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetSSHPort
	I0924 01:25:52.871895   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetSSHKeyPath
	I0924 01:25:52.871955   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetSSHKeyPath
	I0924 01:25:52.872052   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetSSHUsername
	I0924 01:25:52.872089   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetSSHUsername
	I0924 01:25:52.872172   70020 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/kindnet-447054/id_rsa Username:docker}
	I0924 01:25:52.872382   70020 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/kindnet-447054/id_rsa Username:docker}
	I0924 01:25:52.954116   70020 ssh_runner.go:195] Run: systemctl --version
	I0924 01:25:52.996274   70020 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 01:25:53.156485   70020 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 01:25:53.163265   70020 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 01:25:53.163346   70020 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 01:25:53.183987   70020 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 01:25:53.184010   70020 start.go:495] detecting cgroup driver to use...
	I0924 01:25:53.184235   70020 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 01:25:53.201326   70020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 01:25:50.238310   69667 addons.go:510] duration metric: took 1.438290692s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0924 01:25:51.155088   69667 pod_ready.go:103] pod "coredns-7c65d6cfc9-4kxwr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:25:53.218078   70020 docker.go:217] disabling cri-docker service (if available) ...
	I0924 01:25:53.218153   70020 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 01:25:53.232593   70020 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 01:25:53.247159   70020 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 01:25:53.373844   70020 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 01:25:53.546583   70020 docker.go:233] disabling docker service ...
	I0924 01:25:53.546642   70020 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 01:25:53.568152   70020 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 01:25:53.582923   70020 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 01:25:53.713610   70020 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 01:25:53.845479   70020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 01:25:53.860268   70020 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 01:25:53.879157   70020 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 01:25:53.879226   70020 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:25:53.890839   70020 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 01:25:53.890917   70020 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:25:53.903477   70020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:25:53.915361   70020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:25:53.926828   70020 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 01:25:53.940014   70020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:25:53.950754   70020 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:25:53.972728   70020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
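(The sed edits above configure CRI-O through the /etc/crio/crio.conf.d/02-crio.conf drop-in rather than the main crio.conf. A quick verification sketch; the expected values are taken directly from the sed expressions in the log:)
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# expected, per the commands above:
	#   pause_image = "registry.k8s.io/pause:3.10"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",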
	I0924 01:25:53.984303   70020 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 01:25:53.994232   70020 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 01:25:53.994288   70020 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 01:25:54.008370   70020 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
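(The sysctl probe fails with status 255 only because br_netfilter is not loaded yet; the next two commands load the module and enable IPv4 forwarding. A sketch of the same checks done by hand; persisting the settings via modules-load.d/sysctl.d is an assumption about what you might want outside this throwaway VM, not something the log does:)
	sudo modprobe br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables     # should now resolve instead of "cannot stat"
	cat /proc/sys/net/ipv4/ip_forward             # expect 1 after the echo above
	# optional, only if the settings should survive a reboot:
	echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf
	echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-ip-forward.conf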
	I0924 01:25:54.018531   70020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:25:54.144096   70020 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 01:25:54.267792   70020 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 01:25:54.267886   70020 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 01:25:54.272483   70020 start.go:563] Will wait 60s for crictl version
	I0924 01:25:54.272551   70020 ssh_runner.go:195] Run: which crictl
	I0924 01:25:54.276086   70020 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 01:25:54.314748   70020 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 01:25:54.314863   70020 ssh_runner.go:195] Run: crio --version
	I0924 01:25:54.343581   70020 ssh_runner.go:195] Run: crio --version
	I0924 01:25:54.372920   70020 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 01:25:52.889137   70464 main.go:141] libmachine: (newest-cni-185978) Calling .Start
	I0924 01:25:52.889408   70464 main.go:141] libmachine: (newest-cni-185978) Ensuring networks are active...
	I0924 01:25:52.890287   70464 main.go:141] libmachine: (newest-cni-185978) Ensuring network default is active
	I0924 01:25:52.890664   70464 main.go:141] libmachine: (newest-cni-185978) Ensuring network mk-newest-cni-185978 is active
	I0924 01:25:52.891073   70464 main.go:141] libmachine: (newest-cni-185978) Getting domain xml...
	I0924 01:25:52.891876   70464 main.go:141] libmachine: (newest-cni-185978) Creating domain...
	I0924 01:25:54.240586   70464 main.go:141] libmachine: (newest-cni-185978) Waiting to get IP...
	I0924 01:25:54.241532   70464 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:25:54.241912   70464 main.go:141] libmachine: (newest-cni-185978) DBG | unable to find current IP address of domain newest-cni-185978 in network mk-newest-cni-185978
	I0924 01:25:54.242024   70464 main.go:141] libmachine: (newest-cni-185978) DBG | I0924 01:25:54.241912   70590 retry.go:31] will retry after 194.001515ms: waiting for machine to come up
	I0924 01:25:54.437598   70464 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:25:54.438112   70464 main.go:141] libmachine: (newest-cni-185978) DBG | unable to find current IP address of domain newest-cni-185978 in network mk-newest-cni-185978
	I0924 01:25:54.438137   70464 main.go:141] libmachine: (newest-cni-185978) DBG | I0924 01:25:54.438069   70590 retry.go:31] will retry after 364.517713ms: waiting for machine to come up
	I0924 01:25:54.374228   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetIP
	I0924 01:25:54.377130   70020 main.go:141] libmachine: (kindnet-447054) DBG | domain kindnet-447054 has defined MAC address 52:54:00:ab:ee:c9 in network mk-kindnet-447054
	I0924 01:25:54.377451   70020 main.go:141] libmachine: (kindnet-447054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:ee:c9", ip: ""} in network mk-kindnet-447054: {Iface:virbr1 ExpiryTime:2024-09-24 02:25:42 +0000 UTC Type:0 Mac:52:54:00:ab:ee:c9 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:kindnet-447054 Clientid:01:52:54:00:ab:ee:c9}
	I0924 01:25:54.377482   70020 main.go:141] libmachine: (kindnet-447054) DBG | domain kindnet-447054 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:ee:c9 in network mk-kindnet-447054
	I0924 01:25:54.377757   70020 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0924 01:25:54.381654   70020 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:25:54.394367   70020 kubeadm.go:883] updating cluster {Name:kindnet-447054 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-447054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 01:25:54.394466   70020 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 01:25:54.394523   70020 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 01:25:54.428049   70020 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0924 01:25:54.428125   70020 ssh_runner.go:195] Run: which lz4
	I0924 01:25:54.432913   70020 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 01:25:54.437719   70020 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 01:25:54.437755   70020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0924 01:25:55.799440   70020 crio.go:462] duration metric: took 1.36655184s to copy over tarball
	I0924 01:25:55.799518   70020 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0924 01:25:58.101861   70020 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.302314344s)
	I0924 01:25:58.101887   70020 crio.go:469] duration metric: took 2.302416359s to extract the tarball
	I0924 01:25:58.101896   70020 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0924 01:25:58.140057   70020 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 01:25:58.196862   70020 crio.go:514] all images are preloaded for cri-o runtime.
	I0924 01:25:58.196887   70020 cache_images.go:84] Images are preloaded, skipping loading
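(The sequence above is the preload path: crictl initially reports no kube-apiserver image, so the ~389 MB preloaded-images tarball is copied over SSH and unpacked into /var, after which all control-plane images are local. A quick manual check on the node; the tar flags are the ones the log used, shown for reference only since the tarball is deleted right after extraction:)
	# before: an empty/partial image list is what triggers the tarball copy
	sudo crictl images --output json | grep -c kube-apiserver || true
	# extraction step used above (reference only; /preloaded.tar.lz4 is removed afterwards)
	# sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	# after: the control-plane images should be present
	sudo crictl images | grep registry.k8s.io/kube-apiserver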
	I0924 01:25:58.196897   70020 kubeadm.go:934] updating node { 192.168.39.50 8443 v1.31.1 crio true true} ...
	I0924 01:25:58.197008   70020 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-447054 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.50
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:kindnet-447054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I0924 01:25:58.197087   70020 ssh_runner.go:195] Run: crio config
	I0924 01:25:53.654112   69667 pod_ready.go:103] pod "coredns-7c65d6cfc9-4kxwr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:25:55.654728   69667 pod_ready.go:103] pod "coredns-7c65d6cfc9-4kxwr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:25:58.152486   69667 pod_ready.go:103] pod "coredns-7c65d6cfc9-4kxwr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:25:54.804699   70464 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:25:54.805375   70464 main.go:141] libmachine: (newest-cni-185978) DBG | unable to find current IP address of domain newest-cni-185978 in network mk-newest-cni-185978
	I0924 01:25:54.805399   70464 main.go:141] libmachine: (newest-cni-185978) DBG | I0924 01:25:54.805332   70590 retry.go:31] will retry after 369.456781ms: waiting for machine to come up
	I0924 01:25:55.177379   70464 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:25:55.177880   70464 main.go:141] libmachine: (newest-cni-185978) DBG | unable to find current IP address of domain newest-cni-185978 in network mk-newest-cni-185978
	I0924 01:25:55.177902   70464 main.go:141] libmachine: (newest-cni-185978) DBG | I0924 01:25:55.177854   70590 retry.go:31] will retry after 604.909041ms: waiting for machine to come up
	I0924 01:25:55.784619   70464 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:25:55.785214   70464 main.go:141] libmachine: (newest-cni-185978) DBG | unable to find current IP address of domain newest-cni-185978 in network mk-newest-cni-185978
	I0924 01:25:55.785245   70464 main.go:141] libmachine: (newest-cni-185978) DBG | I0924 01:25:55.785158   70590 retry.go:31] will retry after 478.077528ms: waiting for machine to come up
	I0924 01:25:56.265021   70464 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:25:56.265675   70464 main.go:141] libmachine: (newest-cni-185978) DBG | unable to find current IP address of domain newest-cni-185978 in network mk-newest-cni-185978
	I0924 01:25:56.265707   70464 main.go:141] libmachine: (newest-cni-185978) DBG | I0924 01:25:56.265616   70590 retry.go:31] will retry after 573.553478ms: waiting for machine to come up
	I0924 01:25:56.840497   70464 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:25:56.841031   70464 main.go:141] libmachine: (newest-cni-185978) DBG | unable to find current IP address of domain newest-cni-185978 in network mk-newest-cni-185978
	I0924 01:25:56.841059   70464 main.go:141] libmachine: (newest-cni-185978) DBG | I0924 01:25:56.840969   70590 retry.go:31] will retry after 805.811614ms: waiting for machine to come up
	I0924 01:25:57.649158   70464 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:25:57.649810   70464 main.go:141] libmachine: (newest-cni-185978) DBG | unable to find current IP address of domain newest-cni-185978 in network mk-newest-cni-185978
	I0924 01:25:57.649839   70464 main.go:141] libmachine: (newest-cni-185978) DBG | I0924 01:25:57.649720   70590 retry.go:31] will retry after 1.225337055s: waiting for machine to come up
	I0924 01:25:58.876801   70464 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:25:58.877387   70464 main.go:141] libmachine: (newest-cni-185978) DBG | unable to find current IP address of domain newest-cni-185978 in network mk-newest-cni-185978
	I0924 01:25:58.877412   70464 main.go:141] libmachine: (newest-cni-185978) DBG | I0924 01:25:58.877362   70590 retry.go:31] will retry after 1.269120533s: waiting for machine to come up
	I0924 01:25:58.243635   70020 cni.go:84] Creating CNI manager for "kindnet"
	I0924 01:25:58.243658   70020 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 01:25:58.243679   70020 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.50 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-447054 NodeName:kindnet-447054 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.50"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.50 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 01:25:58.243836   70020 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.50
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-447054"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.50
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.50"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 01:25:58.243905   70020 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 01:25:58.253922   70020 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 01:25:58.253998   70020 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 01:25:58.264151   70020 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0924 01:25:58.280614   70020 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 01:25:58.295695   70020 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0924 01:25:58.313939   70020 ssh_runner.go:195] Run: grep 192.168.39.50	control-plane.minikube.internal$ /etc/hosts
	I0924 01:25:58.318498   70020 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.50	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:25:58.330749   70020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:25:58.461059   70020 ssh_runner.go:195] Run: sudo systemctl start kubelet
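(With the kubelet drop-in written and the service started, kubelet health can be confirmed directly on the node before kubeadm runs; the healthz address is the one kubeadm polls further down in this log. A small sketch:)
	systemctl is-active kubelet
	curl -sf http://127.0.0.1:10248/healthz && echo " kubelet healthy"
	# effective flags come from the drop-in written above:
	sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf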
	I0924 01:25:58.479116   70020 certs.go:68] Setting up /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kindnet-447054 for IP: 192.168.39.50
	I0924 01:25:58.479142   70020 certs.go:194] generating shared ca certs ...
	I0924 01:25:58.479162   70020 certs.go:226] acquiring lock for ca certs: {Name:mk91de42c259e96c17f08dd58c70c00821bd0735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:25:58.479364   70020 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key
	I0924 01:25:58.479430   70020 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key
	I0924 01:25:58.479442   70020 certs.go:256] generating profile certs ...
	I0924 01:25:58.479519   70020 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kindnet-447054/client.key
	I0924 01:25:58.479541   70020 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kindnet-447054/client.crt with IP's: []
	I0924 01:25:58.744586   70020 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kindnet-447054/client.crt ...
	I0924 01:25:58.744617   70020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kindnet-447054/client.crt: {Name:mkcacfe48f07ac042b4e2dfbfd53201c9dcd0ca0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:25:58.744784   70020 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kindnet-447054/client.key ...
	I0924 01:25:58.744795   70020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kindnet-447054/client.key: {Name:mk675c0a87a75588ee7f16dc2aad0ef4d680fc83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:25:58.744872   70020 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kindnet-447054/apiserver.key.de35df75
	I0924 01:25:58.744887   70020 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kindnet-447054/apiserver.crt.de35df75 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.50]
	I0924 01:25:58.925173   70020 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kindnet-447054/apiserver.crt.de35df75 ...
	I0924 01:25:58.925208   70020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kindnet-447054/apiserver.crt.de35df75: {Name:mk61f3f1da6b981539e021f666a4d14e6f45d6ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:25:58.925394   70020 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kindnet-447054/apiserver.key.de35df75 ...
	I0924 01:25:58.925408   70020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kindnet-447054/apiserver.key.de35df75: {Name:mk3f4c304c5775da233b05291a7db40f2be9284e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:25:58.925489   70020 certs.go:381] copying /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kindnet-447054/apiserver.crt.de35df75 -> /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kindnet-447054/apiserver.crt
	I0924 01:25:58.925583   70020 certs.go:385] copying /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kindnet-447054/apiserver.key.de35df75 -> /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kindnet-447054/apiserver.key
	I0924 01:25:58.925650   70020 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kindnet-447054/proxy-client.key
	I0924 01:25:58.925673   70020 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kindnet-447054/proxy-client.crt with IP's: []
	I0924 01:25:59.111594   70020 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kindnet-447054/proxy-client.crt ...
	I0924 01:25:59.111630   70020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kindnet-447054/proxy-client.crt: {Name:mk221dc7155740c5a26ae013b2e696cd43c4811c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:25:59.111816   70020 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kindnet-447054/proxy-client.key ...
	I0924 01:25:59.111827   70020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kindnet-447054/proxy-client.key: {Name:mkbe41a3de4e35f48c7f27d5156143cdd48d6105 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:25:59.112019   70020 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem (1338 bytes)
	W0924 01:25:59.112065   70020 certs.go:480] ignoring /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793_empty.pem, impossibly tiny 0 bytes
	I0924 01:25:59.112080   70020 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem (1675 bytes)
	I0924 01:25:59.112107   70020 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem (1082 bytes)
	I0924 01:25:59.112129   70020 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem (1123 bytes)
	I0924 01:25:59.112150   70020 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem (1679 bytes)
	I0924 01:25:59.112192   70020 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem (1708 bytes)
	I0924 01:25:59.112866   70020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 01:25:59.140547   70020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 01:25:59.192350   70020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 01:25:59.225934   70020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0924 01:25:59.254734   70020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kindnet-447054/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0924 01:25:59.283587   70020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kindnet-447054/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 01:25:59.312145   70020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kindnet-447054/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 01:25:59.338564   70020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/kindnet-447054/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0924 01:25:59.364764   70020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /usr/share/ca-certificates/147932.pem (1708 bytes)
	I0924 01:25:59.389269   70020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 01:25:59.414443   70020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem --> /usr/share/ca-certificates/14793.pem (1338 bytes)
	I0924 01:25:59.439017   70020 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
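(The apiserver profile certificate generated above is signed for the service IP, loopback and the node IP, per the IP list passed to crypto.go. A sketch for double-checking the SANs after the copy to /var/lib/minikube/certs; the -ext flag assumes a reasonably recent OpenSSL:)
	sudo openssl x509 -noout -subject -ext subjectAltName \
	  -in /var/lib/minikube/certs/apiserver.crt
	# expect IP SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.39.50, as in the log above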
	I0924 01:25:59.456869   70020 ssh_runner.go:195] Run: openssl version
	I0924 01:25:59.463360   70020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 01:25:59.474760   70020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:25:59.479832   70020 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:25:59.479902   70020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:25:59.485890   70020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 01:25:59.497806   70020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14793.pem && ln -fs /usr/share/ca-certificates/14793.pem /etc/ssl/certs/14793.pem"
	I0924 01:25:59.509452   70020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14793.pem
	I0924 01:25:59.514021   70020 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 23:55 /usr/share/ca-certificates/14793.pem
	I0924 01:25:59.514107   70020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14793.pem
	I0924 01:25:59.520196   70020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14793.pem /etc/ssl/certs/51391683.0"
	I0924 01:25:59.532459   70020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147932.pem && ln -fs /usr/share/ca-certificates/147932.pem /etc/ssl/certs/147932.pem"
	I0924 01:25:59.544757   70020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147932.pem
	I0924 01:25:59.549461   70020 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 23:55 /usr/share/ca-certificates/147932.pem
	I0924 01:25:59.549517   70020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147932.pem
	I0924 01:25:59.555323   70020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147932.pem /etc/ssl/certs/3ec20f2e.0"
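(The "openssl x509 -hash" calls above compute the subject hash OpenSSL uses to look certificates up in /etc/ssl/certs; each PEM is then symlinked to "<hash>.0" (b5213941.0, 51391683.0, 3ec20f2e.0 here) so the system trust store picks it up. Reproducing the mapping by hand:)
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0                                           # symlink created above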
	I0924 01:25:59.567455   70020 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 01:25:59.571738   70020 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0924 01:25:59.571802   70020 kubeadm.go:392] StartCluster: {Name:kindnet-447054 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-447054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 01:25:59.571956   70020 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 01:25:59.572016   70020 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 01:25:59.612133   70020 cri.go:89] found id: ""
	I0924 01:25:59.612211   70020 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 01:25:59.623370   70020 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 01:25:59.633620   70020 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 01:25:59.643665   70020 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 01:25:59.643694   70020 kubeadm.go:157] found existing configuration files:
	
	I0924 01:25:59.643750   70020 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 01:25:59.655165   70020 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 01:25:59.655239   70020 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 01:25:59.666418   70020 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 01:25:59.676924   70020 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 01:25:59.676993   70020 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 01:25:59.687576   70020 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 01:25:59.698208   70020 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 01:25:59.698268   70020 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 01:25:59.709377   70020 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 01:25:59.719236   70020 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 01:25:59.719298   70020 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 01:25:59.729142   70020 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 01:25:59.793132   70020 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0924 01:25:59.793208   70020 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 01:25:59.925471   70020 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 01:25:59.925607   70020 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 01:25:59.925722   70020 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0924 01:25:59.933982   70020 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 01:26:00.110651   70020 out.go:235]   - Generating certificates and keys ...
	I0924 01:26:00.110784   70020 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 01:26:00.110897   70020 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 01:26:00.211024   70020 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0924 01:26:00.366624   70020 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0924 01:26:00.431577   70020 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0924 01:26:00.564307   70020 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0924 01:26:00.651567   70020 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0924 01:26:00.651943   70020 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kindnet-447054 localhost] and IPs [192.168.39.50 127.0.0.1 ::1]
	I0924 01:26:00.740625   70020 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0924 01:26:00.740841   70020 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kindnet-447054 localhost] and IPs [192.168.39.50 127.0.0.1 ::1]
	I0924 01:26:00.817475   70020 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0924 01:26:01.012704   70020 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0924 01:26:01.110655   70020 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0924 01:26:01.110872   70020 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 01:26:01.215052   70020 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 01:26:01.471887   70020 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0924 01:26:01.572550   70020 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 01:26:01.673173   70020 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 01:26:01.813727   70020 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 01:26:01.814538   70020 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 01:26:01.819185   70020 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 01:26:01.821357   70020 out.go:235]   - Booting up control plane ...
	I0924 01:26:01.821476   70020 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 01:26:01.821592   70020 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 01:26:01.821844   70020 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 01:26:01.841268   70020 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 01:26:01.850770   70020 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 01:26:01.850823   70020 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 01:26:01.980677   70020 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0924 01:26:01.980848   70020 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0924 01:26:02.980514   70020 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001041593s
	I0924 01:26:02.980611   70020 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0924 01:26:00.154289   69667 pod_ready.go:103] pod "coredns-7c65d6cfc9-4kxwr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:26:02.653827   69667 pod_ready.go:103] pod "coredns-7c65d6cfc9-4kxwr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:26:00.148934   70464 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:26:00.149541   70464 main.go:141] libmachine: (newest-cni-185978) DBG | unable to find current IP address of domain newest-cni-185978 in network mk-newest-cni-185978
	I0924 01:26:00.149570   70464 main.go:141] libmachine: (newest-cni-185978) DBG | I0924 01:26:00.149482   70590 retry.go:31] will retry after 2.129324219s: waiting for machine to come up
	I0924 01:26:02.281751   70464 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:26:02.282273   70464 main.go:141] libmachine: (newest-cni-185978) DBG | unable to find current IP address of domain newest-cni-185978 in network mk-newest-cni-185978
	I0924 01:26:02.282302   70464 main.go:141] libmachine: (newest-cni-185978) DBG | I0924 01:26:02.282223   70590 retry.go:31] will retry after 2.740373767s: waiting for machine to come up
	I0924 01:26:07.979029   70020 kubeadm.go:310] [api-check] The API server is healthy after 5.00168669s
	I0924 01:26:07.995580   70020 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0924 01:26:08.015432   70020 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0924 01:26:08.045755   70020 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0924 01:26:08.046004   70020 kubeadm.go:310] [mark-control-plane] Marking the node kindnet-447054 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0924 01:26:08.057558   70020 kubeadm.go:310] [bootstrap-token] Using token: 7srq1o.yf15q4avs33dlfdq
	I0924 01:26:08.058940   70020 out.go:235]   - Configuring RBAC rules ...
	I0924 01:26:08.059087   70020 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0924 01:26:08.068728   70020 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0924 01:26:08.078602   70020 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0924 01:26:08.082360   70020 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0924 01:26:08.090074   70020 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0924 01:26:08.095926   70020 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0924 01:26:05.153771   69667 pod_ready.go:103] pod "coredns-7c65d6cfc9-4kxwr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:26:07.153906   69667 pod_ready.go:103] pod "coredns-7c65d6cfc9-4kxwr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:26:08.390230   70020 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0924 01:26:08.812997   70020 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0924 01:26:09.385764   70020 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0924 01:26:09.386656   70020 kubeadm.go:310] 
	I0924 01:26:09.386749   70020 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0924 01:26:09.386778   70020 kubeadm.go:310] 
	I0924 01:26:09.386891   70020 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0924 01:26:09.386905   70020 kubeadm.go:310] 
	I0924 01:26:09.386936   70020 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0924 01:26:09.387014   70020 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0924 01:26:09.387063   70020 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0924 01:26:09.387070   70020 kubeadm.go:310] 
	I0924 01:26:09.387153   70020 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0924 01:26:09.387169   70020 kubeadm.go:310] 
	I0924 01:26:09.387225   70020 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0924 01:26:09.387236   70020 kubeadm.go:310] 
	I0924 01:26:09.387310   70020 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0924 01:26:09.387417   70020 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0924 01:26:09.387538   70020 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0924 01:26:09.387550   70020 kubeadm.go:310] 
	I0924 01:26:09.387658   70020 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0924 01:26:09.387755   70020 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0924 01:26:09.387766   70020 kubeadm.go:310] 
	I0924 01:26:09.387863   70020 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7srq1o.yf15q4avs33dlfdq \
	I0924 01:26:09.387967   70020 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2d4dd9f37d73db6513005d0e16c5a35989d3b102eed8cac0bf67b360d3396aa8 \
	I0924 01:26:09.387987   70020 kubeadm.go:310] 	--control-plane 
	I0924 01:26:09.387991   70020 kubeadm.go:310] 
	I0924 01:26:09.388115   70020 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0924 01:26:09.388134   70020 kubeadm.go:310] 
	I0924 01:26:09.388247   70020 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7srq1o.yf15q4avs33dlfdq \
	I0924 01:26:09.388418   70020 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2d4dd9f37d73db6513005d0e16c5a35989d3b102eed8cac0bf67b360d3396aa8 
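(The bootstrap token in the join command above expires after 24h, per the ttl in the generated kubeadm config. If a node needs to join later, a fresh command can be printed on the control plane; this is standard kubeadm usage rather than something this log shows, and it reuses the minikube binaries path from the init command above:)
	sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" \
	  kubeadm token create --print-join-command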
	I0924 01:26:09.389214   70020 kubeadm.go:310] W0924 01:25:59.770834     837 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 01:26:09.389552   70020 kubeadm.go:310] W0924 01:25:59.772153     837 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 01:26:09.389670   70020 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
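(The two deprecation warnings are benign here but note that the rendered config still uses kubeadm.k8s.io/v1beta3 on Kubernetes v1.31. The migration the warning itself suggests can be run against the generated file; the output path below is an arbitrary choice, and the target API version depends on the kubeadm release:)
	sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" \
	  kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml --new-config /tmp/kubeadm-migrated.yaml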
	I0924 01:26:09.389699   70020 cni.go:84] Creating CNI manager for "kindnet"
	I0924 01:26:09.391576   70020 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0924 01:26:05.023953   70464 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:26:05.024602   70464 main.go:141] libmachine: (newest-cni-185978) DBG | unable to find current IP address of domain newest-cni-185978 in network mk-newest-cni-185978
	I0924 01:26:05.024626   70464 main.go:141] libmachine: (newest-cni-185978) DBG | I0924 01:26:05.024543   70590 retry.go:31] will retry after 3.44237719s: waiting for machine to come up
	I0924 01:26:08.468405   70464 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:26:08.468879   70464 main.go:141] libmachine: (newest-cni-185978) DBG | unable to find current IP address of domain newest-cni-185978 in network mk-newest-cni-185978
	I0924 01:26:08.468901   70464 main.go:141] libmachine: (newest-cni-185978) DBG | I0924 01:26:08.468859   70590 retry.go:31] will retry after 4.450104528s: waiting for machine to come up
	I0924 01:26:09.392903   70020 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0924 01:26:09.398962   70020 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0924 01:26:09.398985   70020 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0924 01:26:09.420414   70020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
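(With --cni=kindnet, the manifest applied here deploys kindnet into kube-system. A quick readiness check; the app=kindnet label is an assumption based on the upstream kindnet manifest and is not shown in this log:)
	kubectl --context kindnet-447054 -n kube-system get pods -l app=kindnet -o wide
	kubectl --context kindnet-447054 get nodes -o wide   # node should go Ready once the CNI is up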
	I0924 01:26:09.700917   70020 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 01:26:09.700973   70020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:26:09.700998   70020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-447054 minikube.k8s.io/updated_at=2024_09_24T01_26_09_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c minikube.k8s.io/name=kindnet-447054 minikube.k8s.io/primary=true
	I0924 01:26:09.852487   70020 ops.go:34] apiserver oom_adj: -16
	I0924 01:26:09.877885   70020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:26:10.378697   70020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:26:10.878878   70020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:26:11.378685   70020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:26:11.878861   70020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:26:12.378943   70020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:26:12.878598   70020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:26:13.378037   70020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:26:13.481099   70020 kubeadm.go:1113] duration metric: took 3.78018527s to wait for elevateKubeSystemPrivileges
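(The repeated "kubectl get sa default" calls above are the wait for the default ServiceAccount to appear before cluster-admin is bound to kube-system:default via the minikube-rbac ClusterRoleBinding created a few lines earlier. Both can be checked directly once the cluster is up:)
	kubectl --context kindnet-447054 -n default get serviceaccount default
	kubectl --context kindnet-447054 get clusterrolebinding minikube-rbac -o wide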
	I0924 01:26:13.481143   70020 kubeadm.go:394] duration metric: took 13.909343751s to StartCluster
	I0924 01:26:13.481165   70020 settings.go:142] acquiring lock: {Name:mk2498060bfcc1414033a486e76a223de88ed325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:26:13.481260   70020 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 01:26:13.482837   70020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/kubeconfig: {Name:mk3b29105ccc2e600682c419fcfd45ce5d22b5fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:26:13.483134   70020 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 01:26:13.483194   70020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0924 01:26:13.483242   70020 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0924 01:26:13.483360   70020 addons.go:69] Setting storage-provisioner=true in profile "kindnet-447054"
	I0924 01:26:13.483365   70020 addons.go:69] Setting default-storageclass=true in profile "kindnet-447054"
	I0924 01:26:13.483380   70020 addons.go:234] Setting addon storage-provisioner=true in "kindnet-447054"
	I0924 01:26:13.483383   70020 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-447054"
	I0924 01:26:13.483401   70020 config.go:182] Loaded profile config "kindnet-447054": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 01:26:13.483412   70020 host.go:66] Checking if "kindnet-447054" exists ...
	I0924 01:26:13.483846   70020 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:26:13.483883   70020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:26:13.483848   70020 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:26:13.483990   70020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:26:13.484959   70020 out.go:177] * Verifying Kubernetes components...
	I0924 01:26:13.486384   70020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:26:13.500231   70020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45799
	I0924 01:26:13.500744   70020 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:26:13.501353   70020 main.go:141] libmachine: Using API Version  1
	I0924 01:26:13.501377   70020 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:26:13.501723   70020 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:26:13.501952   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetState
	I0924 01:26:13.504234   70020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44451
	I0924 01:26:13.504735   70020 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:26:13.505205   70020 main.go:141] libmachine: Using API Version  1
	I0924 01:26:13.505225   70020 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:26:13.505600   70020 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:26:13.505786   70020 addons.go:234] Setting addon default-storageclass=true in "kindnet-447054"
	I0924 01:26:13.505830   70020 host.go:66] Checking if "kindnet-447054" exists ...
	I0924 01:26:13.506181   70020 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:26:13.506215   70020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:26:13.506339   70020 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:26:13.506371   70020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:26:13.525681   70020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46495
	I0924 01:26:13.526262   70020 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:26:13.526506   70020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35053
	I0924 01:26:13.526793   70020 main.go:141] libmachine: Using API Version  1
	I0924 01:26:13.526815   70020 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:26:13.526834   70020 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:26:13.527192   70020 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:26:13.527317   70020 main.go:141] libmachine: Using API Version  1
	I0924 01:26:13.527345   70020 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:26:13.527431   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetState
	I0924 01:26:13.527708   70020 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:26:13.528369   70020 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:26:13.528407   70020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:26:13.529551   70020 main.go:141] libmachine: (kindnet-447054) Calling .DriverName
	I0924 01:26:13.531526   70020 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:26:09.653356   69667 pod_ready.go:103] pod "coredns-7c65d6cfc9-4kxwr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:26:12.151967   69667 pod_ready.go:103] pod "coredns-7c65d6cfc9-4kxwr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:26:12.920829   70464 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:26:12.921352   70464 main.go:141] libmachine: (newest-cni-185978) Found IP for machine: 192.168.72.50
	I0924 01:26:12.921376   70464 main.go:141] libmachine: (newest-cni-185978) Reserving static IP address...
	I0924 01:26:12.921390   70464 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has current primary IP address 192.168.72.50 and MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:26:12.921798   70464 main.go:141] libmachine: (newest-cni-185978) DBG | found host DHCP lease matching {name: "newest-cni-185978", mac: "52:54:00:fa:98:80", ip: "192.168.72.50"} in network mk-newest-cni-185978: {Iface:virbr3 ExpiryTime:2024-09-24 02:26:04 +0000 UTC Type:0 Mac:52:54:00:fa:98:80 Iaid: IPaddr:192.168.72.50 Prefix:24 Hostname:newest-cni-185978 Clientid:01:52:54:00:fa:98:80}
	I0924 01:26:12.921825   70464 main.go:141] libmachine: (newest-cni-185978) Reserved static IP address: 192.168.72.50
	I0924 01:26:12.921842   70464 main.go:141] libmachine: (newest-cni-185978) DBG | skip adding static IP to network mk-newest-cni-185978 - found existing host DHCP lease matching {name: "newest-cni-185978", mac: "52:54:00:fa:98:80", ip: "192.168.72.50"}
	I0924 01:26:12.921856   70464 main.go:141] libmachine: (newest-cni-185978) Waiting for SSH to be available...
	I0924 01:26:12.921864   70464 main.go:141] libmachine: (newest-cni-185978) DBG | Getting to WaitForSSH function...
	I0924 01:26:12.924264   70464 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:26:12.924762   70464 main.go:141] libmachine: (newest-cni-185978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:98:80", ip: ""} in network mk-newest-cni-185978: {Iface:virbr3 ExpiryTime:2024-09-24 02:26:04 +0000 UTC Type:0 Mac:52:54:00:fa:98:80 Iaid: IPaddr:192.168.72.50 Prefix:24 Hostname:newest-cni-185978 Clientid:01:52:54:00:fa:98:80}
	I0924 01:26:12.924797   70464 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined IP address 192.168.72.50 and MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:26:12.924972   70464 main.go:141] libmachine: (newest-cni-185978) DBG | Using SSH client type: external
	I0924 01:26:12.924998   70464 main.go:141] libmachine: (newest-cni-185978) DBG | Using SSH private key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/newest-cni-185978/id_rsa (-rw-------)
	I0924 01:26:12.925033   70464 main.go:141] libmachine: (newest-cni-185978) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.50 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19696-7623/.minikube/machines/newest-cni-185978/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 01:26:12.925050   70464 main.go:141] libmachine: (newest-cni-185978) DBG | About to run SSH command:
	I0924 01:26:12.925082   70464 main.go:141] libmachine: (newest-cni-185978) DBG | exit 0
	I0924 01:26:13.052278   70464 main.go:141] libmachine: (newest-cni-185978) DBG | SSH cmd err, output: <nil>: 
	I0924 01:26:13.052676   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetConfigRaw
	I0924 01:26:13.053275   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetIP
	I0924 01:26:13.055876   70464 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:26:13.056318   70464 main.go:141] libmachine: (newest-cni-185978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:98:80", ip: ""} in network mk-newest-cni-185978: {Iface:virbr3 ExpiryTime:2024-09-24 02:26:04 +0000 UTC Type:0 Mac:52:54:00:fa:98:80 Iaid: IPaddr:192.168.72.50 Prefix:24 Hostname:newest-cni-185978 Clientid:01:52:54:00:fa:98:80}
	I0924 01:26:13.056357   70464 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined IP address 192.168.72.50 and MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:26:13.056683   70464 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/newest-cni-185978/config.json ...
	I0924 01:26:13.056888   70464 machine.go:93] provisionDockerMachine start ...
	I0924 01:26:13.056907   70464 main.go:141] libmachine: (newest-cni-185978) Calling .DriverName
	I0924 01:26:13.057094   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHHostname
	I0924 01:26:13.059597   70464 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:26:13.060068   70464 main.go:141] libmachine: (newest-cni-185978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:98:80", ip: ""} in network mk-newest-cni-185978: {Iface:virbr3 ExpiryTime:2024-09-24 02:26:04 +0000 UTC Type:0 Mac:52:54:00:fa:98:80 Iaid: IPaddr:192.168.72.50 Prefix:24 Hostname:newest-cni-185978 Clientid:01:52:54:00:fa:98:80}
	I0924 01:26:13.060096   70464 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined IP address 192.168.72.50 and MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:26:13.060265   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHPort
	I0924 01:26:13.060472   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHKeyPath
	I0924 01:26:13.060731   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHKeyPath
	I0924 01:26:13.060892   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHUsername
	I0924 01:26:13.061126   70464 main.go:141] libmachine: Using SSH client type: native
	I0924 01:26:13.061326   70464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.50 22 <nil> <nil>}
	I0924 01:26:13.061337   70464 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 01:26:13.168624   70464 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 01:26:13.168663   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetMachineName
	I0924 01:26:13.168948   70464 buildroot.go:166] provisioning hostname "newest-cni-185978"
	I0924 01:26:13.168980   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetMachineName
	I0924 01:26:13.169259   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHHostname
	I0924 01:26:13.172252   70464 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:26:13.172770   70464 main.go:141] libmachine: (newest-cni-185978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:98:80", ip: ""} in network mk-newest-cni-185978: {Iface:virbr3 ExpiryTime:2024-09-24 02:26:04 +0000 UTC Type:0 Mac:52:54:00:fa:98:80 Iaid: IPaddr:192.168.72.50 Prefix:24 Hostname:newest-cni-185978 Clientid:01:52:54:00:fa:98:80}
	I0924 01:26:13.172801   70464 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined IP address 192.168.72.50 and MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:26:13.172941   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHPort
	I0924 01:26:13.173152   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHKeyPath
	I0924 01:26:13.173344   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHKeyPath
	I0924 01:26:13.173536   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHUsername
	I0924 01:26:13.173723   70464 main.go:141] libmachine: Using SSH client type: native
	I0924 01:26:13.173948   70464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.50 22 <nil> <nil>}
	I0924 01:26:13.173963   70464 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-185978 && echo "newest-cni-185978" | sudo tee /etc/hostname
	I0924 01:26:13.296479   70464 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-185978
	
	I0924 01:26:13.296515   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHHostname
	I0924 01:26:13.299613   70464 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:26:13.300022   70464 main.go:141] libmachine: (newest-cni-185978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:98:80", ip: ""} in network mk-newest-cni-185978: {Iface:virbr3 ExpiryTime:2024-09-24 02:26:04 +0000 UTC Type:0 Mac:52:54:00:fa:98:80 Iaid: IPaddr:192.168.72.50 Prefix:24 Hostname:newest-cni-185978 Clientid:01:52:54:00:fa:98:80}
	I0924 01:26:13.300061   70464 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined IP address 192.168.72.50 and MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:26:13.300190   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHPort
	I0924 01:26:13.300400   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHKeyPath
	I0924 01:26:13.300637   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHKeyPath
	I0924 01:26:13.300798   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHUsername
	I0924 01:26:13.301013   70464 main.go:141] libmachine: Using SSH client type: native
	I0924 01:26:13.301253   70464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.50 22 <nil> <nil>}
	I0924 01:26:13.301277   70464 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-185978' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-185978/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-185978' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 01:26:13.418365   70464 main.go:141] libmachine: SSH cmd err, output: <nil>: 
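The shell snippet above is how the provisioner makes /etc/hosts resolve the machine's own hostname: if no line already ends with the hostname, it either rewrites the existing 127.0.1.1 entry or appends one. A minimal Go sketch of that same decision applied to an in-memory copy of the file (the helper name ensureHostsEntry and the in-memory approach are illustrative assumptions, not minikube code):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostsEntry mirrors the logic of the shell snippet in the log, but on an
// in-memory copy of /etc/hosts: leave the file untouched if some line already
// ends with the hostname, otherwise rewrite the 127.0.1.1 line or append one.
func ensureHostsEntry(hosts, name string) string {
	if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(name)+`$`).MatchString(hosts) {
		return hosts // hostname already resolvable
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	before := "127.0.0.1 localhost\n127.0.1.1 minikube\n"
	fmt.Print(ensureHostsEntry(before, "newest-cni-185978"))
}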
	I0924 01:26:13.418402   70464 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19696-7623/.minikube CaCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19696-7623/.minikube}
	I0924 01:26:13.418436   70464 buildroot.go:174] setting up certificates
	I0924 01:26:13.418452   70464 provision.go:84] configureAuth start
	I0924 01:26:13.418470   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetMachineName
	I0924 01:26:13.418830   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetIP
	I0924 01:26:13.421877   70464 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:26:13.422232   70464 main.go:141] libmachine: (newest-cni-185978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:98:80", ip: ""} in network mk-newest-cni-185978: {Iface:virbr3 ExpiryTime:2024-09-24 02:26:04 +0000 UTC Type:0 Mac:52:54:00:fa:98:80 Iaid: IPaddr:192.168.72.50 Prefix:24 Hostname:newest-cni-185978 Clientid:01:52:54:00:fa:98:80}
	I0924 01:26:13.422358   70464 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined IP address 192.168.72.50 and MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:26:13.422441   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHHostname
	I0924 01:26:13.424796   70464 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:26:13.425263   70464 main.go:141] libmachine: (newest-cni-185978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:98:80", ip: ""} in network mk-newest-cni-185978: {Iface:virbr3 ExpiryTime:2024-09-24 02:26:04 +0000 UTC Type:0 Mac:52:54:00:fa:98:80 Iaid: IPaddr:192.168.72.50 Prefix:24 Hostname:newest-cni-185978 Clientid:01:52:54:00:fa:98:80}
	I0924 01:26:13.425307   70464 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined IP address 192.168.72.50 and MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:26:13.425560   70464 provision.go:143] copyHostCerts
	I0924 01:26:13.425619   70464 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem, removing ...
	I0924 01:26:13.425636   70464 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0924 01:26:13.425694   70464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem (1082 bytes)
	I0924 01:26:13.425786   70464 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem, removing ...
	I0924 01:26:13.425794   70464 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0924 01:26:13.425814   70464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem (1123 bytes)
	I0924 01:26:13.425867   70464 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem, removing ...
	I0924 01:26:13.425873   70464 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0924 01:26:13.425890   70464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem (1679 bytes)
	I0924 01:26:13.425931   70464 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem org=jenkins.newest-cni-185978 san=[127.0.0.1 192.168.72.50 localhost minikube newest-cni-185978]
	I0924 01:26:13.546433   70464 provision.go:177] copyRemoteCerts
	I0924 01:26:13.546488   70464 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 01:26:13.546511   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHHostname
	I0924 01:26:13.549810   70464 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:26:13.550200   70464 main.go:141] libmachine: (newest-cni-185978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:98:80", ip: ""} in network mk-newest-cni-185978: {Iface:virbr3 ExpiryTime:2024-09-24 02:26:04 +0000 UTC Type:0 Mac:52:54:00:fa:98:80 Iaid: IPaddr:192.168.72.50 Prefix:24 Hostname:newest-cni-185978 Clientid:01:52:54:00:fa:98:80}
	I0924 01:26:13.550226   70464 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined IP address 192.168.72.50 and MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:26:13.550488   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHPort
	I0924 01:26:13.550657   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHKeyPath
	I0924 01:26:13.550806   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHUsername
	I0924 01:26:13.550939   70464 sshutil.go:53] new ssh client: &{IP:192.168.72.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/newest-cni-185978/id_rsa Username:docker}
	I0924 01:26:13.639822   70464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0924 01:26:13.668696   70464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0924 01:26:13.694063   70464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 01:26:13.723721   70464 provision.go:87] duration metric: took 305.252193ms to configureAuth
	I0924 01:26:13.723751   70464 buildroot.go:189] setting minikube options for container-runtime
	I0924 01:26:13.723983   70464 config.go:182] Loaded profile config "newest-cni-185978": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 01:26:13.724069   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHHostname
	I0924 01:26:13.727660   70464 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:26:13.728083   70464 main.go:141] libmachine: (newest-cni-185978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:98:80", ip: ""} in network mk-newest-cni-185978: {Iface:virbr3 ExpiryTime:2024-09-24 02:26:04 +0000 UTC Type:0 Mac:52:54:00:fa:98:80 Iaid: IPaddr:192.168.72.50 Prefix:24 Hostname:newest-cni-185978 Clientid:01:52:54:00:fa:98:80}
	I0924 01:26:13.728111   70464 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined IP address 192.168.72.50 and MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:26:13.728305   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHPort
	I0924 01:26:13.728550   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHKeyPath
	I0924 01:26:13.728779   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHKeyPath
	I0924 01:26:13.728939   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHUsername
	I0924 01:26:13.729149   70464 main.go:141] libmachine: Using SSH client type: native
	I0924 01:26:13.729345   70464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.50 22 <nil> <nil>}
	I0924 01:26:13.729364   70464 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 01:26:13.980663   70464 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 01:26:13.980697   70464 machine.go:96] duration metric: took 923.795311ms to provisionDockerMachine
	I0924 01:26:13.980726   70464 start.go:293] postStartSetup for "newest-cni-185978" (driver="kvm2")
	I0924 01:26:13.980742   70464 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 01:26:13.980769   70464 main.go:141] libmachine: (newest-cni-185978) Calling .DriverName
	I0924 01:26:13.981118   70464 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 01:26:13.981154   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHHostname
	I0924 01:26:13.984362   70464 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:26:13.984798   70464 main.go:141] libmachine: (newest-cni-185978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:98:80", ip: ""} in network mk-newest-cni-185978: {Iface:virbr3 ExpiryTime:2024-09-24 02:26:04 +0000 UTC Type:0 Mac:52:54:00:fa:98:80 Iaid: IPaddr:192.168.72.50 Prefix:24 Hostname:newest-cni-185978 Clientid:01:52:54:00:fa:98:80}
	I0924 01:26:13.984838   70464 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined IP address 192.168.72.50 and MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:26:13.985009   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHPort
	I0924 01:26:13.985269   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHKeyPath
	I0924 01:26:13.985452   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHUsername
	I0924 01:26:13.985608   70464 sshutil.go:53] new ssh client: &{IP:192.168.72.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/newest-cni-185978/id_rsa Username:docker}
	I0924 01:26:14.075785   70464 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 01:26:14.080533   70464 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 01:26:14.080566   70464 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/addons for local assets ...
	I0924 01:26:14.080648   70464 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/files for local assets ...
	I0924 01:26:14.080741   70464 filesync.go:149] local asset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> 147932.pem in /etc/ssl/certs
	I0924 01:26:14.080827   70464 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 01:26:14.094237   70464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /etc/ssl/certs/147932.pem (1708 bytes)
	I0924 01:26:14.122770   70464 start.go:296] duration metric: took 142.027467ms for postStartSetup
	I0924 01:26:14.122812   70464 fix.go:56] duration metric: took 21.260207381s for fixHost
	I0924 01:26:14.122839   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHHostname
	I0924 01:26:14.126508   70464 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:26:14.126917   70464 main.go:141] libmachine: (newest-cni-185978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:98:80", ip: ""} in network mk-newest-cni-185978: {Iface:virbr3 ExpiryTime:2024-09-24 02:26:04 +0000 UTC Type:0 Mac:52:54:00:fa:98:80 Iaid: IPaddr:192.168.72.50 Prefix:24 Hostname:newest-cni-185978 Clientid:01:52:54:00:fa:98:80}
	I0924 01:26:14.126952   70464 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined IP address 192.168.72.50 and MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:26:14.127161   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHPort
	I0924 01:26:14.127444   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHKeyPath
	I0924 01:26:14.127652   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHKeyPath
	I0924 01:26:14.127811   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHUsername
	I0924 01:26:14.128026   70464 main.go:141] libmachine: Using SSH client type: native
	I0924 01:26:14.128235   70464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.50 22 <nil> <nil>}
	I0924 01:26:14.128248   70464 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 01:26:14.241175   70464 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727141174.215784563
	
	I0924 01:26:14.241203   70464 fix.go:216] guest clock: 1727141174.215784563
	I0924 01:26:14.241213   70464 fix.go:229] Guest: 2024-09-24 01:26:14.215784563 +0000 UTC Remote: 2024-09-24 01:26:14.122816506 +0000 UTC m=+34.415250562 (delta=92.968057ms)
	I0924 01:26:14.241239   70464 fix.go:200] guest clock delta is within tolerance: 92.968057ms
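The fix.go lines above read the guest clock over SSH (date +%s.%N) and compare it with the host clock, accepting the machine when the drift stays within a tolerance. A minimal standard-library Go sketch of that delta check (the helper name and the 2-second tolerance are assumptions; only the two timestamps are taken from the log):

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest clock reading stays within the given
// skew tolerance of the host clock, mirroring the delta check logged by fix.go.
// The helper name and the 2s tolerance are assumptions, not minikube constants.
func withinTolerance(host, guest time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// Timestamps taken from the Remote/Guest values in the log above.
	host := time.Date(2024, 9, 24, 1, 26, 14, 122816506, time.UTC)
	guest := time.Date(2024, 9, 24, 1, 26, 14, 215784563, time.UTC)
	delta, ok := withinTolerance(host, guest, 2*time.Second)
	fmt.Printf("delta=%v withinTolerance=%v\n", delta, ok) // delta=92.968057ms withinTolerance=true
}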
	I0924 01:26:14.241246   70464 start.go:83] releasing machines lock for "newest-cni-185978", held for 21.378675964s
	I0924 01:26:14.241291   70464 main.go:141] libmachine: (newest-cni-185978) Calling .DriverName
	I0924 01:26:14.241615   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetIP
	I0924 01:26:14.244658   70464 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:26:14.245050   70464 main.go:141] libmachine: (newest-cni-185978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:98:80", ip: ""} in network mk-newest-cni-185978: {Iface:virbr3 ExpiryTime:2024-09-24 02:26:04 +0000 UTC Type:0 Mac:52:54:00:fa:98:80 Iaid: IPaddr:192.168.72.50 Prefix:24 Hostname:newest-cni-185978 Clientid:01:52:54:00:fa:98:80}
	I0924 01:26:14.245091   70464 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined IP address 192.168.72.50 and MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:26:14.245282   70464 main.go:141] libmachine: (newest-cni-185978) Calling .DriverName
	I0924 01:26:14.245825   70464 main.go:141] libmachine: (newest-cni-185978) Calling .DriverName
	I0924 01:26:14.246012   70464 main.go:141] libmachine: (newest-cni-185978) Calling .DriverName
	I0924 01:26:14.246122   70464 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 01:26:14.246169   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHHostname
	I0924 01:26:14.246275   70464 ssh_runner.go:195] Run: cat /version.json
	I0924 01:26:14.246303   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHHostname
	I0924 01:26:14.249543   70464 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:26:14.249735   70464 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:26:14.249994   70464 main.go:141] libmachine: (newest-cni-185978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:98:80", ip: ""} in network mk-newest-cni-185978: {Iface:virbr3 ExpiryTime:2024-09-24 02:26:04 +0000 UTC Type:0 Mac:52:54:00:fa:98:80 Iaid: IPaddr:192.168.72.50 Prefix:24 Hostname:newest-cni-185978 Clientid:01:52:54:00:fa:98:80}
	I0924 01:26:14.250018   70464 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined IP address 192.168.72.50 and MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:26:14.250171   70464 main.go:141] libmachine: (newest-cni-185978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:98:80", ip: ""} in network mk-newest-cni-185978: {Iface:virbr3 ExpiryTime:2024-09-24 02:26:04 +0000 UTC Type:0 Mac:52:54:00:fa:98:80 Iaid: IPaddr:192.168.72.50 Prefix:24 Hostname:newest-cni-185978 Clientid:01:52:54:00:fa:98:80}
	I0924 01:26:14.250196   70464 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined IP address 192.168.72.50 and MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:26:14.250289   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHPort
	I0924 01:26:14.250442   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHPort
	I0924 01:26:14.250453   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHKeyPath
	I0924 01:26:14.250589   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHKeyPath
	I0924 01:26:14.250641   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHUsername
	I0924 01:26:14.250732   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHUsername
	I0924 01:26:14.250796   70464 sshutil.go:53] new ssh client: &{IP:192.168.72.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/newest-cni-185978/id_rsa Username:docker}
	I0924 01:26:14.250870   70464 sshutil.go:53] new ssh client: &{IP:192.168.72.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/newest-cni-185978/id_rsa Username:docker}
	I0924 01:26:14.334862   70464 ssh_runner.go:195] Run: systemctl --version
	I0924 01:26:14.375221   70464 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 01:26:14.519508   70464 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 01:26:14.527996   70464 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 01:26:14.528074   70464 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 01:26:14.550774   70464 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 01:26:14.550797   70464 start.go:495] detecting cgroup driver to use...
	I0924 01:26:14.550850   70464 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 01:26:14.566961   70464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 01:26:14.581827   70464 docker.go:217] disabling cri-docker service (if available) ...
	I0924 01:26:14.581902   70464 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 01:26:14.595450   70464 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 01:26:14.609439   70464 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 01:26:14.730266   70464 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 01:26:13.533101   70020 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 01:26:13.533123   70020 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 01:26:13.533142   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetSSHHostname
	I0924 01:26:13.536908   70020 main.go:141] libmachine: (kindnet-447054) DBG | domain kindnet-447054 has defined MAC address 52:54:00:ab:ee:c9 in network mk-kindnet-447054
	I0924 01:26:13.537398   70020 main.go:141] libmachine: (kindnet-447054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:ee:c9", ip: ""} in network mk-kindnet-447054: {Iface:virbr1 ExpiryTime:2024-09-24 02:25:42 +0000 UTC Type:0 Mac:52:54:00:ab:ee:c9 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:kindnet-447054 Clientid:01:52:54:00:ab:ee:c9}
	I0924 01:26:13.537422   70020 main.go:141] libmachine: (kindnet-447054) DBG | domain kindnet-447054 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:ee:c9 in network mk-kindnet-447054
	I0924 01:26:13.537679   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetSSHPort
	I0924 01:26:13.537895   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetSSHKeyPath
	I0924 01:26:13.538043   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetSSHUsername
	I0924 01:26:13.538251   70020 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/kindnet-447054/id_rsa Username:docker}
	I0924 01:26:13.549414   70020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39867
	I0924 01:26:13.549845   70020 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:26:13.550322   70020 main.go:141] libmachine: Using API Version  1
	I0924 01:26:13.550341   70020 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:26:13.550703   70020 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:26:13.550902   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetState
	I0924 01:26:13.552639   70020 main.go:141] libmachine: (kindnet-447054) Calling .DriverName
	I0924 01:26:13.552848   70020 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 01:26:13.552864   70020 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 01:26:13.552881   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetSSHHostname
	I0924 01:26:13.555795   70020 main.go:141] libmachine: (kindnet-447054) DBG | domain kindnet-447054 has defined MAC address 52:54:00:ab:ee:c9 in network mk-kindnet-447054
	I0924 01:26:13.556520   70020 main.go:141] libmachine: (kindnet-447054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:ee:c9", ip: ""} in network mk-kindnet-447054: {Iface:virbr1 ExpiryTime:2024-09-24 02:25:42 +0000 UTC Type:0 Mac:52:54:00:ab:ee:c9 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:kindnet-447054 Clientid:01:52:54:00:ab:ee:c9}
	I0924 01:26:13.556577   70020 main.go:141] libmachine: (kindnet-447054) DBG | domain kindnet-447054 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:ee:c9 in network mk-kindnet-447054
	I0924 01:26:13.556856   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetSSHPort
	I0924 01:26:13.557042   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetSSHKeyPath
	I0924 01:26:13.557216   70020 main.go:141] libmachine: (kindnet-447054) Calling .GetSSHUsername
	I0924 01:26:13.557360   70020 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/kindnet-447054/id_rsa Username:docker}
	I0924 01:26:13.719801   70020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0924 01:26:13.771780   70020 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 01:26:14.037945   70020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 01:26:14.041018   70020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 01:26:14.356565   70020 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0924 01:26:14.358003   70020 node_ready.go:35] waiting up to 15m0s for node "kindnet-447054" to be "Ready" ...
	I0924 01:26:14.399592   70020 main.go:141] libmachine: Making call to close driver server
	I0924 01:26:14.399626   70020 main.go:141] libmachine: (kindnet-447054) Calling .Close
	I0924 01:26:14.399941   70020 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:26:14.399957   70020 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:26:14.399965   70020 main.go:141] libmachine: Making call to close driver server
	I0924 01:26:14.399972   70020 main.go:141] libmachine: (kindnet-447054) Calling .Close
	I0924 01:26:14.400189   70020 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:26:14.400205   70020 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:26:14.426335   70020 main.go:141] libmachine: Making call to close driver server
	I0924 01:26:14.426357   70020 main.go:141] libmachine: (kindnet-447054) Calling .Close
	I0924 01:26:14.426719   70020 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:26:14.426733   70020 main.go:141] libmachine: (kindnet-447054) DBG | Closing plugin on server side
	I0924 01:26:14.426740   70020 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:26:14.836155   70020 main.go:141] libmachine: Making call to close driver server
	I0924 01:26:14.836182   70020 main.go:141] libmachine: (kindnet-447054) Calling .Close
	I0924 01:26:14.836522   70020 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:26:14.836541   70020 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:26:14.836551   70020 main.go:141] libmachine: Making call to close driver server
	I0924 01:26:14.836559   70020 main.go:141] libmachine: (kindnet-447054) Calling .Close
	I0924 01:26:14.836841   70020 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:26:14.836869   70020 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:26:14.839009   70020 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0924 01:26:14.905749   70464 docker.go:233] disabling docker service ...
	I0924 01:26:14.905842   70464 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 01:26:14.924206   70464 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 01:26:14.938386   70464 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 01:26:15.085074   70464 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 01:26:15.230359   70464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 01:26:15.243908   70464 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 01:26:15.262834   70464 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 01:26:15.262903   70464 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:26:15.273015   70464 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 01:26:15.273088   70464 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:26:15.283097   70464 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:26:15.293759   70464 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:26:15.304453   70464 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 01:26:15.315195   70464 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:26:15.325903   70464 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:26:15.343401   70464 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:26:15.354189   70464 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 01:26:15.364138   70464 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 01:26:15.364201   70464 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 01:26:15.377331   70464 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
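The sequence above shows the fallback taken when the bridge netfilter sysctl cannot be read: load the br_netfilter module, then enable IPv4 forwarding. A hedged Go sketch of the same fallback using os/exec (the helper name is illustrative, and running it for real requires root):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter mimics the fallback in the log: try to read the sysctl
// first, and only when that fails load br_netfilter and enable IPv4 forwarding.
// The function name is illustrative; the commands are the ones shown in the log.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err == nil {
		return nil // bridge netfilter already available
	}
	if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
		return fmt.Errorf("modprobe br_netfilter: %w", err)
	}
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644)
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}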
	I0924 01:26:15.386755   70464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:26:15.518150   70464 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 01:26:15.609551   70464 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 01:26:15.609671   70464 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 01:26:15.615605   70464 start.go:563] Will wait 60s for crictl version
	I0924 01:26:15.615687   70464 ssh_runner.go:195] Run: which crictl
	I0924 01:26:15.620675   70464 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 01:26:15.662866   70464 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 01:26:15.662966   70464 ssh_runner.go:195] Run: crio --version
	I0924 01:26:15.695765   70464 ssh_runner.go:195] Run: crio --version
	I0924 01:26:15.726628   70464 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 01:26:15.728307   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetIP
	I0924 01:26:15.731673   70464 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:26:15.732264   70464 main.go:141] libmachine: (newest-cni-185978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:98:80", ip: ""} in network mk-newest-cni-185978: {Iface:virbr3 ExpiryTime:2024-09-24 02:26:04 +0000 UTC Type:0 Mac:52:54:00:fa:98:80 Iaid: IPaddr:192.168.72.50 Prefix:24 Hostname:newest-cni-185978 Clientid:01:52:54:00:fa:98:80}
	I0924 01:26:15.732295   70464 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined IP address 192.168.72.50 and MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:26:15.732592   70464 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0924 01:26:15.737223   70464 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:26:15.751872   70464 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0924 01:26:14.840139   70020 addons.go:510] duration metric: took 1.356905237s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0924 01:26:14.861522   70020 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-447054" context rescaled to 1 replicas
	I0924 01:26:16.363064   70020 node_ready.go:53] node "kindnet-447054" has status "Ready":"False"
	I0924 01:26:14.153772   69667 pod_ready.go:103] pod "coredns-7c65d6cfc9-4kxwr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:26:16.153828   69667 pod_ready.go:103] pod "coredns-7c65d6cfc9-4kxwr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:26:18.154135   69667 pod_ready.go:103] pod "coredns-7c65d6cfc9-4kxwr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:26:15.753435   70464 kubeadm.go:883] updating cluster {Name:newest-cni-185978 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-185978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.50 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 01:26:15.753561   70464 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 01:26:15.753620   70464 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 01:26:15.787743   70464 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0924 01:26:15.787803   70464 ssh_runner.go:195] Run: which lz4
	I0924 01:26:15.791535   70464 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 01:26:15.795574   70464 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 01:26:15.795603   70464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0924 01:26:17.099539   70464 crio.go:462] duration metric: took 1.308026237s to copy over tarball
	I0924 01:26:17.099642   70464 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0924 01:26:19.188647   70464 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.088971781s)
	I0924 01:26:19.188679   70464 crio.go:469] duration metric: took 2.089100824s to extract the tarball
	I0924 01:26:19.188689   70464 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0924 01:26:19.225706   70464 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 01:26:19.266583   70464 crio.go:514] all images are preloaded for cri-o runtime.
	I0924 01:26:19.266608   70464 cache_images.go:84] Images are preloaded, skipping loading
	I0924 01:26:19.266617   70464 kubeadm.go:934] updating node { 192.168.72.50 8443 v1.31.1 crio true true} ...
	I0924 01:26:19.266728   70464 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-185978 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.50
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:newest-cni-185978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 01:26:19.266796   70464 ssh_runner.go:195] Run: crio config
	I0924 01:26:19.310064   70464 cni.go:84] Creating CNI manager for ""
	I0924 01:26:19.310088   70464 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:26:19.310098   70464 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0924 01:26:19.310121   70464 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.50 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-185978 NodeName:newest-cni-185978 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.50"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.50 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 01:26:19.310258   70464 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.50
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-185978"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.50
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.50"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 01:26:19.310318   70464 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 01:26:19.321437   70464 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 01:26:19.321503   70464 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 01:26:19.332195   70464 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (353 bytes)
	I0924 01:26:19.348455   70464 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 01:26:19.365160   70464 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2282 bytes)
	I0924 01:26:19.382734   70464 ssh_runner.go:195] Run: grep 192.168.72.50	control-plane.minikube.internal$ /etc/hosts
	I0924 01:26:19.386582   70464 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.50	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:26:19.399099   70464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:26:19.520879   70464 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 01:26:19.538863   70464 certs.go:68] Setting up /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/newest-cni-185978 for IP: 192.168.72.50
	I0924 01:26:19.538891   70464 certs.go:194] generating shared ca certs ...
	I0924 01:26:19.538910   70464 certs.go:226] acquiring lock for ca certs: {Name:mk91de42c259e96c17f08dd58c70c00821bd0735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:26:19.539100   70464 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key
	I0924 01:26:19.539162   70464 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key
	I0924 01:26:19.539179   70464 certs.go:256] generating profile certs ...
	I0924 01:26:19.539280   70464 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/newest-cni-185978/client.key
	I0924 01:26:19.539378   70464 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/newest-cni-185978/apiserver.key.aaef645b
	I0924 01:26:19.539437   70464 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/newest-cni-185978/proxy-client.key
	I0924 01:26:19.539610   70464 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem (1338 bytes)
	W0924 01:26:19.539660   70464 certs.go:480] ignoring /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793_empty.pem, impossibly tiny 0 bytes
	I0924 01:26:19.539675   70464 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem (1675 bytes)
	I0924 01:26:19.539710   70464 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem (1082 bytes)
	I0924 01:26:19.539743   70464 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem (1123 bytes)
	I0924 01:26:19.539775   70464 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem (1679 bytes)
	I0924 01:26:19.539849   70464 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem (1708 bytes)
	I0924 01:26:19.540555   70464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 01:26:19.587027   70464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 01:26:19.612915   70464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 01:26:19.637341   70464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0924 01:26:19.672071   70464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/newest-cni-185978/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0924 01:26:19.702800   70464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/newest-cni-185978/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0924 01:26:19.728727   70464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/newest-cni-185978/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 01:26:19.753419   70464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/newest-cni-185978/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0924 01:26:18.862860   70020 node_ready.go:53] node "kindnet-447054" has status "Ready":"False"
	I0924 01:26:21.362293   70020 node_ready.go:53] node "kindnet-447054" has status "Ready":"False"
	I0924 01:26:20.653811   69667 pod_ready.go:103] pod "coredns-7c65d6cfc9-4kxwr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:26:19.778477   70464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 01:26:19.802113   70464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem --> /usr/share/ca-certificates/14793.pem (1338 bytes)
	I0924 01:26:19.828427   70464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /usr/share/ca-certificates/147932.pem (1708 bytes)
	I0924 01:26:19.854641   70464 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 01:26:19.873043   70464 ssh_runner.go:195] Run: openssl version
	I0924 01:26:19.878876   70464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 01:26:19.890468   70464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:26:19.895197   70464 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:26:19.895259   70464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:26:19.901025   70464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 01:26:19.911808   70464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14793.pem && ln -fs /usr/share/ca-certificates/14793.pem /etc/ssl/certs/14793.pem"
	I0924 01:26:19.922838   70464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14793.pem
	I0924 01:26:19.927622   70464 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 23:55 /usr/share/ca-certificates/14793.pem
	I0924 01:26:19.927673   70464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14793.pem
	I0924 01:26:19.933339   70464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14793.pem /etc/ssl/certs/51391683.0"
	I0924 01:26:19.944077   70464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147932.pem && ln -fs /usr/share/ca-certificates/147932.pem /etc/ssl/certs/147932.pem"
	I0924 01:26:19.954955   70464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147932.pem
	I0924 01:26:19.959780   70464 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 23:55 /usr/share/ca-certificates/147932.pem
	I0924 01:26:19.959842   70464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147932.pem
	I0924 01:26:19.965967   70464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147932.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 01:26:19.977348   70464 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 01:26:19.981998   70464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 01:26:19.987914   70464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 01:26:19.993769   70464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 01:26:19.999992   70464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 01:26:20.005719   70464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 01:26:20.011446   70464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0924 01:26:20.017134   70464 kubeadm.go:392] StartCluster: {Name:newest-cni-185978 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-185978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.50 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 01:26:20.017250   70464 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 01:26:20.017310   70464 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 01:26:20.058868   70464 cri.go:89] found id: ""
	I0924 01:26:20.058939   70464 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 01:26:20.069476   70464 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 01:26:20.069498   70464 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 01:26:20.069539   70464 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 01:26:20.079703   70464 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 01:26:20.080954   70464 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-185978" does not appear in /home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 01:26:20.081720   70464 kubeconfig.go:62] /home/jenkins/minikube-integration/19696-7623/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-185978" cluster setting kubeconfig missing "newest-cni-185978" context setting]
	I0924 01:26:20.082601   70464 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/kubeconfig: {Name:mk3b29105ccc2e600682c419fcfd45ce5d22b5fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:26:20.084468   70464 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 01:26:20.094120   70464 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.50
	I0924 01:26:20.094149   70464 kubeadm.go:1160] stopping kube-system containers ...
	I0924 01:26:20.094160   70464 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0924 01:26:20.094214   70464 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 01:26:20.135617   70464 cri.go:89] found id: ""
	I0924 01:26:20.135701   70464 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 01:26:20.151646   70464 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 01:26:20.162060   70464 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 01:26:20.162076   70464 kubeadm.go:157] found existing configuration files:
	
	I0924 01:26:20.162124   70464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 01:26:20.170662   70464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 01:26:20.170718   70464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 01:26:20.179874   70464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 01:26:20.188389   70464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 01:26:20.188453   70464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 01:26:20.197107   70464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 01:26:20.205635   70464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 01:26:20.205706   70464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 01:26:20.214283   70464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 01:26:20.223106   70464 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 01:26:20.223157   70464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 01:26:20.232645   70464 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 01:26:20.242016   70464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:26:20.351001   70464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:26:21.887774   70464 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.536740006s)
	I0924 01:26:21.887806   70464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:26:22.092431   70464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:26:22.161437   70464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:26:22.255094   70464 api_server.go:52] waiting for apiserver process to appear ...
	I0924 01:26:22.255182   70464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:26:22.755657   70464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:26:23.255572   70464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:26:23.755861   70464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:26:24.255526   70464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:26:24.274063   70464 api_server.go:72] duration metric: took 2.018964581s to wait for apiserver process to appear ...
	I0924 01:26:24.274098   70464 api_server.go:88] waiting for apiserver healthz status ...
	I0924 01:26:24.274122   70464 api_server.go:253] Checking apiserver healthz at https://192.168.72.50:8443/healthz ...
	I0924 01:26:23.662569   69667 pod_ready.go:103] pod "coredns-7c65d6cfc9-4kxwr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:26:24.153002   69667 pod_ready.go:93] pod "coredns-7c65d6cfc9-4kxwr" in "kube-system" namespace has status "Ready":"True"
	I0924 01:26:24.153036   69667 pod_ready.go:82] duration metric: took 35.006640786s for pod "coredns-7c65d6cfc9-4kxwr" in "kube-system" namespace to be "Ready" ...
	I0924 01:26:24.153051   69667 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-swk8g" in "kube-system" namespace to be "Ready" ...
	I0924 01:26:24.155403   69667 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-swk8g" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-swk8g" not found
	I0924 01:26:24.155423   69667 pod_ready.go:82] duration metric: took 2.363837ms for pod "coredns-7c65d6cfc9-swk8g" in "kube-system" namespace to be "Ready" ...
	E0924 01:26:24.155433   69667 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-swk8g" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-swk8g" not found
	I0924 01:26:24.155439   69667 pod_ready.go:79] waiting up to 15m0s for pod "etcd-auto-447054" in "kube-system" namespace to be "Ready" ...
	I0924 01:26:24.161690   69667 pod_ready.go:93] pod "etcd-auto-447054" in "kube-system" namespace has status "Ready":"True"
	I0924 01:26:24.161718   69667 pod_ready.go:82] duration metric: took 6.272363ms for pod "etcd-auto-447054" in "kube-system" namespace to be "Ready" ...
	I0924 01:26:24.161731   69667 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-auto-447054" in "kube-system" namespace to be "Ready" ...
	I0924 01:26:24.168135   69667 pod_ready.go:93] pod "kube-apiserver-auto-447054" in "kube-system" namespace has status "Ready":"True"
	I0924 01:26:24.168167   69667 pod_ready.go:82] duration metric: took 6.427004ms for pod "kube-apiserver-auto-447054" in "kube-system" namespace to be "Ready" ...
	I0924 01:26:24.168187   69667 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-auto-447054" in "kube-system" namespace to be "Ready" ...
	I0924 01:26:24.173820   69667 pod_ready.go:93] pod "kube-controller-manager-auto-447054" in "kube-system" namespace has status "Ready":"True"
	I0924 01:26:24.173844   69667 pod_ready.go:82] duration metric: took 5.649481ms for pod "kube-controller-manager-auto-447054" in "kube-system" namespace to be "Ready" ...
	I0924 01:26:24.173857   69667 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-npgkn" in "kube-system" namespace to be "Ready" ...
	I0924 01:26:24.461963   69667 pod_ready.go:93] pod "kube-proxy-npgkn" in "kube-system" namespace has status "Ready":"True"
	I0924 01:26:24.461992   69667 pod_ready.go:82] duration metric: took 288.126557ms for pod "kube-proxy-npgkn" in "kube-system" namespace to be "Ready" ...
	I0924 01:26:24.462007   69667 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-auto-447054" in "kube-system" namespace to be "Ready" ...
	I0924 01:26:24.863273   69667 pod_ready.go:93] pod "kube-scheduler-auto-447054" in "kube-system" namespace has status "Ready":"True"
	I0924 01:26:24.863300   69667 pod_ready.go:82] duration metric: took 401.283703ms for pod "kube-scheduler-auto-447054" in "kube-system" namespace to be "Ready" ...
	I0924 01:26:24.863311   69667 pod_ready.go:39] duration metric: took 35.746394687s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:26:24.863331   69667 api_server.go:52] waiting for apiserver process to appear ...
	I0924 01:26:24.863393   69667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:26:24.881776   69667 api_server.go:72] duration metric: took 36.081809722s to wait for apiserver process to appear ...
	I0924 01:26:24.881805   69667 api_server.go:88] waiting for apiserver healthz status ...
	I0924 01:26:24.881827   69667 api_server.go:253] Checking apiserver healthz at https://192.168.50.23:8443/healthz ...
	I0924 01:26:24.887077   69667 api_server.go:279] https://192.168.50.23:8443/healthz returned 200:
	ok
	I0924 01:26:24.888163   69667 api_server.go:141] control plane version: v1.31.1
	I0924 01:26:24.888187   69667 api_server.go:131] duration metric: took 6.374137ms to wait for apiserver health ...
	I0924 01:26:24.888196   69667 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 01:26:25.069311   69667 system_pods.go:59] 7 kube-system pods found
	I0924 01:26:25.069353   69667 system_pods.go:61] "coredns-7c65d6cfc9-4kxwr" [6e5d9c21-22e1-4a7b-8f79-3245c402e674] Running
	I0924 01:26:25.069367   69667 system_pods.go:61] "etcd-auto-447054" [993af7cf-2a27-4740-aaea-9d986d657759] Running
	I0924 01:26:25.069373   69667 system_pods.go:61] "kube-apiserver-auto-447054" [32f6b35d-cd1d-40be-bd7c-860622e098eb] Running
	I0924 01:26:25.069379   69667 system_pods.go:61] "kube-controller-manager-auto-447054" [ed8aec47-9013-45c1-9108-909923e6f269] Running
	I0924 01:26:25.069384   69667 system_pods.go:61] "kube-proxy-npgkn" [9fd8ad24-8652-46d1-b393-dfbe9cafdf93] Running
	I0924 01:26:25.069389   69667 system_pods.go:61] "kube-scheduler-auto-447054" [2857b845-9eb7-4272-95b5-babb96ed5f4e] Running
	I0924 01:26:25.069398   69667 system_pods.go:61] "storage-provisioner" [d28b6e56-af96-4998-b256-6482b4b6afdd] Running
	I0924 01:26:25.069406   69667 system_pods.go:74] duration metric: took 181.202331ms to wait for pod list to return data ...
	I0924 01:26:25.069414   69667 default_sa.go:34] waiting for default service account to be created ...
	I0924 01:26:25.263559   69667 default_sa.go:45] found service account: "default"
	I0924 01:26:25.263587   69667 default_sa.go:55] duration metric: took 194.16646ms for default service account to be created ...
	I0924 01:26:25.263596   69667 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 01:26:25.465183   69667 system_pods.go:86] 7 kube-system pods found
	I0924 01:26:25.465219   69667 system_pods.go:89] "coredns-7c65d6cfc9-4kxwr" [6e5d9c21-22e1-4a7b-8f79-3245c402e674] Running
	I0924 01:26:25.465237   69667 system_pods.go:89] "etcd-auto-447054" [993af7cf-2a27-4740-aaea-9d986d657759] Running
	I0924 01:26:25.465243   69667 system_pods.go:89] "kube-apiserver-auto-447054" [32f6b35d-cd1d-40be-bd7c-860622e098eb] Running
	I0924 01:26:25.465249   69667 system_pods.go:89] "kube-controller-manager-auto-447054" [ed8aec47-9013-45c1-9108-909923e6f269] Running
	I0924 01:26:25.465255   69667 system_pods.go:89] "kube-proxy-npgkn" [9fd8ad24-8652-46d1-b393-dfbe9cafdf93] Running
	I0924 01:26:25.465259   69667 system_pods.go:89] "kube-scheduler-auto-447054" [2857b845-9eb7-4272-95b5-babb96ed5f4e] Running
	I0924 01:26:25.465263   69667 system_pods.go:89] "storage-provisioner" [d28b6e56-af96-4998-b256-6482b4b6afdd] Running
	I0924 01:26:25.465271   69667 system_pods.go:126] duration metric: took 201.668709ms to wait for k8s-apps to be running ...
	I0924 01:26:25.465284   69667 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 01:26:25.465338   69667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 01:26:25.485703   69667 system_svc.go:56] duration metric: took 20.408386ms WaitForService to wait for kubelet
	I0924 01:26:25.485734   69667 kubeadm.go:582] duration metric: took 36.685774111s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 01:26:25.485749   69667 node_conditions.go:102] verifying NodePressure condition ...
	I0924 01:26:25.663308   69667 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 01:26:25.663357   69667 node_conditions.go:123] node cpu capacity is 2
	I0924 01:26:25.663373   69667 node_conditions.go:105] duration metric: took 177.619305ms to run NodePressure ...
	I0924 01:26:25.663388   69667 start.go:241] waiting for startup goroutines ...
	I0924 01:26:25.663397   69667 start.go:246] waiting for cluster config update ...
	I0924 01:26:25.663411   69667 start.go:255] writing updated cluster config ...
	I0924 01:26:25.663733   69667 ssh_runner.go:195] Run: rm -f paused
	I0924 01:26:25.714552   69667 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0924 01:26:25.716591   69667 out.go:177] * Done! kubectl is now configured to use "auto-447054" cluster and "default" namespace by default
	I0924 01:26:23.523799   70020 node_ready.go:53] node "kindnet-447054" has status "Ready":"False"
	I0924 01:26:25.875533   70020 node_ready.go:49] node "kindnet-447054" has status "Ready":"True"
	I0924 01:26:25.875557   70020 node_ready.go:38] duration metric: took 11.517526125s for node "kindnet-447054" to be "Ready" ...
	I0924 01:26:25.875568   70020 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:26:25.887019   70020 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-9fs9b" in "kube-system" namespace to be "Ready" ...
	I0924 01:26:27.394105   70020 pod_ready.go:93] pod "coredns-7c65d6cfc9-9fs9b" in "kube-system" namespace has status "Ready":"True"
	I0924 01:26:27.394129   70020 pod_ready.go:82] duration metric: took 1.507082603s for pod "coredns-7c65d6cfc9-9fs9b" in "kube-system" namespace to be "Ready" ...
	I0924 01:26:27.394138   70020 pod_ready.go:79] waiting up to 15m0s for pod "etcd-kindnet-447054" in "kube-system" namespace to be "Ready" ...
	I0924 01:26:27.399117   70020 pod_ready.go:93] pod "etcd-kindnet-447054" in "kube-system" namespace has status "Ready":"True"
	I0924 01:26:27.399139   70020 pod_ready.go:82] duration metric: took 4.994795ms for pod "etcd-kindnet-447054" in "kube-system" namespace to be "Ready" ...
	I0924 01:26:27.399149   70020 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-kindnet-447054" in "kube-system" namespace to be "Ready" ...
	I0924 01:26:27.404509   70020 pod_ready.go:93] pod "kube-apiserver-kindnet-447054" in "kube-system" namespace has status "Ready":"True"
	I0924 01:26:27.404533   70020 pod_ready.go:82] duration metric: took 5.376677ms for pod "kube-apiserver-kindnet-447054" in "kube-system" namespace to be "Ready" ...
	I0924 01:26:27.404546   70020 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-kindnet-447054" in "kube-system" namespace to be "Ready" ...
	I0924 01:26:27.409449   70020 pod_ready.go:93] pod "kube-controller-manager-kindnet-447054" in "kube-system" namespace has status "Ready":"True"
	I0924 01:26:27.409474   70020 pod_ready.go:82] duration metric: took 4.921138ms for pod "kube-controller-manager-kindnet-447054" in "kube-system" namespace to be "Ready" ...
	I0924 01:26:27.409483   70020 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-jpvmx" in "kube-system" namespace to be "Ready" ...
	I0924 01:26:27.462082   70020 pod_ready.go:93] pod "kube-proxy-jpvmx" in "kube-system" namespace has status "Ready":"True"
	I0924 01:26:27.462113   70020 pod_ready.go:82] duration metric: took 52.623461ms for pod "kube-proxy-jpvmx" in "kube-system" namespace to be "Ready" ...
	I0924 01:26:27.462127   70020 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-kindnet-447054" in "kube-system" namespace to be "Ready" ...
	I0924 01:26:27.861855   70020 pod_ready.go:93] pod "kube-scheduler-kindnet-447054" in "kube-system" namespace has status "Ready":"True"
	I0924 01:26:27.861885   70020 pod_ready.go:82] duration metric: took 399.748664ms for pod "kube-scheduler-kindnet-447054" in "kube-system" namespace to be "Ready" ...
	I0924 01:26:27.861899   70020 pod_ready.go:39] duration metric: took 1.986296382s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:26:27.861925   70020 api_server.go:52] waiting for apiserver process to appear ...
	I0924 01:26:27.861990   70020 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:26:27.879755   70020 api_server.go:72] duration metric: took 14.396584943s to wait for apiserver process to appear ...
	I0924 01:26:27.879781   70020 api_server.go:88] waiting for apiserver healthz status ...
	I0924 01:26:27.879800   70020 api_server.go:253] Checking apiserver healthz at https://192.168.39.50:8443/healthz ...
	I0924 01:26:27.885274   70020 api_server.go:279] https://192.168.39.50:8443/healthz returned 200:
	ok
	I0924 01:26:27.886501   70020 api_server.go:141] control plane version: v1.31.1
	I0924 01:26:27.886525   70020 api_server.go:131] duration metric: took 6.738374ms to wait for apiserver health ...
	I0924 01:26:27.886533   70020 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 01:26:28.065771   70020 system_pods.go:59] 8 kube-system pods found
	I0924 01:26:28.065804   70020 system_pods.go:61] "coredns-7c65d6cfc9-9fs9b" [992982ed-1135-4bd5-ad29-af0fa6060437] Running
	I0924 01:26:28.065809   70020 system_pods.go:61] "etcd-kindnet-447054" [df5b6e52-d9a7-48c0-b1f6-e303a6a7f6e7] Running
	I0924 01:26:28.065813   70020 system_pods.go:61] "kindnet-qq9rr" [2160e9e5-753b-4e24-af3f-876ea1abff11] Running
	I0924 01:26:28.065816   70020 system_pods.go:61] "kube-apiserver-kindnet-447054" [69fcf70f-f866-4726-b428-8a391ae7da84] Running
	I0924 01:26:28.065819   70020 system_pods.go:61] "kube-controller-manager-kindnet-447054" [d4e963b8-56f4-4651-b455-80eb9c9b4250] Running
	I0924 01:26:28.065823   70020 system_pods.go:61] "kube-proxy-jpvmx" [f12c96ee-c8cc-4c0a-b738-6b0bf5705d78] Running
	I0924 01:26:28.065827   70020 system_pods.go:61] "kube-scheduler-kindnet-447054" [15a1bdb8-f7bb-4fb4-9c23-6f6717a9ace5] Running
	I0924 01:26:28.065831   70020 system_pods.go:61] "storage-provisioner" [d4ef6a10-0313-4b8c-a878-ae6045a788ee] Running
	I0924 01:26:28.065838   70020 system_pods.go:74] duration metric: took 179.298897ms to wait for pod list to return data ...
	I0924 01:26:28.065848   70020 default_sa.go:34] waiting for default service account to be created ...
	I0924 01:26:28.262096   70020 default_sa.go:45] found service account: "default"
	I0924 01:26:28.262137   70020 default_sa.go:55] duration metric: took 196.281817ms for default service account to be created ...
	I0924 01:26:28.262149   70020 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 01:26:28.465645   70020 system_pods.go:86] 8 kube-system pods found
	I0924 01:26:28.465684   70020 system_pods.go:89] "coredns-7c65d6cfc9-9fs9b" [992982ed-1135-4bd5-ad29-af0fa6060437] Running
	I0924 01:26:28.465692   70020 system_pods.go:89] "etcd-kindnet-447054" [df5b6e52-d9a7-48c0-b1f6-e303a6a7f6e7] Running
	I0924 01:26:28.465698   70020 system_pods.go:89] "kindnet-qq9rr" [2160e9e5-753b-4e24-af3f-876ea1abff11] Running
	I0924 01:26:28.465703   70020 system_pods.go:89] "kube-apiserver-kindnet-447054" [69fcf70f-f866-4726-b428-8a391ae7da84] Running
	I0924 01:26:28.465709   70020 system_pods.go:89] "kube-controller-manager-kindnet-447054" [d4e963b8-56f4-4651-b455-80eb9c9b4250] Running
	I0924 01:26:28.465715   70020 system_pods.go:89] "kube-proxy-jpvmx" [f12c96ee-c8cc-4c0a-b738-6b0bf5705d78] Running
	I0924 01:26:28.465721   70020 system_pods.go:89] "kube-scheduler-kindnet-447054" [15a1bdb8-f7bb-4fb4-9c23-6f6717a9ace5] Running
	I0924 01:26:28.465726   70020 system_pods.go:89] "storage-provisioner" [d4ef6a10-0313-4b8c-a878-ae6045a788ee] Running
	I0924 01:26:28.465735   70020 system_pods.go:126] duration metric: took 203.578499ms to wait for k8s-apps to be running ...
	I0924 01:26:28.465747   70020 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 01:26:28.465800   70020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 01:26:28.487357   70020 system_svc.go:56] duration metric: took 21.601931ms WaitForService to wait for kubelet
	I0924 01:26:28.487387   70020 kubeadm.go:582] duration metric: took 15.004219627s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 01:26:28.487408   70020 node_conditions.go:102] verifying NodePressure condition ...
	I0924 01:26:28.663869   70020 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 01:26:28.663914   70020 node_conditions.go:123] node cpu capacity is 2
	I0924 01:26:28.663931   70020 node_conditions.go:105] duration metric: took 176.517318ms to run NodePressure ...
	I0924 01:26:28.663948   70020 start.go:241] waiting for startup goroutines ...
	I0924 01:26:28.663960   70020 start.go:246] waiting for cluster config update ...
	I0924 01:26:28.663975   70020 start.go:255] writing updated cluster config ...
	I0924 01:26:28.664402   70020 ssh_runner.go:195] Run: rm -f paused
	I0924 01:26:28.738767   70020 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0924 01:26:28.740710   70020 out.go:177] * Done! kubectl is now configured to use "kindnet-447054" cluster and "default" namespace by default
	I0924 01:26:27.402050   70464 api_server.go:279] https://192.168.72.50:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 01:26:27.402080   70464 api_server.go:103] status: https://192.168.72.50:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 01:26:27.402093   70464 api_server.go:253] Checking apiserver healthz at https://192.168.72.50:8443/healthz ...
	I0924 01:26:27.485184   70464 api_server.go:279] https://192.168.72.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:26:27.485236   70464 api_server.go:103] status: https://192.168.72.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:26:27.774619   70464 api_server.go:253] Checking apiserver healthz at https://192.168.72.50:8443/healthz ...
	I0924 01:26:27.779516   70464 api_server.go:279] https://192.168.72.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:26:27.779552   70464 api_server.go:103] status: https://192.168.72.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:26:28.274479   70464 api_server.go:253] Checking apiserver healthz at https://192.168.72.50:8443/healthz ...
	I0924 01:26:28.278724   70464 api_server.go:279] https://192.168.72.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:26:28.278754   70464 api_server.go:103] status: https://192.168.72.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:26:28.775050   70464 api_server.go:253] Checking apiserver healthz at https://192.168.72.50:8443/healthz ...
	I0924 01:26:28.782520   70464 api_server.go:279] https://192.168.72.50:8443/healthz returned 200:
	ok
	I0924 01:26:28.793815   70464 api_server.go:141] control plane version: v1.31.1
	I0924 01:26:28.793847   70464 api_server.go:131] duration metric: took 4.519741774s to wait for apiserver health ...
	I0924 01:26:28.793857   70464 cni.go:84] Creating CNI manager for ""
	I0924 01:26:28.793865   70464 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:26:28.795437   70464 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 01:26:28.796838   70464 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 01:26:28.837064   70464 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0924 01:26:28.867187   70464 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 01:26:28.881247   70464 system_pods.go:59] 8 kube-system pods found
	I0924 01:26:28.881290   70464 system_pods.go:61] "coredns-7c65d6cfc9-n6rqb" [2f971cdf-6520-4bb2-93d5-d55ee8c05b71] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0924 01:26:28.881298   70464 system_pods.go:61] "etcd-newest-cni-185978" [f5e7d201-a491-4ecf-8af7-da5534d0f666] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0924 01:26:28.881306   70464 system_pods.go:61] "kube-apiserver-newest-cni-185978" [1fb6fa04-f6c3-403d-b776-518bc73ebd58] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0924 01:26:28.881312   70464 system_pods.go:61] "kube-controller-manager-newest-cni-185978" [c3320b9c-e62a-4bec-a5c8-8dab59e4e5cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0924 01:26:28.881318   70464 system_pods.go:61] "kube-proxy-lssxw" [7431df0e-1a5a-4f2b-8d62-3e6e7f3f879b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0924 01:26:28.881323   70464 system_pods.go:61] "kube-scheduler-newest-cni-185978" [5ed0e0ed-dfbb-4138-84e5-f9efce259c37] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0924 01:26:28.881330   70464 system_pods.go:61] "metrics-server-6867b74b74-bkfqs" [8d735409-020e-4d41-be9e-43cd0d671383] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:26:28.881335   70464 system_pods.go:61] "storage-provisioner" [0eb05ceb-5270-4c0f-999e-b6717ef118b6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0924 01:26:28.881340   70464 system_pods.go:74] duration metric: took 14.125287ms to wait for pod list to return data ...
	I0924 01:26:28.881346   70464 node_conditions.go:102] verifying NodePressure condition ...
	I0924 01:26:28.887438   70464 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 01:26:28.887466   70464 node_conditions.go:123] node cpu capacity is 2
	I0924 01:26:28.887476   70464 node_conditions.go:105] duration metric: took 6.124932ms to run NodePressure ...
	I0924 01:26:28.887496   70464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:26:29.349371   70464 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 01:26:29.411815   70464 ops.go:34] apiserver oom_adj: -16
	I0924 01:26:29.411843   70464 kubeadm.go:597] duration metric: took 9.342337861s to restartPrimaryControlPlane
	I0924 01:26:29.411856   70464 kubeadm.go:394] duration metric: took 9.394728129s to StartCluster
	I0924 01:26:29.411877   70464 settings.go:142] acquiring lock: {Name:mk2498060bfcc1414033a486e76a223de88ed325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:26:29.411959   70464 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 01:26:29.413734   70464 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/kubeconfig: {Name:mk3b29105ccc2e600682c419fcfd45ce5d22b5fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:26:29.414008   70464 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.50 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 01:26:29.414166   70464 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0924 01:26:29.414250   70464 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-185978"
	I0924 01:26:29.414270   70464 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-185978"
	W0924 01:26:29.414279   70464 addons.go:243] addon storage-provisioner should already be in state true
	I0924 01:26:29.414311   70464 host.go:66] Checking if "newest-cni-185978" exists ...
	I0924 01:26:29.414749   70464 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:26:29.414783   70464 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:26:29.414875   70464 addons.go:69] Setting default-storageclass=true in profile "newest-cni-185978"
	I0924 01:26:29.414924   70464 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-185978"
	I0924 01:26:29.415065   70464 addons.go:69] Setting metrics-server=true in profile "newest-cni-185978"
	I0924 01:26:29.415096   70464 addons.go:234] Setting addon metrics-server=true in "newest-cni-185978"
	W0924 01:26:29.415107   70464 addons.go:243] addon metrics-server should already be in state true
	I0924 01:26:29.415114   70464 addons.go:69] Setting dashboard=true in profile "newest-cni-185978"
	I0924 01:26:29.415166   70464 addons.go:234] Setting addon dashboard=true in "newest-cni-185978"
	W0924 01:26:29.415184   70464 addons.go:243] addon dashboard should already be in state true
	I0924 01:26:29.415145   70464 host.go:66] Checking if "newest-cni-185978" exists ...
	I0924 01:26:29.415218   70464 host.go:66] Checking if "newest-cni-185978" exists ...
	I0924 01:26:29.415397   70464 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:26:29.415433   70464 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:26:29.415615   70464 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:26:29.415643   70464 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:26:29.415678   70464 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:26:29.415715   70464 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:26:29.415973   70464 config.go:182] Loaded profile config "newest-cni-185978": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 01:26:29.419306   70464 out.go:177] * Verifying Kubernetes components...
	I0924 01:26:29.421030   70464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:26:29.439676   70464 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39371
	I0924 01:26:29.442467   70464 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44439
	I0924 01:26:29.442647   70464 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35543
	I0924 01:26:29.442899   70464 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35233
	I0924 01:26:29.443198   70464 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:26:29.443872   70464 main.go:141] libmachine: Using API Version  1
	I0924 01:26:29.443897   70464 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:26:29.443990   70464 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:26:29.444262   70464 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:26:29.444390   70464 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:26:29.444415   70464 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:26:29.445152   70464 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:26:29.445203   70464 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:26:29.445851   70464 main.go:141] libmachine: Using API Version  1
	I0924 01:26:29.445873   70464 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:26:29.446017   70464 main.go:141] libmachine: Using API Version  1
	I0924 01:26:29.446027   70464 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:26:29.446144   70464 main.go:141] libmachine: Using API Version  1
	I0924 01:26:29.446154   70464 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:26:29.446400   70464 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:26:29.446497   70464 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:26:29.446538   70464 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:26:29.446970   70464 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:26:29.447014   70464 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:26:29.447583   70464 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:26:29.447612   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetState
	I0924 01:26:29.447623   70464 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:26:29.451784   70464 addons.go:234] Setting addon default-storageclass=true in "newest-cni-185978"
	W0924 01:26:29.451808   70464 addons.go:243] addon default-storageclass should already be in state true
	I0924 01:26:29.451835   70464 host.go:66] Checking if "newest-cni-185978" exists ...
	I0924 01:26:29.452204   70464 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:26:29.452248   70464 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:26:29.468384   70464 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38695
	I0924 01:26:29.470707   70464 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42523
	I0924 01:26:29.473053   70464 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:26:29.473139   70464 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:26:29.473615   70464 main.go:141] libmachine: Using API Version  1
	I0924 01:26:29.473630   70464 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:26:29.473746   70464 main.go:141] libmachine: Using API Version  1
	I0924 01:26:29.473771   70464 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:26:29.474206   70464 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:26:29.474219   70464 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:26:29.474522   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetState
	I0924 01:26:29.474736   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetState
	I0924 01:26:29.477212   70464 main.go:141] libmachine: (newest-cni-185978) Calling .DriverName
	I0924 01:26:29.477300   70464 main.go:141] libmachine: (newest-cni-185978) Calling .DriverName
	I0924 01:26:29.479601   70464 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0924 01:26:29.479616   70464 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0924 01:26:29.481359   70464 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0924 01:26:29.481385   70464 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0924 01:26:29.481446   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHHostname
	I0924 01:26:29.482944   70464 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0924 01:26:29.484444   70464 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0924 01:26:29.484466   70464 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0924 01:26:29.484490   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHHostname
	I0924 01:26:29.485177   70464 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:26:29.485497   70464 main.go:141] libmachine: (newest-cni-185978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:98:80", ip: ""} in network mk-newest-cni-185978: {Iface:virbr3 ExpiryTime:2024-09-24 02:26:04 +0000 UTC Type:0 Mac:52:54:00:fa:98:80 Iaid: IPaddr:192.168.72.50 Prefix:24 Hostname:newest-cni-185978 Clientid:01:52:54:00:fa:98:80}
	I0924 01:26:29.485578   70464 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined IP address 192.168.72.50 and MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:26:29.486098   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHPort
	I0924 01:26:29.486283   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHKeyPath
	I0924 01:26:29.486399   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHUsername
	I0924 01:26:29.486718   70464 sshutil.go:53] new ssh client: &{IP:192.168.72.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/newest-cni-185978/id_rsa Username:docker}
	I0924 01:26:29.488489   70464 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:26:29.488849   70464 main.go:141] libmachine: (newest-cni-185978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:98:80", ip: ""} in network mk-newest-cni-185978: {Iface:virbr3 ExpiryTime:2024-09-24 02:26:04 +0000 UTC Type:0 Mac:52:54:00:fa:98:80 Iaid: IPaddr:192.168.72.50 Prefix:24 Hostname:newest-cni-185978 Clientid:01:52:54:00:fa:98:80}
	I0924 01:26:29.488880   70464 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined IP address 192.168.72.50 and MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:26:29.489213   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHPort
	I0924 01:26:29.489443   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHKeyPath
	I0924 01:26:29.489599   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHUsername
	I0924 01:26:29.489770   70464 sshutil.go:53] new ssh client: &{IP:192.168.72.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/newest-cni-185978/id_rsa Username:docker}
	I0924 01:26:29.499072   70464 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40535
	I0924 01:26:29.499777   70464 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:26:29.500338   70464 main.go:141] libmachine: Using API Version  1
	I0924 01:26:29.500362   70464 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:26:29.500757   70464 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:26:29.501005   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetState
	I0924 01:26:29.502618   70464 main.go:141] libmachine: (newest-cni-185978) Calling .DriverName
	I0924 01:26:29.502847   70464 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43581
	I0924 01:26:29.503239   70464 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:26:29.503787   70464 main.go:141] libmachine: Using API Version  1
	I0924 01:26:29.503820   70464 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:26:29.504306   70464 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:26:29.504883   70464 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:26:29.504929   70464 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:26:29.506114   70464 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:26:29.507631   70464 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 01:26:29.507652   70464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 01:26:29.507671   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHHostname
	I0924 01:26:29.511964   70464 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:26:29.512509   70464 main.go:141] libmachine: (newest-cni-185978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:98:80", ip: ""} in network mk-newest-cni-185978: {Iface:virbr3 ExpiryTime:2024-09-24 02:26:04 +0000 UTC Type:0 Mac:52:54:00:fa:98:80 Iaid: IPaddr:192.168.72.50 Prefix:24 Hostname:newest-cni-185978 Clientid:01:52:54:00:fa:98:80}
	I0924 01:26:29.512535   70464 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined IP address 192.168.72.50 and MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:26:29.512977   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHPort
	I0924 01:26:29.513207   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHKeyPath
	I0924 01:26:29.513391   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHUsername
	I0924 01:26:29.513556   70464 sshutil.go:53] new ssh client: &{IP:192.168.72.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/newest-cni-185978/id_rsa Username:docker}
	I0924 01:26:29.528567   70464 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40091
	I0924 01:26:29.529480   70464 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:26:29.530172   70464 main.go:141] libmachine: Using API Version  1
	I0924 01:26:29.530201   70464 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:26:29.530650   70464 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:26:29.530914   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetState
	I0924 01:26:29.532957   70464 main.go:141] libmachine: (newest-cni-185978) Calling .DriverName
	I0924 01:26:29.533206   70464 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 01:26:29.533226   70464 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 01:26:29.533273   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHHostname
	I0924 01:26:29.536831   70464 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:26:29.537178   70464 main.go:141] libmachine: (newest-cni-185978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:98:80", ip: ""} in network mk-newest-cni-185978: {Iface:virbr3 ExpiryTime:2024-09-24 02:26:04 +0000 UTC Type:0 Mac:52:54:00:fa:98:80 Iaid: IPaddr:192.168.72.50 Prefix:24 Hostname:newest-cni-185978 Clientid:01:52:54:00:fa:98:80}
	I0924 01:26:29.537208   70464 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined IP address 192.168.72.50 and MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:26:29.537321   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHPort
	I0924 01:26:29.537460   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHKeyPath
	I0924 01:26:29.537607   70464 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHUsername
	I0924 01:26:29.537737   70464 sshutil.go:53] new ssh client: &{IP:192.168.72.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/newest-cni-185978/id_rsa Username:docker}
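The block above opens one SSH client per addon and stages the manifests (metrics-apiservice.yaml, dashboard-ns.yaml, storage-provisioner.yaml, storageclass.yaml) into /etc/kubernetes/addons on the node. Purely as an illustration of that staging step, and not the ssh_runner/sshutil implementation, copying a single manifest with the system scp binary might look like the sketch below; the key path, user, and IP are taken from the log, the local manifest path is an assumption, and the copy goes to /tmp because /etc/kubernetes/addons is root-owned:

    // Illustrative only: stage one addon manifest on the node the way the
    // log's scp step conceptually does. Paths marked as assumed are not from
    // the log.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        key := "/home/jenkins/minikube-integration/19696-7623/.minikube/machines/newest-cni-185978/id_rsa"
        src := "./storage-provisioner.yaml" // assumed local copy of the manifest
        dst := "docker@192.168.72.50:/tmp/storage-provisioner.yaml"

        // Copy to /tmp first; moving the file into /etc/kubernetes/addons
        // would need a follow-up `sudo mv` on the node.
        cmd := exec.Command("scp", "-i", key, "-o", "StrictHostKeyChecking=no", src, dst)
        if out, err := cmd.CombinedOutput(); err != nil {
            fmt.Printf("scp failed: %v\n%s", err, out)
            return
        }
        fmt.Println("manifest staged at", dst)
    }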
	I0924 01:26:29.725090   70464 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 01:26:29.752686   70464 api_server.go:52] waiting for apiserver process to appear ...
	I0924 01:26:29.752773   70464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
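At this point the kubelet has been restarted and the code begins waiting for the kube-apiserver process to appear by running `sudo pgrep -xnf kube-apiserver.*minikube.*` on the node. Below is a hedged sketch of a comparable wait loop, run locally rather than over SSH and not minikube's api_server.go logic; the pgrep pattern comes from the log, the 6-minute timeout echoes the earlier `Will wait 6m0s for node` line, and the poll interval is an assumption:

    // Hedged sketch: poll until a kube-apiserver process started by minikube
    // shows up, mirroring the `pgrep -xnf` check recorded in the log.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func waitForAPIServerProcess(timeout, interval time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // -f matches against the full command line, -x requires the pattern
            // to match it entirely, -n picks the newest match; exit 0 means found.
            if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return nil
            }
            time.Sleep(interval)
        }
        return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
    }

    func main() {
        if err := waitForAPIServerProcess(6*time.Minute, 500*time.Millisecond); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("kube-apiserver process is up")
    }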
	
	
	==> CRI-O <==
	Sep 24 01:26:31 default-k8s-diff-port-465341 crio[715]: time="2024-09-24 01:26:31.920398489Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727141191920360987,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d55eb615-24cd-438a-9117-afb4589a5436 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:26:31 default-k8s-diff-port-465341 crio[715]: time="2024-09-24 01:26:31.921110676Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d937328e-8ee1-4127-84f4-94981b8daf1c name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:26:31 default-k8s-diff-port-465341 crio[715]: time="2024-09-24 01:26:31.921181669Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d937328e-8ee1-4127-84f4-94981b8daf1c name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:26:31 default-k8s-diff-port-465341 crio[715]: time="2024-09-24 01:26:31.921467402Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47,PodSandboxId:f77a2b5b8dc99ddd1fb733288c586382c480f97e54d58009878cfc54644d8c4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727139900031644597,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b09ad6ef-7517-4de2-a70c-83876efd804e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05d6e2df9cf9551e8317006279d4a7af98fddbd031fe31ac663ff5fd1f64e8ca,PodSandboxId:b12edb12d460bce1ab54a2f5f339453bb4643384734c6d41f9ad5e82d4e4a3c6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727139880067836834,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b2640213-e0c5-4e24-ab47-40ae93cf2dec,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f,PodSandboxId:86c947d9cd97c3f5ea879829e09c82152a248968de28cec304c8c11661c345bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727139876902937596,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xxdh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 297fe292-94bf-468d-9e34-089c4a87429b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc,PodSandboxId:1fb37f1fc655d87bc704f8dafaa719213ad4ed13467e59f0ce1ff33ec5f77993,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727139869186919309,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nf8mp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdef3aea-b
1a8-438b-994f-c3212def9aea,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559,PodSandboxId:f77a2b5b8dc99ddd1fb733288c586382c480f97e54d58009878cfc54644d8c4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727139869142722214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b09ad6ef-7517-4de2-a70c
-83876efd804e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7,PodSandboxId:6d64cdd87d3256594767df888a8365e0e40219a467933c6e3fdbc7beda771ffd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727139865490802476,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-465341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72d01994ec812b10b4b6f
0618a626fab,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f,PodSandboxId:22c25c49da19a5d516b484f6cbc6660c499c4fa70216bedc0db7d8a0038f2ef7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727139865507065014,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-465341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 146f0c671ce4286b89865c4
c32c180fa,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2,PodSandboxId:d7121dd08f0893752f0b17dcb0af76a06da336b3d662f56979dd37cb9288837d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727139865459470410,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-465341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62f128f51a989e62ff552186fa70bbf5,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba,PodSandboxId:c14e32efc528ad38562523f3dd3c921227b3245d78f555d61e74bf01f8569273,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727139865473282611,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-465341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b513a84f02bd83f80046c0ae57535d
3b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d937328e-8ee1-4127-84f4-94981b8daf1c name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:26:31 default-k8s-diff-port-465341 crio[715]: time="2024-09-24 01:26:31.969306390Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=73af4d9f-6bf9-4bf8-8bf0-d37274508e03 name=/runtime.v1.RuntimeService/Version
	Sep 24 01:26:31 default-k8s-diff-port-465341 crio[715]: time="2024-09-24 01:26:31.969411239Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=73af4d9f-6bf9-4bf8-8bf0-d37274508e03 name=/runtime.v1.RuntimeService/Version
	Sep 24 01:26:31 default-k8s-diff-port-465341 crio[715]: time="2024-09-24 01:26:31.970929202Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3a58f499-8df4-4763-a675-49274c3f15ff name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:26:31 default-k8s-diff-port-465341 crio[715]: time="2024-09-24 01:26:31.971506121Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727141191971470601,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3a58f499-8df4-4763-a675-49274c3f15ff name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:26:31 default-k8s-diff-port-465341 crio[715]: time="2024-09-24 01:26:31.972281384Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=31d032ad-78d4-41bd-bd70-c746aea2a598 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:26:31 default-k8s-diff-port-465341 crio[715]: time="2024-09-24 01:26:31.972348946Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=31d032ad-78d4-41bd-bd70-c746aea2a598 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:26:31 default-k8s-diff-port-465341 crio[715]: time="2024-09-24 01:26:31.972554725Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47,PodSandboxId:f77a2b5b8dc99ddd1fb733288c586382c480f97e54d58009878cfc54644d8c4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727139900031644597,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b09ad6ef-7517-4de2-a70c-83876efd804e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05d6e2df9cf9551e8317006279d4a7af98fddbd031fe31ac663ff5fd1f64e8ca,PodSandboxId:b12edb12d460bce1ab54a2f5f339453bb4643384734c6d41f9ad5e82d4e4a3c6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727139880067836834,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b2640213-e0c5-4e24-ab47-40ae93cf2dec,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f,PodSandboxId:86c947d9cd97c3f5ea879829e09c82152a248968de28cec304c8c11661c345bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727139876902937596,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xxdh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 297fe292-94bf-468d-9e34-089c4a87429b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc,PodSandboxId:1fb37f1fc655d87bc704f8dafaa719213ad4ed13467e59f0ce1ff33ec5f77993,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727139869186919309,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nf8mp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdef3aea-b
1a8-438b-994f-c3212def9aea,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559,PodSandboxId:f77a2b5b8dc99ddd1fb733288c586382c480f97e54d58009878cfc54644d8c4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727139869142722214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b09ad6ef-7517-4de2-a70c
-83876efd804e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7,PodSandboxId:6d64cdd87d3256594767df888a8365e0e40219a467933c6e3fdbc7beda771ffd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727139865490802476,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-465341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72d01994ec812b10b4b6f
0618a626fab,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f,PodSandboxId:22c25c49da19a5d516b484f6cbc6660c499c4fa70216bedc0db7d8a0038f2ef7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727139865507065014,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-465341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 146f0c671ce4286b89865c4
c32c180fa,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2,PodSandboxId:d7121dd08f0893752f0b17dcb0af76a06da336b3d662f56979dd37cb9288837d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727139865459470410,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-465341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62f128f51a989e62ff552186fa70bbf5,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba,PodSandboxId:c14e32efc528ad38562523f3dd3c921227b3245d78f555d61e74bf01f8569273,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727139865473282611,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-465341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b513a84f02bd83f80046c0ae57535d
3b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=31d032ad-78d4-41bd-bd70-c746aea2a598 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:26:32 default-k8s-diff-port-465341 crio[715]: time="2024-09-24 01:26:32.020120053Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0605f173-ff2e-437d-99a0-445c9767b025 name=/runtime.v1.RuntimeService/Version
	Sep 24 01:26:32 default-k8s-diff-port-465341 crio[715]: time="2024-09-24 01:26:32.020238490Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0605f173-ff2e-437d-99a0-445c9767b025 name=/runtime.v1.RuntimeService/Version
	Sep 24 01:26:32 default-k8s-diff-port-465341 crio[715]: time="2024-09-24 01:26:32.021438896Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=12e54cc6-d00a-4e30-8793-87cf55e309d5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:26:32 default-k8s-diff-port-465341 crio[715]: time="2024-09-24 01:26:32.022031326Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727141192022007024,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=12e54cc6-d00a-4e30-8793-87cf55e309d5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:26:32 default-k8s-diff-port-465341 crio[715]: time="2024-09-24 01:26:32.022644450Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=47ba204a-3775-4f32-9f58-20720b252617 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:26:32 default-k8s-diff-port-465341 crio[715]: time="2024-09-24 01:26:32.022710207Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=47ba204a-3775-4f32-9f58-20720b252617 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:26:32 default-k8s-diff-port-465341 crio[715]: time="2024-09-24 01:26:32.022931386Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47,PodSandboxId:f77a2b5b8dc99ddd1fb733288c586382c480f97e54d58009878cfc54644d8c4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727139900031644597,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b09ad6ef-7517-4de2-a70c-83876efd804e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05d6e2df9cf9551e8317006279d4a7af98fddbd031fe31ac663ff5fd1f64e8ca,PodSandboxId:b12edb12d460bce1ab54a2f5f339453bb4643384734c6d41f9ad5e82d4e4a3c6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727139880067836834,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b2640213-e0c5-4e24-ab47-40ae93cf2dec,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f,PodSandboxId:86c947d9cd97c3f5ea879829e09c82152a248968de28cec304c8c11661c345bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727139876902937596,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xxdh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 297fe292-94bf-468d-9e34-089c4a87429b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc,PodSandboxId:1fb37f1fc655d87bc704f8dafaa719213ad4ed13467e59f0ce1ff33ec5f77993,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727139869186919309,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nf8mp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdef3aea-b
1a8-438b-994f-c3212def9aea,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559,PodSandboxId:f77a2b5b8dc99ddd1fb733288c586382c480f97e54d58009878cfc54644d8c4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727139869142722214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b09ad6ef-7517-4de2-a70c
-83876efd804e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7,PodSandboxId:6d64cdd87d3256594767df888a8365e0e40219a467933c6e3fdbc7beda771ffd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727139865490802476,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-465341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72d01994ec812b10b4b6f
0618a626fab,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f,PodSandboxId:22c25c49da19a5d516b484f6cbc6660c499c4fa70216bedc0db7d8a0038f2ef7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727139865507065014,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-465341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 146f0c671ce4286b89865c4
c32c180fa,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2,PodSandboxId:d7121dd08f0893752f0b17dcb0af76a06da336b3d662f56979dd37cb9288837d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727139865459470410,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-465341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62f128f51a989e62ff552186fa70bbf5,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba,PodSandboxId:c14e32efc528ad38562523f3dd3c921227b3245d78f555d61e74bf01f8569273,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727139865473282611,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-465341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b513a84f02bd83f80046c0ae57535d
3b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=47ba204a-3775-4f32-9f58-20720b252617 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:26:32 default-k8s-diff-port-465341 crio[715]: time="2024-09-24 01:26:32.069898767Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=28e02698-60ee-4fac-8281-619d67bc7f10 name=/runtime.v1.RuntimeService/Version
	Sep 24 01:26:32 default-k8s-diff-port-465341 crio[715]: time="2024-09-24 01:26:32.070031422Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=28e02698-60ee-4fac-8281-619d67bc7f10 name=/runtime.v1.RuntimeService/Version
	Sep 24 01:26:32 default-k8s-diff-port-465341 crio[715]: time="2024-09-24 01:26:32.072108072Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c34093cb-2891-49f6-bbc7-15e46ae51df5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:26:32 default-k8s-diff-port-465341 crio[715]: time="2024-09-24 01:26:32.072758798Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727141192072717513,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c34093cb-2891-49f6-bbc7-15e46ae51df5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:26:32 default-k8s-diff-port-465341 crio[715]: time="2024-09-24 01:26:32.073970150Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=46de7cdd-2a0a-4c0a-a8e4-63bc3e621de7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:26:32 default-k8s-diff-port-465341 crio[715]: time="2024-09-24 01:26:32.074064204Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=46de7cdd-2a0a-4c0a-a8e4-63bc3e621de7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:26:32 default-k8s-diff-port-465341 crio[715]: time="2024-09-24 01:26:32.074405972Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47,PodSandboxId:f77a2b5b8dc99ddd1fb733288c586382c480f97e54d58009878cfc54644d8c4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727139900031644597,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b09ad6ef-7517-4de2-a70c-83876efd804e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05d6e2df9cf9551e8317006279d4a7af98fddbd031fe31ac663ff5fd1f64e8ca,PodSandboxId:b12edb12d460bce1ab54a2f5f339453bb4643384734c6d41f9ad5e82d4e4a3c6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727139880067836834,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b2640213-e0c5-4e24-ab47-40ae93cf2dec,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f,PodSandboxId:86c947d9cd97c3f5ea879829e09c82152a248968de28cec304c8c11661c345bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727139876902937596,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xxdh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 297fe292-94bf-468d-9e34-089c4a87429b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc,PodSandboxId:1fb37f1fc655d87bc704f8dafaa719213ad4ed13467e59f0ce1ff33ec5f77993,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727139869186919309,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nf8mp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdef3aea-b
1a8-438b-994f-c3212def9aea,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559,PodSandboxId:f77a2b5b8dc99ddd1fb733288c586382c480f97e54d58009878cfc54644d8c4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727139869142722214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b09ad6ef-7517-4de2-a70c
-83876efd804e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7,PodSandboxId:6d64cdd87d3256594767df888a8365e0e40219a467933c6e3fdbc7beda771ffd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727139865490802476,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-465341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72d01994ec812b10b4b6f
0618a626fab,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f,PodSandboxId:22c25c49da19a5d516b484f6cbc6660c499c4fa70216bedc0db7d8a0038f2ef7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727139865507065014,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-465341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 146f0c671ce4286b89865c4
c32c180fa,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2,PodSandboxId:d7121dd08f0893752f0b17dcb0af76a06da336b3d662f56979dd37cb9288837d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727139865459470410,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-465341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62f128f51a989e62ff552186fa70bbf5,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba,PodSandboxId:c14e32efc528ad38562523f3dd3c921227b3245d78f555d61e74bf01f8569273,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727139865473282611,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-465341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b513a84f02bd83f80046c0ae57535d
3b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=46de7cdd-2a0a-4c0a-a8e4-63bc3e621de7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7b621e1c0feb5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Running             storage-provisioner       2                   f77a2b5b8dc99       storage-provisioner
	05d6e2df9cf95       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   21 minutes ago      Running             busybox                   1                   b12edb12d460b       busybox
	ddbd1006bd609       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      21 minutes ago      Running             coredns                   1                   86c947d9cd97c       coredns-7c65d6cfc9-xxdh2
	f31b7aed1cdf7       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      22 minutes ago      Running             kube-proxy                1                   1fb37f1fc655d       kube-proxy-nf8mp
	e76f05331da2e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      22 minutes ago      Exited              storage-provisioner       1                   f77a2b5b8dc99       storage-provisioner
	58d05b91989bd       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      22 minutes ago      Running             kube-scheduler            1                   22c25c49da19a       kube-scheduler-default-k8s-diff-port-465341
	306da3fd311af       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      22 minutes ago      Running             kube-apiserver            1                   6d64cdd87d325       kube-apiserver-default-k8s-diff-port-465341
	55e01b5780ebe       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      22 minutes ago      Running             kube-controller-manager   1                   c14e32efc528a       kube-controller-manager-default-k8s-diff-port-465341
	2c9f89868c713       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      22 minutes ago      Running             etcd                      1                   d7121dd08f089       etcd-default-k8s-diff-port-465341
	
	
	==> coredns [ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:54002 - 53945 "HINFO IN 8184409097673576607.808292174949133715. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.008897981s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-465341
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-465341
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c
	                    minikube.k8s.io/name=default-k8s-diff-port-465341
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_24T00_57_12_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 00:57:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-465341
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 01:26:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 01:25:21 +0000   Tue, 24 Sep 2024 00:57:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 01:25:21 +0000   Tue, 24 Sep 2024 00:57:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 01:25:21 +0000   Tue, 24 Sep 2024 00:57:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 01:25:21 +0000   Tue, 24 Sep 2024 01:04:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.186
	  Hostname:    default-k8s-diff-port-465341
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8c05df9f007d4c048ac491600582d36b
	  System UUID:                8c05df9f-007d-4c04-8ac4-91600582d36b
	  Boot ID:                    b433b690-8283-4013-993b-3f29777e81d0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 coredns-7c65d6cfc9-xxdh2                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-default-k8s-diff-port-465341                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-default-k8s-diff-port-465341             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-465341    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-nf8mp                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-default-k8s-diff-port-465341             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-6867b74b74-jtx6r                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29m                kube-proxy       
	  Normal  Starting                 22m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node default-k8s-diff-port-465341 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-465341 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-465341 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node default-k8s-diff-port-465341 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node default-k8s-diff-port-465341 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m                kubelet          Node default-k8s-diff-port-465341 status is now: NodeHasSufficientPID
	  Normal  NodeReady                29m                kubelet          Node default-k8s-diff-port-465341 status is now: NodeReady
	  Normal  RegisteredNode           29m                node-controller  Node default-k8s-diff-port-465341 event: Registered Node default-k8s-diff-port-465341 in Controller
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-465341 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-465341 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node default-k8s-diff-port-465341 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-465341 event: Registered Node default-k8s-diff-port-465341 in Controller
	
	
	==> dmesg <==
	[Sep24 01:03] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051499] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037024] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Sep24 01:04] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.905767] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.545343] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.580014] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.071407] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.077513] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.177433] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +0.151348] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +0.313267] systemd-fstab-generator[704]: Ignoring "noauto" option for root device
	[  +4.122634] systemd-fstab-generator[799]: Ignoring "noauto" option for root device
	[  +1.935499] systemd-fstab-generator[921]: Ignoring "noauto" option for root device
	[  +0.070061] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.517159] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.402509] systemd-fstab-generator[1561]: Ignoring "noauto" option for root device
	[  +1.351350] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.268876] kauditd_printk_skb: 44 callbacks suppressed
	
	
	==> etcd [2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2] <==
	{"level":"warn","ts":"2024-09-24T01:05:04.461007Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"226.816224ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-24T01:05:04.461032Z","caller":"traceutil/trace.go:171","msg":"trace[679521091] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:614; }","duration":"226.8557ms","start":"2024-09-24T01:05:04.234169Z","end":"2024-09-24T01:05:04.461024Z","steps":["trace[679521091] 'agreement among raft nodes before linearized reading'  (duration: 226.800549ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-24T01:14:27.088119Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":843}
	{"level":"info","ts":"2024-09-24T01:14:27.098988Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":843,"took":"10.447754ms","hash":900529936,"current-db-size-bytes":2703360,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2703360,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-09-24T01:14:27.099074Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":900529936,"revision":843,"compact-revision":-1}
	{"level":"info","ts":"2024-09-24T01:19:27.095419Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1085}
	{"level":"info","ts":"2024-09-24T01:19:27.099924Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1085,"took":"4.158026ms","hash":4226080038,"current-db-size-bytes":2703360,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1691648,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-09-24T01:19:27.100003Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4226080038,"revision":1085,"compact-revision":843}
	{"level":"info","ts":"2024-09-24T01:24:27.104982Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1329}
	{"level":"info","ts":"2024-09-24T01:24:27.108689Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1329,"took":"3.366384ms","hash":924538201,"current-db-size-bytes":2703360,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1654784,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-09-24T01:24:27.108745Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":924538201,"revision":1329,"compact-revision":1085}
	{"level":"info","ts":"2024-09-24T01:25:35.131626Z","caller":"traceutil/trace.go:171","msg":"trace[1140369442] linearizableReadLoop","detail":"{readStateIndex:1919; appliedIndex:1918; }","duration":"297.770092ms","start":"2024-09-24T01:25:34.833826Z","end":"2024-09-24T01:25:35.131596Z","steps":["trace[1140369442] 'read index received'  (duration: 297.604596ms)","trace[1140369442] 'applied index is now lower than readState.Index'  (duration: 165.194µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-24T01:25:35.132043Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"298.216166ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-24T01:25:35.132106Z","caller":"traceutil/trace.go:171","msg":"trace[169605801] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1628; }","duration":"298.346792ms","start":"2024-09-24T01:25:34.833747Z","end":"2024-09-24T01:25:35.132094Z","steps":["trace[169605801] 'agreement among raft nodes before linearized reading'  (duration: 298.131803ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-24T01:25:35.132394Z","caller":"traceutil/trace.go:171","msg":"trace[434976384] transaction","detail":"{read_only:false; response_revision:1628; number_of_response:1; }","duration":"348.308556ms","start":"2024-09-24T01:25:34.784045Z","end":"2024-09-24T01:25:35.132353Z","steps":["trace[434976384] 'process raft request'  (duration: 347.439695ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-24T01:25:35.133404Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-24T01:25:34.784025Z","time spent":"349.284361ms","remote":"127.0.0.1:52698","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1627 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-09-24T01:25:59.382036Z","caller":"traceutil/trace.go:171","msg":"trace[1278980408] transaction","detail":"{read_only:false; response_revision:1648; number_of_response:1; }","duration":"121.046617ms","start":"2024-09-24T01:25:59.260636Z","end":"2024-09-24T01:25:59.381683Z","steps":["trace[1278980408] 'process raft request'  (duration: 120.91729ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-24T01:25:59.578235Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.800901ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csidrivers/\" range_end:\"/registry/csidrivers0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-24T01:25:59.578325Z","caller":"traceutil/trace.go:171","msg":"trace[1124192829] range","detail":"{range_begin:/registry/csidrivers/; range_end:/registry/csidrivers0; response_count:0; response_revision:1648; }","duration":"122.934125ms","start":"2024-09-24T01:25:59.455371Z","end":"2024-09-24T01:25:59.578305Z","steps":["trace[1124192829] 'count revisions from in-memory index tree'  (duration: 122.693938ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-24T01:26:01.516910Z","caller":"traceutil/trace.go:171","msg":"trace[1352930076] transaction","detail":"{read_only:false; response_revision:1649; number_of_response:1; }","duration":"125.754894ms","start":"2024-09-24T01:26:01.391141Z","end":"2024-09-24T01:26:01.516896Z","steps":["trace[1352930076] 'process raft request'  (duration: 125.6498ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-24T01:26:23.700860Z","caller":"traceutil/trace.go:171","msg":"trace[1983041546] linearizableReadLoop","detail":"{readStateIndex:1969; appliedIndex:1968; }","duration":"229.175211ms","start":"2024-09-24T01:26:23.471650Z","end":"2024-09-24T01:26:23.700825Z","steps":["trace[1983041546] 'read index received'  (duration: 228.980167ms)","trace[1983041546] 'applied index is now lower than readState.Index'  (duration: 194.295µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-24T01:26:23.701143Z","caller":"traceutil/trace.go:171","msg":"trace[1293115481] transaction","detail":"{read_only:false; response_revision:1668; number_of_response:1; }","duration":"306.083586ms","start":"2024-09-24T01:26:23.395046Z","end":"2024-09-24T01:26:23.701130Z","steps":["trace[1293115481] 'process raft request'  (duration: 305.632605ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-24T01:26:23.701253Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"229.531897ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas/\" range_end:\"/registry/flowschemas0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-09-24T01:26:23.701367Z","caller":"traceutil/trace.go:171","msg":"trace[1753980491] range","detail":"{range_begin:/registry/flowschemas/; range_end:/registry/flowschemas0; response_count:0; response_revision:1668; }","duration":"229.723411ms","start":"2024-09-24T01:26:23.471629Z","end":"2024-09-24T01:26:23.701352Z","steps":["trace[1753980491] 'agreement among raft nodes before linearized reading'  (duration: 229.502299ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-24T01:26:23.701260Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-24T01:26:23.395031Z","time spent":"306.163471ms","remote":"127.0.0.1:52800","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":601,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-465341\" mod_revision:1660 > success:<request_put:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-465341\" value_size:532 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-465341\" > >"}
	
	
	==> kernel <==
	 01:26:32 up 22 min,  0 users,  load average: 0.07, 0.11, 0.11
	Linux default-k8s-diff-port-465341 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7] <==
	I0924 01:22:29.365419       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0924 01:22:29.365432       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0924 01:24:28.362551       1 handler_proxy.go:99] no RequestInfo found in the context
	E0924 01:24:28.362923       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0924 01:24:29.364673       1 handler_proxy.go:99] no RequestInfo found in the context
	W0924 01:24:29.364687       1 handler_proxy.go:99] no RequestInfo found in the context
	E0924 01:24:29.364869       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0924 01:24:29.364951       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0924 01:24:29.366019       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0924 01:24:29.366070       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0924 01:25:29.366561       1 handler_proxy.go:99] no RequestInfo found in the context
	E0924 01:25:29.366648       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0924 01:25:29.366723       1 handler_proxy.go:99] no RequestInfo found in the context
	E0924 01:25:29.366815       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0924 01:25:29.367930       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0924 01:25:29.368022       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba] <==
	E0924 01:21:04.086665       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:21:04.564657       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 01:21:34.094392       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:21:34.573122       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 01:22:04.100269       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:22:04.583833       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 01:22:34.108953       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:22:34.592329       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 01:23:04.117161       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:23:04.599533       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 01:23:34.128312       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:23:34.609634       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 01:24:04.135099       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:24:04.619971       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 01:24:34.141384       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:24:34.627943       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 01:25:04.152988       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:25:04.639544       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0924 01:25:21.089222       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-465341"
	E0924 01:25:34.159499       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:25:34.648991       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0924 01:25:57.832807       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="217.28µs"
	E0924 01:26:04.167086       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:26:04.658157       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0924 01:26:10.828617       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="172.784µs"
	
	
	==> kube-proxy [f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0924 01:04:29.420543       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0924 01:04:29.429430       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.186"]
	E0924 01:04:29.429635       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0924 01:04:29.473443       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0924 01:04:29.473488       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0924 01:04:29.473512       1 server_linux.go:169] "Using iptables Proxier"
	I0924 01:04:29.475745       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0924 01:04:29.476194       1 server.go:483] "Version info" version="v1.31.1"
	I0924 01:04:29.476219       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 01:04:29.477696       1 config.go:199] "Starting service config controller"
	I0924 01:04:29.477736       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0924 01:04:29.477759       1 config.go:105] "Starting endpoint slice config controller"
	I0924 01:04:29.477795       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0924 01:04:29.478336       1 config.go:328] "Starting node config controller"
	I0924 01:04:29.478358       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0924 01:04:29.578303       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0924 01:04:29.578415       1 shared_informer.go:320] Caches are synced for service config
	I0924 01:04:29.578703       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f] <==
	I0924 01:04:26.452076       1 serving.go:386] Generated self-signed cert in-memory
	W0924 01:04:28.303437       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0924 01:04:28.303649       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0924 01:04:28.303715       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0924 01:04:28.303744       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0924 01:04:28.401171       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0924 01:04:28.401280       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 01:04:28.407036       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0924 01:04:28.407176       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0924 01:04:28.408155       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0924 01:04:28.407232       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0924 01:04:28.509662       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 24 01:25:27 default-k8s-diff-port-465341 kubelet[928]: E0924 01:25:27.811529     928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-jtx6r" podUID="d83599a7-f77d-4fbb-b76f-67d33c60b4a6"
	Sep 24 01:25:34 default-k8s-diff-port-465341 kubelet[928]: E0924 01:25:34.093475     928 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727141134093118236,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:25:34 default-k8s-diff-port-465341 kubelet[928]: E0924 01:25:34.093902     928 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727141134093118236,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:25:42 default-k8s-diff-port-465341 kubelet[928]: E0924 01:25:42.831724     928 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 24 01:25:42 default-k8s-diff-port-465341 kubelet[928]: E0924 01:25:42.831859     928 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 24 01:25:42 default-k8s-diff-port-465341 kubelet[928]: E0924 01:25:42.832228     928 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-75cms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPr
opagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:
nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-jtx6r_kube-system(d83599a7-f77d-4fbb-b76f-67d33c60b4a6): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Sep 24 01:25:42 default-k8s-diff-port-465341 kubelet[928]: E0924 01:25:42.833929     928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-jtx6r" podUID="d83599a7-f77d-4fbb-b76f-67d33c60b4a6"
	Sep 24 01:25:44 default-k8s-diff-port-465341 kubelet[928]: E0924 01:25:44.096546     928 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727141144096044355,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:25:44 default-k8s-diff-port-465341 kubelet[928]: E0924 01:25:44.097115     928 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727141144096044355,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:25:54 default-k8s-diff-port-465341 kubelet[928]: E0924 01:25:54.099355     928 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727141154098419051,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:25:54 default-k8s-diff-port-465341 kubelet[928]: E0924 01:25:54.099828     928 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727141154098419051,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:25:57 default-k8s-diff-port-465341 kubelet[928]: E0924 01:25:57.811645     928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-jtx6r" podUID="d83599a7-f77d-4fbb-b76f-67d33c60b4a6"
	Sep 24 01:26:04 default-k8s-diff-port-465341 kubelet[928]: E0924 01:26:04.103471     928 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727141164102670912,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:26:04 default-k8s-diff-port-465341 kubelet[928]: E0924 01:26:04.103540     928 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727141164102670912,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:26:10 default-k8s-diff-port-465341 kubelet[928]: E0924 01:26:10.810095     928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-jtx6r" podUID="d83599a7-f77d-4fbb-b76f-67d33c60b4a6"
	Sep 24 01:26:14 default-k8s-diff-port-465341 kubelet[928]: E0924 01:26:14.107037     928 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727141174106344959,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:26:14 default-k8s-diff-port-465341 kubelet[928]: E0924 01:26:14.107410     928 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727141174106344959,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:26:23 default-k8s-diff-port-465341 kubelet[928]: E0924 01:26:23.826749     928 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 24 01:26:23 default-k8s-diff-port-465341 kubelet[928]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 24 01:26:23 default-k8s-diff-port-465341 kubelet[928]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 24 01:26:23 default-k8s-diff-port-465341 kubelet[928]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 24 01:26:23 default-k8s-diff-port-465341 kubelet[928]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 24 01:26:24 default-k8s-diff-port-465341 kubelet[928]: E0924 01:26:24.110960     928 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727141184110317374,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:26:24 default-k8s-diff-port-465341 kubelet[928]: E0924 01:26:24.111015     928 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727141184110317374,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:26:25 default-k8s-diff-port-465341 kubelet[928]: E0924 01:26:25.812312     928 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-jtx6r" podUID="d83599a7-f77d-4fbb-b76f-67d33c60b4a6"
	
	
	==> storage-provisioner [7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47] <==
	I0924 01:05:00.141302       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0924 01:05:00.151479       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0924 01:05:00.152326       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0924 01:05:00.165372       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0924 01:05:00.165630       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-465341_53a896ee-5b4c-4683-8f2e-a9fa6b1638d4!
	I0924 01:05:00.166965       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"58543f7e-6980-4184-8e2e-1690eb4b49fa", APIVersion:"v1", ResourceVersion:"606", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-465341_53a896ee-5b4c-4683-8f2e-a9fa6b1638d4 became leader
	I0924 01:05:00.266450       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-465341_53a896ee-5b4c-4683-8f2e-a9fa6b1638d4!
	
	
	==> storage-provisioner [e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559] <==
	I0924 01:04:29.231639       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0924 01:04:59.234591       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
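The dump above traces this failure to a single chain: the metrics-server pod never pulls its image from the deliberately unreachable fake.domain registry (kubelet ErrImagePull/ImagePullBackOff), so the aggregated v1beta1.metrics.k8s.io APIService stays unavailable, which is what the kube-apiserver 503s and the controller-manager "stale GroupVersion discovery" errors keep reporting. A minimal Go sketch of how that APIService condition could be inspected outside the harness; the context name is taken from the profile above, and kubectl being on PATH is an assumption, not something the report states:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Read the Available condition of the aggregated metrics APIService; while
	// metrics-server is down this should report False plus the aggregator's reason.
	out, err := exec.Command("kubectl",
		"--context", "default-k8s-diff-port-465341",
		"get", "apiservice", "v1beta1.metrics.k8s.io",
		"-o", `jsonpath={.status.conditions[?(@.type=="Available")].status}{" "}{.status.conditions[?(@.type=="Available")].message}{"\n"}`,
	).CombinedOutput()
	if err != nil {
		fmt.Printf("kubectl failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("Available=%s", out)
}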
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-465341 -n default-k8s-diff-port-465341
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-465341 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-jtx6r
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-465341 describe pod metrics-server-6867b74b74-jtx6r
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-465341 describe pod metrics-server-6867b74b74-jtx6r: exit status 1 (75.684355ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-jtx6r" not found

** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-465341 describe pod metrics-server-6867b74b74-jtx6r: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (511.58s)
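The post-mortem above narrows the failure to pods outside phase Running; the one hit, metrics-server-6867b74b74-jtx6r, is already gone by the time the follow-up describe runs (hence the NotFound), presumably replaced by its ReplicaSet in the interval. A minimal Go sketch of the same field-selector query the harness issues, assuming kubectl is on PATH and the profile's kubeconfig context still exists:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same query helpers_test.go runs: names of all pods, in every namespace,
	// whose phase is anything other than Running.
	out, err := exec.Command("kubectl",
		"--context", "default-k8s-diff-port-465341",
		"get", "po", "-A",
		"-o", "jsonpath={.items[*].metadata.name}",
		"--field-selector", "status.phase!=Running",
	).CombinedOutput()
	if err != nil {
		fmt.Printf("kubectl failed: %v\n%s", err, out)
		return
	}
	names := strings.Fields(string(out))
	fmt.Printf("%d non-running pod(s): %v\n", len(names), names)
}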

x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (415.8s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-650507 -n embed-certs-650507
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-09-24 01:25:09.286453417 +0000 UTC m=+6450.717535861
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-650507 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-650507 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.683µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-650507 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
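The assertion at this step only checks that the scraper deployment's image string contains registry.k8s.io/echoserver:1.4; because the dashboard addon never came up, the deployment info above is empty. A hedged Go sketch of an equivalent spot check (deployment and namespace names are taken from the lines above; kubectl on PATH is assumed, and when the addon never deploys this exits non-zero just as the post-mortem does):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Pull the container image(s) of the dashboard-metrics-scraper deployment.
	out, err := exec.Command("kubectl",
		"--context", "embed-certs-650507",
		"-n", "kubernetes-dashboard",
		"get", "deploy", "dashboard-metrics-scraper",
		"-o", "jsonpath={.spec.template.spec.containers[*].image}",
	).CombinedOutput()
	if err != nil {
		fmt.Printf("deployment not readable: %v\n%s", err, out)
		return
	}
	if !strings.Contains(string(out), "registry.k8s.io/echoserver:1.4") {
		fmt.Printf("unexpected image(s): %s\n", out)
		return
	}
	fmt.Println("dashboard-metrics-scraper uses the expected echoserver image")
}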
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-650507 -n embed-certs-650507
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-650507 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-650507 logs -n 25: (1.41081609s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p embed-certs-650507                                  | embed-certs-650507           | jenkins | v1.34.0 | 24 Sep 24 00:55 UTC | 24 Sep 24 00:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-811247                              | cert-expiration-811247       | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC | 24 Sep 24 00:56 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-674057             | no-preload-674057            | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC | 24 Sep 24 00:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-674057                                   | no-preload-674057            | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-811247                              | cert-expiration-811247       | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC | 24 Sep 24 00:56 UTC |
	| delete  | -p                                                     | disable-driver-mounts-319683 | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC | 24 Sep 24 00:56 UTC |
	|         | disable-driver-mounts-319683                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-465341 | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC | 24 Sep 24 00:57 UTC |
	|         | default-k8s-diff-port-465341                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-650507            | embed-certs-650507           | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC | 24 Sep 24 00:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-650507                                  | embed-certs-650507           | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-465341  | default-k8s-diff-port-465341 | jenkins | v1.34.0 | 24 Sep 24 00:57 UTC | 24 Sep 24 00:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-465341 | jenkins | v1.34.0 | 24 Sep 24 00:57 UTC |                     |
	|         | default-k8s-diff-port-465341                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-674057                  | no-preload-674057            | jenkins | v1.34.0 | 24 Sep 24 00:58 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-674057                                   | no-preload-674057            | jenkins | v1.34.0 | 24 Sep 24 00:58 UTC | 24 Sep 24 01:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-650507                 | embed-certs-650507           | jenkins | v1.34.0 | 24 Sep 24 00:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-171598        | old-k8s-version-171598       | jenkins | v1.34.0 | 24 Sep 24 00:59 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-650507                                  | embed-certs-650507           | jenkins | v1.34.0 | 24 Sep 24 00:59 UTC | 24 Sep 24 01:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-465341       | default-k8s-diff-port-465341 | jenkins | v1.34.0 | 24 Sep 24 00:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-465341 | jenkins | v1.34.0 | 24 Sep 24 01:00 UTC | 24 Sep 24 01:08 UTC |
	|         | default-k8s-diff-port-465341                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-171598                              | old-k8s-version-171598       | jenkins | v1.34.0 | 24 Sep 24 01:00 UTC | 24 Sep 24 01:00 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-171598             | old-k8s-version-171598       | jenkins | v1.34.0 | 24 Sep 24 01:00 UTC | 24 Sep 24 01:00 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-171598                              | old-k8s-version-171598       | jenkins | v1.34.0 | 24 Sep 24 01:00 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-171598                              | old-k8s-version-171598       | jenkins | v1.34.0 | 24 Sep 24 01:24 UTC | 24 Sep 24 01:24 UTC |
	| start   | -p newest-cni-185978 --memory=2200 --alsologtostderr   | newest-cni-185978            | jenkins | v1.34.0 | 24 Sep 24 01:24 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-674057                                   | no-preload-674057            | jenkins | v1.34.0 | 24 Sep 24 01:25 UTC | 24 Sep 24 01:25 UTC |
	| start   | -p auto-447054 --memory=3072                           | auto-447054                  | jenkins | v1.34.0 | 24 Sep 24 01:25 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 01:25:03
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 01:25:03.538025   69667 out.go:345] Setting OutFile to fd 1 ...
	I0924 01:25:03.538301   69667 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 01:25:03.538313   69667 out.go:358] Setting ErrFile to fd 2...
	I0924 01:25:03.538319   69667 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 01:25:03.538542   69667 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7623/.minikube/bin
	I0924 01:25:03.539118   69667 out.go:352] Setting JSON to false
	I0924 01:25:03.540025   69667 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7648,"bootTime":1727133456,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0924 01:25:03.540125   69667 start.go:139] virtualization: kvm guest
	I0924 01:25:03.542437   69667 out.go:177] * [auto-447054] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0924 01:25:03.544384   69667 out.go:177]   - MINIKUBE_LOCATION=19696
	I0924 01:25:03.544380   69667 notify.go:220] Checking for updates...
	I0924 01:25:03.547147   69667 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 01:25:03.548535   69667 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 01:25:03.549893   69667 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7623/.minikube
	I0924 01:25:03.551136   69667 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0924 01:25:03.552236   69667 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 01:25:03.553811   69667 config.go:182] Loaded profile config "default-k8s-diff-port-465341": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 01:25:03.553903   69667 config.go:182] Loaded profile config "embed-certs-650507": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 01:25:03.553990   69667 config.go:182] Loaded profile config "newest-cni-185978": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 01:25:03.554077   69667 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 01:25:03.592244   69667 out.go:177] * Using the kvm2 driver based on user configuration
	I0924 01:25:03.593388   69667 start.go:297] selected driver: kvm2
	I0924 01:25:03.593404   69667 start.go:901] validating driver "kvm2" against <nil>
	I0924 01:25:03.593416   69667 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 01:25:03.594236   69667 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 01:25:03.594332   69667 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19696-7623/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0924 01:25:03.610052   69667 install.go:137] /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0924 01:25:03.610099   69667 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0924 01:25:03.610365   69667 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 01:25:03.610400   69667 cni.go:84] Creating CNI manager for ""
	I0924 01:25:03.610462   69667 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:25:03.610477   69667 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0924 01:25:03.610554   69667 start.go:340] cluster config:
	{Name:auto-447054 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-447054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 01:25:03.610668   69667 iso.go:125] acquiring lock: {Name:mk8a983e49e920eac32e0f79c34143c3d478115e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 01:25:03.612593   69667 out.go:177] * Starting "auto-447054" primary control-plane node in "auto-447054" cluster
	I0924 01:25:03.613628   69667 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 01:25:03.613662   69667 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0924 01:25:03.613672   69667 cache.go:56] Caching tarball of preloaded images
	I0924 01:25:03.613763   69667 preload.go:172] Found /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0924 01:25:03.613777   69667 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0924 01:25:03.613872   69667 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/auto-447054/config.json ...
	I0924 01:25:03.613896   69667 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/auto-447054/config.json: {Name:mk658b8eaf9df52dcf0221aeefd0d1e912881e4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:25:03.614081   69667 start.go:360] acquireMachinesLock for auto-447054: {Name:mkdd0eb053efc5a47fd01c8410d7f603dccb8d0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 01:25:04.709180   69667 start.go:364] duration metric: took 1.095056144s to acquireMachinesLock for "auto-447054"
	I0924 01:25:04.709250   69667 start.go:93] Provisioning new machine with config: &{Name:auto-447054 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-447054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 01:25:04.709396   69667 start.go:125] createHost starting for "" (driver="kvm2")
	I0924 01:25:03.285643   69197 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:25:03.286234   69197 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has current primary IP address 192.168.72.50 and MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:25:03.286259   69197 main.go:141] libmachine: (newest-cni-185978) Found IP for machine: 192.168.72.50
	I0924 01:25:03.286273   69197 main.go:141] libmachine: (newest-cni-185978) Reserving static IP address...
	I0924 01:25:03.286690   69197 main.go:141] libmachine: (newest-cni-185978) DBG | unable to find host DHCP lease matching {name: "newest-cni-185978", mac: "52:54:00:fa:98:80", ip: "192.168.72.50"} in network mk-newest-cni-185978
	I0924 01:25:03.366032   69197 main.go:141] libmachine: (newest-cni-185978) DBG | Getting to WaitForSSH function...
	I0924 01:25:03.366062   69197 main.go:141] libmachine: (newest-cni-185978) Reserved static IP address: 192.168.72.50
	I0924 01:25:03.366075   69197 main.go:141] libmachine: (newest-cni-185978) Waiting for SSH to be available...
	I0924 01:25:03.369216   69197 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:25:03.369660   69197 main.go:141] libmachine: (newest-cni-185978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:98:80", ip: ""} in network mk-newest-cni-185978: {Iface:virbr3 ExpiryTime:2024-09-24 02:24:54 +0000 UTC Type:0 Mac:52:54:00:fa:98:80 Iaid: IPaddr:192.168.72.50 Prefix:24 Hostname:minikube Clientid:01:52:54:00:fa:98:80}
	I0924 01:25:03.369687   69197 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined IP address 192.168.72.50 and MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:25:03.369854   69197 main.go:141] libmachine: (newest-cni-185978) DBG | Using SSH client type: external
	I0924 01:25:03.369876   69197 main.go:141] libmachine: (newest-cni-185978) DBG | Using SSH private key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/newest-cni-185978/id_rsa (-rw-------)
	I0924 01:25:03.369926   69197 main.go:141] libmachine: (newest-cni-185978) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.50 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19696-7623/.minikube/machines/newest-cni-185978/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 01:25:03.369958   69197 main.go:141] libmachine: (newest-cni-185978) DBG | About to run SSH command:
	I0924 01:25:03.369978   69197 main.go:141] libmachine: (newest-cni-185978) DBG | exit 0
	I0924 01:25:03.500714   69197 main.go:141] libmachine: (newest-cni-185978) DBG | SSH cmd err, output: <nil>: 
	I0924 01:25:03.501031   69197 main.go:141] libmachine: (newest-cni-185978) KVM machine creation complete!
	I0924 01:25:03.501421   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetConfigRaw
	I0924 01:25:03.501990   69197 main.go:141] libmachine: (newest-cni-185978) Calling .DriverName
	I0924 01:25:03.502198   69197 main.go:141] libmachine: (newest-cni-185978) Calling .DriverName
	I0924 01:25:03.502383   69197 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0924 01:25:03.502395   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetState
	I0924 01:25:03.503869   69197 main.go:141] libmachine: Detecting operating system of created instance...
	I0924 01:25:03.503888   69197 main.go:141] libmachine: Waiting for SSH to be available...
	I0924 01:25:03.503893   69197 main.go:141] libmachine: Getting to WaitForSSH function...
	I0924 01:25:03.503899   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHHostname
	I0924 01:25:03.506954   69197 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:25:03.507503   69197 main.go:141] libmachine: (newest-cni-185978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:98:80", ip: ""} in network mk-newest-cni-185978: {Iface:virbr3 ExpiryTime:2024-09-24 02:24:54 +0000 UTC Type:0 Mac:52:54:00:fa:98:80 Iaid: IPaddr:192.168.72.50 Prefix:24 Hostname:newest-cni-185978 Clientid:01:52:54:00:fa:98:80}
	I0924 01:25:03.507546   69197 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined IP address 192.168.72.50 and MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:25:03.507744   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHPort
	I0924 01:25:03.507916   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHKeyPath
	I0924 01:25:03.508041   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHKeyPath
	I0924 01:25:03.508189   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHUsername
	I0924 01:25:03.508414   69197 main.go:141] libmachine: Using SSH client type: native
	I0924 01:25:03.508608   69197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.50 22 <nil> <nil>}
	I0924 01:25:03.508621   69197 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0924 01:25:03.611828   69197 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 01:25:03.611848   69197 main.go:141] libmachine: Detecting the provisioner...
	I0924 01:25:03.611858   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHHostname
	I0924 01:25:03.614755   69197 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:25:03.615139   69197 main.go:141] libmachine: (newest-cni-185978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:98:80", ip: ""} in network mk-newest-cni-185978: {Iface:virbr3 ExpiryTime:2024-09-24 02:24:54 +0000 UTC Type:0 Mac:52:54:00:fa:98:80 Iaid: IPaddr:192.168.72.50 Prefix:24 Hostname:newest-cni-185978 Clientid:01:52:54:00:fa:98:80}
	I0924 01:25:03.615164   69197 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined IP address 192.168.72.50 and MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:25:03.615362   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHPort
	I0924 01:25:03.615543   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHKeyPath
	I0924 01:25:03.615703   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHKeyPath
	I0924 01:25:03.615855   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHUsername
	I0924 01:25:03.616028   69197 main.go:141] libmachine: Using SSH client type: native
	I0924 01:25:03.616189   69197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.50 22 <nil> <nil>}
	I0924 01:25:03.616200   69197 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0924 01:25:03.716999   69197 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0924 01:25:03.717097   69197 main.go:141] libmachine: found compatible host: buildroot
	I0924 01:25:03.717112   69197 main.go:141] libmachine: Provisioning with buildroot...
	I0924 01:25:03.717127   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetMachineName
	I0924 01:25:03.717434   69197 buildroot.go:166] provisioning hostname "newest-cni-185978"
	I0924 01:25:03.717466   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetMachineName
	I0924 01:25:03.717714   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHHostname
	I0924 01:25:03.720829   69197 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:25:03.721207   69197 main.go:141] libmachine: (newest-cni-185978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:98:80", ip: ""} in network mk-newest-cni-185978: {Iface:virbr3 ExpiryTime:2024-09-24 02:24:54 +0000 UTC Type:0 Mac:52:54:00:fa:98:80 Iaid: IPaddr:192.168.72.50 Prefix:24 Hostname:newest-cni-185978 Clientid:01:52:54:00:fa:98:80}
	I0924 01:25:03.721254   69197 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined IP address 192.168.72.50 and MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:25:03.721454   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHPort
	I0924 01:25:03.721637   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHKeyPath
	I0924 01:25:03.721789   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHKeyPath
	I0924 01:25:03.721945   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHUsername
	I0924 01:25:03.722084   69197 main.go:141] libmachine: Using SSH client type: native
	I0924 01:25:03.722307   69197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.50 22 <nil> <nil>}
	I0924 01:25:03.722326   69197 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-185978 && echo "newest-cni-185978" | sudo tee /etc/hostname
	I0924 01:25:03.841541   69197 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-185978
	
	I0924 01:25:03.841576   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHHostname
	I0924 01:25:03.844679   69197 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:25:03.845127   69197 main.go:141] libmachine: (newest-cni-185978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:98:80", ip: ""} in network mk-newest-cni-185978: {Iface:virbr3 ExpiryTime:2024-09-24 02:24:54 +0000 UTC Type:0 Mac:52:54:00:fa:98:80 Iaid: IPaddr:192.168.72.50 Prefix:24 Hostname:newest-cni-185978 Clientid:01:52:54:00:fa:98:80}
	I0924 01:25:03.845166   69197 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined IP address 192.168.72.50 and MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:25:03.845320   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHPort
	I0924 01:25:03.845521   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHKeyPath
	I0924 01:25:03.845741   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHKeyPath
	I0924 01:25:03.845873   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHUsername
	I0924 01:25:03.846045   69197 main.go:141] libmachine: Using SSH client type: native
	I0924 01:25:03.846217   69197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.50 22 <nil> <nil>}
	I0924 01:25:03.846233   69197 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-185978' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-185978/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-185978' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 01:25:03.960570   69197 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 01:25:03.960601   69197 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19696-7623/.minikube CaCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19696-7623/.minikube}
	I0924 01:25:03.960623   69197 buildroot.go:174] setting up certificates
	I0924 01:25:03.960634   69197 provision.go:84] configureAuth start
	I0924 01:25:03.960648   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetMachineName
	I0924 01:25:03.960954   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetIP
	I0924 01:25:03.964297   69197 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:25:03.964767   69197 main.go:141] libmachine: (newest-cni-185978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:98:80", ip: ""} in network mk-newest-cni-185978: {Iface:virbr3 ExpiryTime:2024-09-24 02:24:54 +0000 UTC Type:0 Mac:52:54:00:fa:98:80 Iaid: IPaddr:192.168.72.50 Prefix:24 Hostname:newest-cni-185978 Clientid:01:52:54:00:fa:98:80}
	I0924 01:25:03.964798   69197 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined IP address 192.168.72.50 and MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:25:03.964993   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHHostname
	I0924 01:25:03.967592   69197 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:25:03.967935   69197 main.go:141] libmachine: (newest-cni-185978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:98:80", ip: ""} in network mk-newest-cni-185978: {Iface:virbr3 ExpiryTime:2024-09-24 02:24:54 +0000 UTC Type:0 Mac:52:54:00:fa:98:80 Iaid: IPaddr:192.168.72.50 Prefix:24 Hostname:newest-cni-185978 Clientid:01:52:54:00:fa:98:80}
	I0924 01:25:03.967961   69197 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined IP address 192.168.72.50 and MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:25:03.968094   69197 provision.go:143] copyHostCerts
	I0924 01:25:03.968166   69197 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem, removing ...
	I0924 01:25:03.968181   69197 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0924 01:25:03.968273   69197 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem (1679 bytes)
	I0924 01:25:03.968425   69197 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem, removing ...
	I0924 01:25:03.968438   69197 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0924 01:25:03.968476   69197 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem (1082 bytes)
	I0924 01:25:03.968552   69197 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem, removing ...
	I0924 01:25:03.968563   69197 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0924 01:25:03.968595   69197 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem (1123 bytes)
	I0924 01:25:03.968661   69197 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem org=jenkins.newest-cni-185978 san=[127.0.0.1 192.168.72.50 localhost minikube newest-cni-185978]
	I0924 01:25:04.081292   69197 provision.go:177] copyRemoteCerts
	I0924 01:25:04.081345   69197 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 01:25:04.081367   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHHostname
	I0924 01:25:04.084789   69197 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:25:04.085159   69197 main.go:141] libmachine: (newest-cni-185978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:98:80", ip: ""} in network mk-newest-cni-185978: {Iface:virbr3 ExpiryTime:2024-09-24 02:24:54 +0000 UTC Type:0 Mac:52:54:00:fa:98:80 Iaid: IPaddr:192.168.72.50 Prefix:24 Hostname:newest-cni-185978 Clientid:01:52:54:00:fa:98:80}
	I0924 01:25:04.085207   69197 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined IP address 192.168.72.50 and MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:25:04.085400   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHPort
	I0924 01:25:04.085624   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHKeyPath
	I0924 01:25:04.085761   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHUsername
	I0924 01:25:04.085904   69197 sshutil.go:53] new ssh client: &{IP:192.168.72.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/newest-cni-185978/id_rsa Username:docker}
	I0924 01:25:04.166103   69197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0924 01:25:04.193695   69197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0924 01:25:04.218052   69197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0924 01:25:04.241633   69197 provision.go:87] duration metric: took 280.985134ms to configureAuth
	I0924 01:25:04.241665   69197 buildroot.go:189] setting minikube options for container-runtime
	I0924 01:25:04.241871   69197 config.go:182] Loaded profile config "newest-cni-185978": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 01:25:04.241963   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHHostname
	I0924 01:25:04.244417   69197 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:25:04.244763   69197 main.go:141] libmachine: (newest-cni-185978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:98:80", ip: ""} in network mk-newest-cni-185978: {Iface:virbr3 ExpiryTime:2024-09-24 02:24:54 +0000 UTC Type:0 Mac:52:54:00:fa:98:80 Iaid: IPaddr:192.168.72.50 Prefix:24 Hostname:newest-cni-185978 Clientid:01:52:54:00:fa:98:80}
	I0924 01:25:04.244799   69197 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined IP address 192.168.72.50 and MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:25:04.244911   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHPort
	I0924 01:25:04.245103   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHKeyPath
	I0924 01:25:04.245282   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHKeyPath
	I0924 01:25:04.245416   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHUsername
	I0924 01:25:04.245575   69197 main.go:141] libmachine: Using SSH client type: native
	I0924 01:25:04.245729   69197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.50 22 <nil> <nil>}
	I0924 01:25:04.245743   69197 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 01:25:04.468052   69197 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 01:25:04.468080   69197 main.go:141] libmachine: Checking connection to Docker...
	I0924 01:25:04.468089   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetURL
	I0924 01:25:04.469476   69197 main.go:141] libmachine: (newest-cni-185978) DBG | Using libvirt version 6000000
	I0924 01:25:04.472185   69197 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:25:04.472588   69197 main.go:141] libmachine: (newest-cni-185978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:98:80", ip: ""} in network mk-newest-cni-185978: {Iface:virbr3 ExpiryTime:2024-09-24 02:24:54 +0000 UTC Type:0 Mac:52:54:00:fa:98:80 Iaid: IPaddr:192.168.72.50 Prefix:24 Hostname:newest-cni-185978 Clientid:01:52:54:00:fa:98:80}
	I0924 01:25:04.472614   69197 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined IP address 192.168.72.50 and MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:25:04.472837   69197 main.go:141] libmachine: Docker is up and running!
	I0924 01:25:04.472852   69197 main.go:141] libmachine: Reticulating splines...
	I0924 01:25:04.472859   69197 client.go:171] duration metric: took 24.119714311s to LocalClient.Create
	I0924 01:25:04.472883   69197 start.go:167] duration metric: took 24.119780624s to libmachine.API.Create "newest-cni-185978"
	I0924 01:25:04.472894   69197 start.go:293] postStartSetup for "newest-cni-185978" (driver="kvm2")
	I0924 01:25:04.472911   69197 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 01:25:04.472935   69197 main.go:141] libmachine: (newest-cni-185978) Calling .DriverName
	I0924 01:25:04.473221   69197 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 01:25:04.473250   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHHostname
	I0924 01:25:04.475677   69197 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:25:04.476019   69197 main.go:141] libmachine: (newest-cni-185978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:98:80", ip: ""} in network mk-newest-cni-185978: {Iface:virbr3 ExpiryTime:2024-09-24 02:24:54 +0000 UTC Type:0 Mac:52:54:00:fa:98:80 Iaid: IPaddr:192.168.72.50 Prefix:24 Hostname:newest-cni-185978 Clientid:01:52:54:00:fa:98:80}
	I0924 01:25:04.476045   69197 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined IP address 192.168.72.50 and MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:25:04.476216   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHPort
	I0924 01:25:04.476416   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHKeyPath
	I0924 01:25:04.476577   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHUsername
	I0924 01:25:04.476776   69197 sshutil.go:53] new ssh client: &{IP:192.168.72.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/newest-cni-185978/id_rsa Username:docker}
	I0924 01:25:04.558662   69197 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 01:25:04.562655   69197 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 01:25:04.562679   69197 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/addons for local assets ...
	I0924 01:25:04.562737   69197 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/files for local assets ...
	I0924 01:25:04.562816   69197 filesync.go:149] local asset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> 147932.pem in /etc/ssl/certs
	I0924 01:25:04.562916   69197 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 01:25:04.572036   69197 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /etc/ssl/certs/147932.pem (1708 bytes)
	I0924 01:25:04.597443   69197 start.go:296] duration metric: took 124.529484ms for postStartSetup
	I0924 01:25:04.597499   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetConfigRaw
	I0924 01:25:04.598120   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetIP
	I0924 01:25:04.601109   69197 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:25:04.601525   69197 main.go:141] libmachine: (newest-cni-185978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:98:80", ip: ""} in network mk-newest-cni-185978: {Iface:virbr3 ExpiryTime:2024-09-24 02:24:54 +0000 UTC Type:0 Mac:52:54:00:fa:98:80 Iaid: IPaddr:192.168.72.50 Prefix:24 Hostname:newest-cni-185978 Clientid:01:52:54:00:fa:98:80}
	I0924 01:25:04.601556   69197 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined IP address 192.168.72.50 and MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:25:04.601819   69197 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/newest-cni-185978/config.json ...
	I0924 01:25:04.602016   69197 start.go:128] duration metric: took 24.267626165s to createHost
	I0924 01:25:04.602037   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHHostname
	I0924 01:25:04.604584   69197 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:25:04.605030   69197 main.go:141] libmachine: (newest-cni-185978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:98:80", ip: ""} in network mk-newest-cni-185978: {Iface:virbr3 ExpiryTime:2024-09-24 02:24:54 +0000 UTC Type:0 Mac:52:54:00:fa:98:80 Iaid: IPaddr:192.168.72.50 Prefix:24 Hostname:newest-cni-185978 Clientid:01:52:54:00:fa:98:80}
	I0924 01:25:04.605055   69197 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined IP address 192.168.72.50 and MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:25:04.605222   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHPort
	I0924 01:25:04.605466   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHKeyPath
	I0924 01:25:04.605650   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHKeyPath
	I0924 01:25:04.605816   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHUsername
	I0924 01:25:04.606083   69197 main.go:141] libmachine: Using SSH client type: native
	I0924 01:25:04.606280   69197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.50 22 <nil> <nil>}
	I0924 01:25:04.606292   69197 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 01:25:04.709027   69197 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727141104.678509237
	
	I0924 01:25:04.709050   69197 fix.go:216] guest clock: 1727141104.678509237
	I0924 01:25:04.709057   69197 fix.go:229] Guest: 2024-09-24 01:25:04.678509237 +0000 UTC Remote: 2024-09-24 01:25:04.602027533 +0000 UTC m=+24.382634879 (delta=76.481704ms)
	I0924 01:25:04.709075   69197 fix.go:200] guest clock delta is within tolerance: 76.481704ms
	I0924 01:25:04.709080   69197 start.go:83] releasing machines lock for "newest-cni-185978", held for 24.374760485s
	I0924 01:25:04.709116   69197 main.go:141] libmachine: (newest-cni-185978) Calling .DriverName
	I0924 01:25:04.709421   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetIP
	I0924 01:25:04.713017   69197 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:25:04.713407   69197 main.go:141] libmachine: (newest-cni-185978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:98:80", ip: ""} in network mk-newest-cni-185978: {Iface:virbr3 ExpiryTime:2024-09-24 02:24:54 +0000 UTC Type:0 Mac:52:54:00:fa:98:80 Iaid: IPaddr:192.168.72.50 Prefix:24 Hostname:newest-cni-185978 Clientid:01:52:54:00:fa:98:80}
	I0924 01:25:04.713438   69197 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined IP address 192.168.72.50 and MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:25:04.713608   69197 main.go:141] libmachine: (newest-cni-185978) Calling .DriverName
	I0924 01:25:04.714229   69197 main.go:141] libmachine: (newest-cni-185978) Calling .DriverName
	I0924 01:25:04.714421   69197 main.go:141] libmachine: (newest-cni-185978) Calling .DriverName
	I0924 01:25:04.714531   69197 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 01:25:04.714590   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHHostname
	I0924 01:25:04.714636   69197 ssh_runner.go:195] Run: cat /version.json
	I0924 01:25:04.714679   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHHostname
	I0924 01:25:04.717494   69197 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:25:04.717753   69197 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:25:04.717870   69197 main.go:141] libmachine: (newest-cni-185978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:98:80", ip: ""} in network mk-newest-cni-185978: {Iface:virbr3 ExpiryTime:2024-09-24 02:24:54 +0000 UTC Type:0 Mac:52:54:00:fa:98:80 Iaid: IPaddr:192.168.72.50 Prefix:24 Hostname:newest-cni-185978 Clientid:01:52:54:00:fa:98:80}
	I0924 01:25:04.718043   69197 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined IP address 192.168.72.50 and MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:25:04.718089   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHPort
	I0924 01:25:04.718339   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHKeyPath
	I0924 01:25:04.718404   69197 main.go:141] libmachine: (newest-cni-185978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:98:80", ip: ""} in network mk-newest-cni-185978: {Iface:virbr3 ExpiryTime:2024-09-24 02:24:54 +0000 UTC Type:0 Mac:52:54:00:fa:98:80 Iaid: IPaddr:192.168.72.50 Prefix:24 Hostname:newest-cni-185978 Clientid:01:52:54:00:fa:98:80}
	I0924 01:25:04.718421   69197 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined IP address 192.168.72.50 and MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:25:04.718451   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHPort
	I0924 01:25:04.718497   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHUsername
	I0924 01:25:04.718622   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHKeyPath
	I0924 01:25:04.718639   69197 sshutil.go:53] new ssh client: &{IP:192.168.72.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/newest-cni-185978/id_rsa Username:docker}
	I0924 01:25:04.718818   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetSSHUsername
	I0924 01:25:04.718963   69197 sshutil.go:53] new ssh client: &{IP:192.168.72.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/newest-cni-185978/id_rsa Username:docker}
	I0924 01:25:04.833838   69197 ssh_runner.go:195] Run: systemctl --version
	I0924 01:25:04.840016   69197 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 01:25:05.004225   69197 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 01:25:05.012194   69197 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 01:25:05.012276   69197 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 01:25:05.029027   69197 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 01:25:05.029064   69197 start.go:495] detecting cgroup driver to use...
	I0924 01:25:05.029148   69197 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 01:25:05.046231   69197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 01:25:05.067278   69197 docker.go:217] disabling cri-docker service (if available) ...
	I0924 01:25:05.067342   69197 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 01:25:05.087661   69197 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 01:25:05.107516   69197 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 01:25:05.250560   69197 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 01:25:05.389421   69197 docker.go:233] disabling docker service ...
	I0924 01:25:05.389511   69197 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 01:25:05.406887   69197 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 01:25:05.420564   69197 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 01:25:05.567444   69197 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 01:25:05.709014   69197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 01:25:05.725275   69197 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 01:25:05.745254   69197 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 01:25:05.745336   69197 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:25:05.756538   69197 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 01:25:05.756604   69197 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:25:05.767306   69197 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:25:05.777574   69197 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:25:05.787860   69197 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 01:25:05.798104   69197 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:25:05.809028   69197 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:25:05.827509   69197 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:25:05.837721   69197 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 01:25:05.851368   69197 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 01:25:05.851495   69197 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 01:25:05.868391   69197 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 01:25:05.879515   69197 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:25:06.009626   69197 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 01:25:06.109464   69197 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 01:25:06.109629   69197 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 01:25:06.114671   69197 start.go:563] Will wait 60s for crictl version
	I0924 01:25:06.114740   69197 ssh_runner.go:195] Run: which crictl
	I0924 01:25:06.118669   69197 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 01:25:06.157402   69197 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 01:25:06.157492   69197 ssh_runner.go:195] Run: crio --version
	I0924 01:25:06.184937   69197 ssh_runner.go:195] Run: crio --version
	I0924 01:25:06.220793   69197 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 01:25:06.222465   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetIP
	I0924 01:25:06.227753   69197 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:25:06.228356   69197 main.go:141] libmachine: (newest-cni-185978) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:98:80", ip: ""} in network mk-newest-cni-185978: {Iface:virbr3 ExpiryTime:2024-09-24 02:24:54 +0000 UTC Type:0 Mac:52:54:00:fa:98:80 Iaid: IPaddr:192.168.72.50 Prefix:24 Hostname:newest-cni-185978 Clientid:01:52:54:00:fa:98:80}
	I0924 01:25:06.228400   69197 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined IP address 192.168.72.50 and MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:25:06.228657   69197 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0924 01:25:06.233976   69197 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:25:06.250779   69197 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
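	(Editor's note) The run above (PID 69197, provisioning newest-cni-185978) prepares the guest for CRI-O: it disables the bridge/podman CNI configs, masks cri-docker and docker, points crictl at the CRI-O socket via /etc/crictl.yaml, pins the pause image and switches CRI-O to the cgroupfs driver with sed edits to 02-crio.conf, loads br_netfilter, enables IPv4 forwarding, and restarts crio. For reference, a minimal local sketch that replays the core of that command sequence (assuming root on a systemd host with CRI-O installed); minikube itself issues these over SSH via its ssh_runner, so this is illustrative only.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes one shell step and echoes its output.
	func run(cmd string) error {
		fmt.Println("+", cmd)
		out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
		if len(out) > 0 {
			fmt.Print(string(out))
		}
		return err
	}

	func main() {
		steps := []string{
			// Point crictl at the CRI-O socket.
			`printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' > /etc/crictl.yaml`,
			// Pin the pause image and switch CRI-O to the cgroupfs cgroup driver.
			`sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf`,
			`sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
			// Make bridged traffic visible to iptables and enable forwarding.
			`modprobe br_netfilter`,
			`echo 1 > /proc/sys/net/ipv4/ip_forward`,
			// Apply the new configuration.
			`systemctl daemon-reload`,
			`systemctl restart crio`,
		}
		for _, s := range steps {
			if err := run(s); err != nil {
				fmt.Println("step failed:", err)
				return
			}
		}
	}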
	I0924 01:25:04.711843   69667 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0924 01:25:04.712087   69667 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:25:04.712140   69667 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:25:04.733161   69667 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35791
	I0924 01:25:04.733744   69667 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:25:04.734466   69667 main.go:141] libmachine: Using API Version  1
	I0924 01:25:04.734492   69667 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:25:04.734948   69667 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:25:04.735180   69667 main.go:141] libmachine: (auto-447054) Calling .GetMachineName
	I0924 01:25:04.735361   69667 main.go:141] libmachine: (auto-447054) Calling .DriverName
	I0924 01:25:04.735527   69667 start.go:159] libmachine.API.Create for "auto-447054" (driver="kvm2")
	I0924 01:25:04.735565   69667 client.go:168] LocalClient.Create starting
	I0924 01:25:04.735607   69667 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem
	I0924 01:25:04.735656   69667 main.go:141] libmachine: Decoding PEM data...
	I0924 01:25:04.735681   69667 main.go:141] libmachine: Parsing certificate...
	I0924 01:25:04.735749   69667 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem
	I0924 01:25:04.735775   69667 main.go:141] libmachine: Decoding PEM data...
	I0924 01:25:04.735802   69667 main.go:141] libmachine: Parsing certificate...
	I0924 01:25:04.735824   69667 main.go:141] libmachine: Running pre-create checks...
	I0924 01:25:04.735841   69667 main.go:141] libmachine: (auto-447054) Calling .PreCreateCheck
	I0924 01:25:04.736314   69667 main.go:141] libmachine: (auto-447054) Calling .GetConfigRaw
	I0924 01:25:04.736811   69667 main.go:141] libmachine: Creating machine...
	I0924 01:25:04.736825   69667 main.go:141] libmachine: (auto-447054) Calling .Create
	I0924 01:25:04.736986   69667 main.go:141] libmachine: (auto-447054) Creating KVM machine...
	I0924 01:25:04.738486   69667 main.go:141] libmachine: (auto-447054) DBG | found existing default KVM network
	I0924 01:25:04.740235   69667 main.go:141] libmachine: (auto-447054) DBG | I0924 01:25:04.740012   69690 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:fa:99:1c} reservation:<nil>}
	I0924 01:25:04.742174   69667 main.go:141] libmachine: (auto-447054) DBG | I0924 01:25:04.742066   69690 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0000154d0}
	I0924 01:25:04.742210   69667 main.go:141] libmachine: (auto-447054) DBG | created network xml: 
	I0924 01:25:04.742231   69667 main.go:141] libmachine: (auto-447054) DBG | <network>
	I0924 01:25:04.742247   69667 main.go:141] libmachine: (auto-447054) DBG |   <name>mk-auto-447054</name>
	I0924 01:25:04.742260   69667 main.go:141] libmachine: (auto-447054) DBG |   <dns enable='no'/>
	I0924 01:25:04.742273   69667 main.go:141] libmachine: (auto-447054) DBG |   
	I0924 01:25:04.742286   69667 main.go:141] libmachine: (auto-447054) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0924 01:25:04.742297   69667 main.go:141] libmachine: (auto-447054) DBG |     <dhcp>
	I0924 01:25:04.742308   69667 main.go:141] libmachine: (auto-447054) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0924 01:25:04.742318   69667 main.go:141] libmachine: (auto-447054) DBG |     </dhcp>
	I0924 01:25:04.742323   69667 main.go:141] libmachine: (auto-447054) DBG |   </ip>
	I0924 01:25:04.742331   69667 main.go:141] libmachine: (auto-447054) DBG |   
	I0924 01:25:04.742336   69667 main.go:141] libmachine: (auto-447054) DBG | </network>
	I0924 01:25:04.742343   69667 main.go:141] libmachine: (auto-447054) DBG | 
	I0924 01:25:04.748169   69667 main.go:141] libmachine: (auto-447054) DBG | trying to create private KVM network mk-auto-447054 192.168.50.0/24...
	I0924 01:25:04.825566   69667 main.go:141] libmachine: (auto-447054) DBG | private KVM network mk-auto-447054 192.168.50.0/24 created
	I0924 01:25:04.825648   69667 main.go:141] libmachine: (auto-447054) Setting up store path in /home/jenkins/minikube-integration/19696-7623/.minikube/machines/auto-447054 ...
	I0924 01:25:04.825679   69667 main.go:141] libmachine: (auto-447054) Building disk image from file:///home/jenkins/minikube-integration/19696-7623/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0924 01:25:04.825717   69667 main.go:141] libmachine: (auto-447054) Downloading /home/jenkins/minikube-integration/19696-7623/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19696-7623/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0924 01:25:04.825758   69667 main.go:141] libmachine: (auto-447054) DBG | I0924 01:25:04.825513   69690 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19696-7623/.minikube
	I0924 01:25:05.102034   69667 main.go:141] libmachine: (auto-447054) DBG | I0924 01:25:05.101833   69690 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/auto-447054/id_rsa...
	I0924 01:25:05.218969   69667 main.go:141] libmachine: (auto-447054) DBG | I0924 01:25:05.218822   69690 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/auto-447054/auto-447054.rawdisk...
	I0924 01:25:05.219000   69667 main.go:141] libmachine: (auto-447054) DBG | Writing magic tar header
	I0924 01:25:05.219010   69667 main.go:141] libmachine: (auto-447054) DBG | Writing SSH key tar header
	I0924 01:25:05.219017   69667 main.go:141] libmachine: (auto-447054) DBG | I0924 01:25:05.218960   69690 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19696-7623/.minikube/machines/auto-447054 ...
	I0924 01:25:05.219143   69667 main.go:141] libmachine: (auto-447054) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/auto-447054
	I0924 01:25:05.219164   69667 main.go:141] libmachine: (auto-447054) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623/.minikube/machines
	I0924 01:25:05.219176   69667 main.go:141] libmachine: (auto-447054) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623/.minikube/machines/auto-447054 (perms=drwx------)
	I0924 01:25:05.219187   69667 main.go:141] libmachine: (auto-447054) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623/.minikube
	I0924 01:25:05.219207   69667 main.go:141] libmachine: (auto-447054) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623
	I0924 01:25:05.219220   69667 main.go:141] libmachine: (auto-447054) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0924 01:25:05.219234   69667 main.go:141] libmachine: (auto-447054) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623/.minikube/machines (perms=drwxr-xr-x)
	I0924 01:25:05.219251   69667 main.go:141] libmachine: (auto-447054) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623/.minikube (perms=drwxr-xr-x)
	I0924 01:25:05.219268   69667 main.go:141] libmachine: (auto-447054) DBG | Checking permissions on dir: /home/jenkins
	I0924 01:25:05.219282   69667 main.go:141] libmachine: (auto-447054) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623 (perms=drwxrwxr-x)
	I0924 01:25:05.219305   69667 main.go:141] libmachine: (auto-447054) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0924 01:25:05.219317   69667 main.go:141] libmachine: (auto-447054) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0924 01:25:05.219330   69667 main.go:141] libmachine: (auto-447054) Creating domain...
	I0924 01:25:05.219344   69667 main.go:141] libmachine: (auto-447054) DBG | Checking permissions on dir: /home
	I0924 01:25:05.219355   69667 main.go:141] libmachine: (auto-447054) DBG | Skipping /home - not owner
	I0924 01:25:05.220649   69667 main.go:141] libmachine: (auto-447054) define libvirt domain using xml: 
	I0924 01:25:05.220679   69667 main.go:141] libmachine: (auto-447054) <domain type='kvm'>
	I0924 01:25:05.220689   69667 main.go:141] libmachine: (auto-447054)   <name>auto-447054</name>
	I0924 01:25:05.220698   69667 main.go:141] libmachine: (auto-447054)   <memory unit='MiB'>3072</memory>
	I0924 01:25:05.220706   69667 main.go:141] libmachine: (auto-447054)   <vcpu>2</vcpu>
	I0924 01:25:05.220716   69667 main.go:141] libmachine: (auto-447054)   <features>
	I0924 01:25:05.220724   69667 main.go:141] libmachine: (auto-447054)     <acpi/>
	I0924 01:25:05.220734   69667 main.go:141] libmachine: (auto-447054)     <apic/>
	I0924 01:25:05.220787   69667 main.go:141] libmachine: (auto-447054)     <pae/>
	I0924 01:25:05.220808   69667 main.go:141] libmachine: (auto-447054)     
	I0924 01:25:05.220819   69667 main.go:141] libmachine: (auto-447054)   </features>
	I0924 01:25:05.220829   69667 main.go:141] libmachine: (auto-447054)   <cpu mode='host-passthrough'>
	I0924 01:25:05.220853   69667 main.go:141] libmachine: (auto-447054)   
	I0924 01:25:05.220872   69667 main.go:141] libmachine: (auto-447054)   </cpu>
	I0924 01:25:05.220883   69667 main.go:141] libmachine: (auto-447054)   <os>
	I0924 01:25:05.220891   69667 main.go:141] libmachine: (auto-447054)     <type>hvm</type>
	I0924 01:25:05.220912   69667 main.go:141] libmachine: (auto-447054)     <boot dev='cdrom'/>
	I0924 01:25:05.220921   69667 main.go:141] libmachine: (auto-447054)     <boot dev='hd'/>
	I0924 01:25:05.220930   69667 main.go:141] libmachine: (auto-447054)     <bootmenu enable='no'/>
	I0924 01:25:05.220939   69667 main.go:141] libmachine: (auto-447054)   </os>
	I0924 01:25:05.220947   69667 main.go:141] libmachine: (auto-447054)   <devices>
	I0924 01:25:05.220964   69667 main.go:141] libmachine: (auto-447054)     <disk type='file' device='cdrom'>
	I0924 01:25:05.220979   69667 main.go:141] libmachine: (auto-447054)       <source file='/home/jenkins/minikube-integration/19696-7623/.minikube/machines/auto-447054/boot2docker.iso'/>
	I0924 01:25:05.220987   69667 main.go:141] libmachine: (auto-447054)       <target dev='hdc' bus='scsi'/>
	I0924 01:25:05.221016   69667 main.go:141] libmachine: (auto-447054)       <readonly/>
	I0924 01:25:05.221033   69667 main.go:141] libmachine: (auto-447054)     </disk>
	I0924 01:25:05.221044   69667 main.go:141] libmachine: (auto-447054)     <disk type='file' device='disk'>
	I0924 01:25:05.221065   69667 main.go:141] libmachine: (auto-447054)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0924 01:25:05.221089   69667 main.go:141] libmachine: (auto-447054)       <source file='/home/jenkins/minikube-integration/19696-7623/.minikube/machines/auto-447054/auto-447054.rawdisk'/>
	I0924 01:25:05.221102   69667 main.go:141] libmachine: (auto-447054)       <target dev='hda' bus='virtio'/>
	I0924 01:25:05.221112   69667 main.go:141] libmachine: (auto-447054)     </disk>
	I0924 01:25:05.221120   69667 main.go:141] libmachine: (auto-447054)     <interface type='network'>
	I0924 01:25:05.221141   69667 main.go:141] libmachine: (auto-447054)       <source network='mk-auto-447054'/>
	I0924 01:25:05.221152   69667 main.go:141] libmachine: (auto-447054)       <model type='virtio'/>
	I0924 01:25:05.221159   69667 main.go:141] libmachine: (auto-447054)     </interface>
	I0924 01:25:05.221168   69667 main.go:141] libmachine: (auto-447054)     <interface type='network'>
	I0924 01:25:05.221176   69667 main.go:141] libmachine: (auto-447054)       <source network='default'/>
	I0924 01:25:05.221190   69667 main.go:141] libmachine: (auto-447054)       <model type='virtio'/>
	I0924 01:25:05.221199   69667 main.go:141] libmachine: (auto-447054)     </interface>
	I0924 01:25:05.221206   69667 main.go:141] libmachine: (auto-447054)     <serial type='pty'>
	I0924 01:25:05.221216   69667 main.go:141] libmachine: (auto-447054)       <target port='0'/>
	I0924 01:25:05.221224   69667 main.go:141] libmachine: (auto-447054)     </serial>
	I0924 01:25:05.221232   69667 main.go:141] libmachine: (auto-447054)     <console type='pty'>
	I0924 01:25:05.221241   69667 main.go:141] libmachine: (auto-447054)       <target type='serial' port='0'/>
	I0924 01:25:05.221249   69667 main.go:141] libmachine: (auto-447054)     </console>
	I0924 01:25:05.221264   69667 main.go:141] libmachine: (auto-447054)     <rng model='virtio'>
	I0924 01:25:05.221273   69667 main.go:141] libmachine: (auto-447054)       <backend model='random'>/dev/random</backend>
	I0924 01:25:05.221279   69667 main.go:141] libmachine: (auto-447054)     </rng>
	I0924 01:25:05.221286   69667 main.go:141] libmachine: (auto-447054)     
	I0924 01:25:05.221298   69667 main.go:141] libmachine: (auto-447054)     
	I0924 01:25:05.221308   69667 main.go:141] libmachine: (auto-447054)   </devices>
	I0924 01:25:05.221315   69667 main.go:141] libmachine: (auto-447054) </domain>
	I0924 01:25:05.221327   69667 main.go:141] libmachine: (auto-447054) 
	I0924 01:25:05.225705   69667 main.go:141] libmachine: (auto-447054) DBG | domain auto-447054 has defined MAC address 52:54:00:1b:cb:fe in network default
	I0924 01:25:05.226283   69667 main.go:141] libmachine: (auto-447054) Ensuring networks are active...
	I0924 01:25:05.226322   69667 main.go:141] libmachine: (auto-447054) DBG | domain auto-447054 has defined MAC address 52:54:00:38:9e:05 in network mk-auto-447054
	I0924 01:25:05.226992   69667 main.go:141] libmachine: (auto-447054) Ensuring network default is active
	I0924 01:25:05.227432   69667 main.go:141] libmachine: (auto-447054) Ensuring network mk-auto-447054 is active
	I0924 01:25:05.228217   69667 main.go:141] libmachine: (auto-447054) Getting domain xml...
	I0924 01:25:05.229215   69667 main.go:141] libmachine: (auto-447054) Creating domain...
	I0924 01:25:06.625680   69667 main.go:141] libmachine: (auto-447054) Waiting to get IP...
	I0924 01:25:06.626704   69667 main.go:141] libmachine: (auto-447054) DBG | domain auto-447054 has defined MAC address 52:54:00:38:9e:05 in network mk-auto-447054
	I0924 01:25:06.627104   69667 main.go:141] libmachine: (auto-447054) DBG | unable to find current IP address of domain auto-447054 in network mk-auto-447054
	I0924 01:25:06.627156   69667 main.go:141] libmachine: (auto-447054) DBG | I0924 01:25:06.627092   69690 retry.go:31] will retry after 260.065482ms: waiting for machine to come up
	I0924 01:25:06.888777   69667 main.go:141] libmachine: (auto-447054) DBG | domain auto-447054 has defined MAC address 52:54:00:38:9e:05 in network mk-auto-447054
	I0924 01:25:06.889460   69667 main.go:141] libmachine: (auto-447054) DBG | unable to find current IP address of domain auto-447054 in network mk-auto-447054
	I0924 01:25:06.889486   69667 main.go:141] libmachine: (auto-447054) DBG | I0924 01:25:06.889396   69690 retry.go:31] will retry after 294.799406ms: waiting for machine to come up
	I0924 01:25:07.186025   69667 main.go:141] libmachine: (auto-447054) DBG | domain auto-447054 has defined MAC address 52:54:00:38:9e:05 in network mk-auto-447054
	I0924 01:25:07.186747   69667 main.go:141] libmachine: (auto-447054) DBG | unable to find current IP address of domain auto-447054 in network mk-auto-447054
	I0924 01:25:07.186769   69667 main.go:141] libmachine: (auto-447054) DBG | I0924 01:25:07.186658   69690 retry.go:31] will retry after 456.072622ms: waiting for machine to come up
	I0924 01:25:07.644224   69667 main.go:141] libmachine: (auto-447054) DBG | domain auto-447054 has defined MAC address 52:54:00:38:9e:05 in network mk-auto-447054
	I0924 01:25:07.644669   69667 main.go:141] libmachine: (auto-447054) DBG | unable to find current IP address of domain auto-447054 in network mk-auto-447054
	I0924 01:25:07.644706   69667 main.go:141] libmachine: (auto-447054) DBG | I0924 01:25:07.644632   69690 retry.go:31] will retry after 508.958538ms: waiting for machine to come up
	I0924 01:25:08.155211   69667 main.go:141] libmachine: (auto-447054) DBG | domain auto-447054 has defined MAC address 52:54:00:38:9e:05 in network mk-auto-447054
	I0924 01:25:08.155762   69667 main.go:141] libmachine: (auto-447054) DBG | unable to find current IP address of domain auto-447054 in network mk-auto-447054
	I0924 01:25:08.155806   69667 main.go:141] libmachine: (auto-447054) DBG | I0924 01:25:08.155717   69690 retry.go:31] will retry after 607.794393ms: waiting for machine to come up
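	(Editor's note) The second interleaved run (PID 69667, the kvm2 driver creating auto-447054) defines the libvirt network and domain shown above and then enters a poll loop, repeatedly asking libvirt for the domain's DHCP lease and backing off with growing, jittered delays ("will retry after 260ms / 294ms / 456ms / ..."). Below is a minimal Go sketch of that polling pattern; the lookupIP helper (parsing `virsh net-dhcp-leases`) and the specific delays are assumptions for illustration, not the driver's actual retry code.

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"os/exec"
		"strings"
		"time"
	)

	// lookupIP scans `virsh net-dhcp-leases <network>` for a lease matching the
	// domain's MAC address and returns its IP (hypothetical helper).
	func lookupIP(network, mac string) (string, error) {
		out, err := exec.Command("virsh", "net-dhcp-leases", network).Output()
		if err != nil {
			return "", err
		}
		for _, line := range strings.Split(string(out), "\n") {
			if strings.Contains(line, mac) {
				fields := strings.Fields(line)
				if len(fields) >= 5 {
					// Column 5 is the address, e.g. "192.168.50.10/24".
					return strings.Split(fields[4], "/")[0], nil
				}
			}
		}
		return "", errors.New("no lease yet")
	}

	// waitForIP polls until a lease appears, sleeping for a jittered, slowly
	// growing delay between attempts, mirroring the log's retry cadence.
	func waitForIP(network, mac string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 250 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(network, mac); err == nil {
				return ip, nil
			}
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("no IP yet, retrying in %v\n", sleep)
			time.Sleep(sleep)
			delay += 100 * time.Millisecond
		}
		return "", fmt.Errorf("timed out waiting for IP of %s on %s", mac, network)
	}

	func main() {
		ip, err := waitForIP("mk-auto-447054", "52:54:00:38:9e:05", 3*time.Minute)
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("machine IP:", ip)
	}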
	
	
	==> CRI-O <==
	Sep 24 01:25:10 embed-certs-650507 crio[708]: time="2024-09-24 01:25:10.013148170Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:ce8569c1dfccb384018a2aee201b13d1805ba935f207727f8a2651b9cc7904c0,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-lbm9h,Uid:fa504c09-2e16-4a5f-b4b3-a47f0733333d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727140141635170574,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-lbm9h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa504c09-2e16-4a5f-b4b3-a47f0733333d,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-24T01:09:01.025028028Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b82ddf390aae73e4e41ff9954830ddc6ab5bb8978df5b56ce6763522b64e1814,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:364a4d4a-7316-48d0-a3e1-1dedff564dfb,N
amespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727140141444497738,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 364a4d4a-7316-48d0-a3e1-1dedff564dfb,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"vol
umes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-24T01:09:01.137742639Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0fb08ba989a064c0c9ccb50e9dddd093aba3b996e61c42d48e65d866571b7c1f,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-7295k,Uid:3261d435-8cb5-4712-8459-26ba766e88e0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727140139697880170,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-7295k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3261d435-8cb5-4712-8459-26ba766e88e0,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-24T01:08:59.381253489Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f42ebfa5722079c7c60a97ac208b5794d36232c216e36c2854884643729682b9,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-r6tcj,Uid:df80e9b5-4b43-4b8f
-992e-8813ceca39fe,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727140139670464604,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-r6tcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df80e9b5-4b43-4b8f-992e-8813ceca39fe,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-24T01:08:59.352518577Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b94c7b554387c34ec87cdd69492aad2c021a76ac84aaa2a8764eeb1a80dd7032,Metadata:&PodSandboxMetadata{Name:kube-proxy-mwtkg,Uid:6a893121-8161-4fbc-bb59-1e08483e82b8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727140139286349110,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-mwtkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a893121-8161-4fbc-bb59-1e08483e82b8,k8s-app: kube-proxy,pod-tem
plate-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-24T01:08:58.375895741Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b1ca50f340010d99d941b16752b9715e43cdfbf119e472028cff0737852a66f2,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-650507,Uid:74a5b368bb311b8dfe8645c792d9f518,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727140128524895838,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-650507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74a5b368bb311b8dfe8645c792d9f518,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.104:8443,kubernetes.io/config.hash: 74a5b368bb311b8dfe8645c792d9f518,kubernetes.io/config.seen: 2024-09-24T01:08:48.065399653Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ca3a24420e277aa560413f32b1b0
d32c91c5a4428c0931384c214e40259d996a,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-650507,Uid:0345d91892f0fc6339534c66f70a20a5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727140128523803688,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-650507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0345d91892f0fc6339534c66f70a20a5,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.104:2379,kubernetes.io/config.hash: 0345d91892f0fc6339534c66f70a20a5,kubernetes.io/config.seen: 2024-09-24T01:08:48.065395237Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3d8b126216a2d0f7c74e64ffd07546d29a9065a6c4e90eab42212b5757b6e78c,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-650507,Uid:9a853459933d33d31d91a6ce8922f864,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727140128522254394,Labels:m
ap[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-650507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a853459933d33d31d91a6ce8922f864,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 9a853459933d33d31d91a6ce8922f864,kubernetes.io/config.seen: 2024-09-24T01:08:48.065401302Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:23cd8f9efeccb8376763e45065124c44dccbf3e9883a12e8fd4f1df69b89e65b,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-650507,Uid:37bc3af1aeab493bbfdd6a891ec43ade,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727140128516151797,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-650507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37bc3af1aeab493bbfdd6a891ec43ade,tier: control-plane,},Annotations:map[str
ing]string{kubernetes.io/config.hash: 37bc3af1aeab493bbfdd6a891ec43ade,kubernetes.io/config.seen: 2024-09-24T01:08:48.065402522Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7e7cfa3bae812cf57caa43231975757927fcfa0cf01c076f1f45d9d0898ba881,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-650507,Uid:74a5b368bb311b8dfe8645c792d9f518,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727139843907438368,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-650507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74a5b368bb311b8dfe8645c792d9f518,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.104:8443,kubernetes.io/config.hash: 74a5b368bb311b8dfe8645c792d9f518,kubernetes.io/config.seen: 2024-09-24T01:04:03.412852031Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-coll
ector/interceptors.go:74" id=319bec2c-ea83-4f00-bf51-1ea837cb7834 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 24 01:25:10 embed-certs-650507 crio[708]: time="2024-09-24 01:25:10.014415670Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3b1a6c05-a8b7-4f22-a20e-085cc5081238 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:25:10 embed-certs-650507 crio[708]: time="2024-09-24 01:25:10.014492379Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3b1a6c05-a8b7-4f22-a20e-085cc5081238 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:25:10 embed-certs-650507 crio[708]: time="2024-09-24 01:25:10.015096285Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:893850a1eae8ca98a68d0ba4fe2186a6866b37671253fe43630c39f54e1f5ab1,PodSandboxId:b82ddf390aae73e4e41ff9954830ddc6ab5bb8978df5b56ce6763522b64e1814,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727140141549982433,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 364a4d4a-7316-48d0-a3e1-1dedff564dfb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:074b3dcea6a1b842287d4ea05df3b6a34ae74b62d49fb45c4f756af35a190e30,PodSandboxId:0fb08ba989a064c0c9ccb50e9dddd093aba3b996e61c42d48e65d866571b7c1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727140140489200348,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7295k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3261d435-8cb5-4712-8459-26ba766e88e0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2b8d78fea47d7db3004dd9e26fa2cea4c47a902a9cee8843a5025632022965c,PodSandboxId:f42ebfa5722079c7c60a97ac208b5794d36232c216e36c2854884643729682b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727140140476657963,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-r6tcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d
f80e9b5-4b43-4b8f-992e-8813ceca39fe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eae4121650134f3b46606d6217bb0062ac5d419292637807cdf764cf3fa012d0,PodSandboxId:b94c7b554387c34ec87cdd69492aad2c021a76ac84aaa2a8764eeb1a80dd7032,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1727140139731934145,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mwtkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a893121-8161-4fbc-bb59-1e08483e82b8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceccfc5326d1fe281b19d10f3a876c5e1ba33e9f3ed5ee5a270e1920e8a64db5,PodSandboxId:23cd8f9efeccb8376763e45065124c44dccbf3e9883a12e8fd4f1df69b89e65b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727140128712219871,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-650507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37bc3af1aeab493bbfdd6a891ec43ade,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:890822add546c0fd43b338fd565507418185f3e807ae432e09dce95b4cca1a91,PodSandboxId:b1ca50f340010d99d941b16752b9715e43cdfbf119e472028cff0737852a66f2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727140128752143250,Labels:map[string]st
ring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-650507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74a5b368bb311b8dfe8645c792d9f518,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4835c3bf7d1f3b66b01e41b51bb9e6385ab3c81209ab7dd51a8872f040e2c1ef,PodSandboxId:ca3a24420e277aa560413f32b1b0d32c91c5a4428c0931384c214e40259d996a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727140128715072344,Labels:map[string]string{io.kubernetes.con
tainer.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-650507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0345d91892f0fc6339534c66f70a20a5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:357d70ef1ae9b34d495f10b22cb93900bcc8c88e39fd42dd8de59ee644b5b3b9,PodSandboxId:3d8b126216a2d0f7c74e64ffd07546d29a9065a6c4e90eab42212b5757b6e78c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727140128692086373,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-650507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a853459933d33d31d91a6ce8922f864,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd8c1d0aaf17e67e6d1d2e104a7557c447d1d6304e0361a1f92190efe6cb6018,PodSandboxId:7e7cfa3bae812cf57caa43231975757927fcfa0cf01c076f1f45d9d0898ba881,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727139844770755880,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-650507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74a5b368bb311b8dfe8645c792d9f518,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3b1a6c05-a8b7-4f22-a20e-085cc5081238 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:25:10 embed-certs-650507 crio[708]: time="2024-09-24 01:25:10.054443287Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5a08b3a0-2dda-4dac-81cf-92385cd54111 name=/runtime.v1.RuntimeService/Version
	Sep 24 01:25:10 embed-certs-650507 crio[708]: time="2024-09-24 01:25:10.054518137Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5a08b3a0-2dda-4dac-81cf-92385cd54111 name=/runtime.v1.RuntimeService/Version
	Sep 24 01:25:10 embed-certs-650507 crio[708]: time="2024-09-24 01:25:10.055750307Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4df9a201-61c0-484c-964f-3d7855c84cce name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:25:10 embed-certs-650507 crio[708]: time="2024-09-24 01:25:10.056122115Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727141110056098702,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4df9a201-61c0-484c-964f-3d7855c84cce name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:25:10 embed-certs-650507 crio[708]: time="2024-09-24 01:25:10.057149642Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a2f47dea-b906-49f2-8d87-b4fdb7c5009c name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:25:10 embed-certs-650507 crio[708]: time="2024-09-24 01:25:10.057215487Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a2f47dea-b906-49f2-8d87-b4fdb7c5009c name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:25:10 embed-certs-650507 crio[708]: time="2024-09-24 01:25:10.057517675Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:893850a1eae8ca98a68d0ba4fe2186a6866b37671253fe43630c39f54e1f5ab1,PodSandboxId:b82ddf390aae73e4e41ff9954830ddc6ab5bb8978df5b56ce6763522b64e1814,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727140141549982433,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 364a4d4a-7316-48d0-a3e1-1dedff564dfb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:074b3dcea6a1b842287d4ea05df3b6a34ae74b62d49fb45c4f756af35a190e30,PodSandboxId:0fb08ba989a064c0c9ccb50e9dddd093aba3b996e61c42d48e65d866571b7c1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727140140489200348,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7295k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3261d435-8cb5-4712-8459-26ba766e88e0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2b8d78fea47d7db3004dd9e26fa2cea4c47a902a9cee8843a5025632022965c,PodSandboxId:f42ebfa5722079c7c60a97ac208b5794d36232c216e36c2854884643729682b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727140140476657963,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-r6tcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d
f80e9b5-4b43-4b8f-992e-8813ceca39fe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eae4121650134f3b46606d6217bb0062ac5d419292637807cdf764cf3fa012d0,PodSandboxId:b94c7b554387c34ec87cdd69492aad2c021a76ac84aaa2a8764eeb1a80dd7032,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1727140139731934145,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mwtkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a893121-8161-4fbc-bb59-1e08483e82b8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceccfc5326d1fe281b19d10f3a876c5e1ba33e9f3ed5ee5a270e1920e8a64db5,PodSandboxId:23cd8f9efeccb8376763e45065124c44dccbf3e9883a12e8fd4f1df69b89e65b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727140128712219871,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-650507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37bc3af1aeab493bbfdd6a891ec43ade,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:890822add546c0fd43b338fd565507418185f3e807ae432e09dce95b4cca1a91,PodSandboxId:b1ca50f340010d99d941b16752b9715e43cdfbf119e472028cff0737852a66f2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727140128752143250,Labels:map[string]st
ring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-650507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74a5b368bb311b8dfe8645c792d9f518,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4835c3bf7d1f3b66b01e41b51bb9e6385ab3c81209ab7dd51a8872f040e2c1ef,PodSandboxId:ca3a24420e277aa560413f32b1b0d32c91c5a4428c0931384c214e40259d996a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727140128715072344,Labels:map[string]string{io.kubernetes.con
tainer.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-650507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0345d91892f0fc6339534c66f70a20a5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:357d70ef1ae9b34d495f10b22cb93900bcc8c88e39fd42dd8de59ee644b5b3b9,PodSandboxId:3d8b126216a2d0f7c74e64ffd07546d29a9065a6c4e90eab42212b5757b6e78c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727140128692086373,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-650507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a853459933d33d31d91a6ce8922f864,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd8c1d0aaf17e67e6d1d2e104a7557c447d1d6304e0361a1f92190efe6cb6018,PodSandboxId:7e7cfa3bae812cf57caa43231975757927fcfa0cf01c076f1f45d9d0898ba881,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727139844770755880,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-650507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74a5b368bb311b8dfe8645c792d9f518,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a2f47dea-b906-49f2-8d87-b4fdb7c5009c name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:25:10 embed-certs-650507 crio[708]: time="2024-09-24 01:25:10.094936971Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=46edb3af-dec5-4966-a281-182d617d2298 name=/runtime.v1.RuntimeService/Version
	Sep 24 01:25:10 embed-certs-650507 crio[708]: time="2024-09-24 01:25:10.095006447Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=46edb3af-dec5-4966-a281-182d617d2298 name=/runtime.v1.RuntimeService/Version
	Sep 24 01:25:10 embed-certs-650507 crio[708]: time="2024-09-24 01:25:10.096265450Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b88fc019-996b-48c5-899a-764eea9ef326 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:25:10 embed-certs-650507 crio[708]: time="2024-09-24 01:25:10.096784118Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727141110096759283,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b88fc019-996b-48c5-899a-764eea9ef326 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:25:10 embed-certs-650507 crio[708]: time="2024-09-24 01:25:10.097519328Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c0578bab-f1d4-44bd-93db-ef991d9e2739 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:25:10 embed-certs-650507 crio[708]: time="2024-09-24 01:25:10.097606074Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c0578bab-f1d4-44bd-93db-ef991d9e2739 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:25:10 embed-certs-650507 crio[708]: time="2024-09-24 01:25:10.097863865Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:893850a1eae8ca98a68d0ba4fe2186a6866b37671253fe43630c39f54e1f5ab1,PodSandboxId:b82ddf390aae73e4e41ff9954830ddc6ab5bb8978df5b56ce6763522b64e1814,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727140141549982433,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 364a4d4a-7316-48d0-a3e1-1dedff564dfb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:074b3dcea6a1b842287d4ea05df3b6a34ae74b62d49fb45c4f756af35a190e30,PodSandboxId:0fb08ba989a064c0c9ccb50e9dddd093aba3b996e61c42d48e65d866571b7c1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727140140489200348,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7295k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3261d435-8cb5-4712-8459-26ba766e88e0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2b8d78fea47d7db3004dd9e26fa2cea4c47a902a9cee8843a5025632022965c,PodSandboxId:f42ebfa5722079c7c60a97ac208b5794d36232c216e36c2854884643729682b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727140140476657963,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-r6tcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d
f80e9b5-4b43-4b8f-992e-8813ceca39fe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eae4121650134f3b46606d6217bb0062ac5d419292637807cdf764cf3fa012d0,PodSandboxId:b94c7b554387c34ec87cdd69492aad2c021a76ac84aaa2a8764eeb1a80dd7032,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1727140139731934145,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mwtkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a893121-8161-4fbc-bb59-1e08483e82b8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceccfc5326d1fe281b19d10f3a876c5e1ba33e9f3ed5ee5a270e1920e8a64db5,PodSandboxId:23cd8f9efeccb8376763e45065124c44dccbf3e9883a12e8fd4f1df69b89e65b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727140128712219871,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-650507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37bc3af1aeab493bbfdd6a891ec43ade,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:890822add546c0fd43b338fd565507418185f3e807ae432e09dce95b4cca1a91,PodSandboxId:b1ca50f340010d99d941b16752b9715e43cdfbf119e472028cff0737852a66f2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727140128752143250,Labels:map[string]st
ring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-650507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74a5b368bb311b8dfe8645c792d9f518,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4835c3bf7d1f3b66b01e41b51bb9e6385ab3c81209ab7dd51a8872f040e2c1ef,PodSandboxId:ca3a24420e277aa560413f32b1b0d32c91c5a4428c0931384c214e40259d996a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727140128715072344,Labels:map[string]string{io.kubernetes.con
tainer.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-650507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0345d91892f0fc6339534c66f70a20a5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:357d70ef1ae9b34d495f10b22cb93900bcc8c88e39fd42dd8de59ee644b5b3b9,PodSandboxId:3d8b126216a2d0f7c74e64ffd07546d29a9065a6c4e90eab42212b5757b6e78c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727140128692086373,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-650507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a853459933d33d31d91a6ce8922f864,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd8c1d0aaf17e67e6d1d2e104a7557c447d1d6304e0361a1f92190efe6cb6018,PodSandboxId:7e7cfa3bae812cf57caa43231975757927fcfa0cf01c076f1f45d9d0898ba881,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727139844770755880,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-650507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74a5b368bb311b8dfe8645c792d9f518,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c0578bab-f1d4-44bd-93db-ef991d9e2739 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:25:10 embed-certs-650507 crio[708]: time="2024-09-24 01:25:10.129497235Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a5b0e02a-440d-43bc-a904-b14a919240b2 name=/runtime.v1.RuntimeService/Version
	Sep 24 01:25:10 embed-certs-650507 crio[708]: time="2024-09-24 01:25:10.129620433Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a5b0e02a-440d-43bc-a904-b14a919240b2 name=/runtime.v1.RuntimeService/Version
	Sep 24 01:25:10 embed-certs-650507 crio[708]: time="2024-09-24 01:25:10.130777806Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=42b5889d-93fa-4ca2-89c2-8e9e67810526 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:25:10 embed-certs-650507 crio[708]: time="2024-09-24 01:25:10.131142115Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727141110131121434,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=42b5889d-93fa-4ca2-89c2-8e9e67810526 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:25:10 embed-certs-650507 crio[708]: time="2024-09-24 01:25:10.131723808Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=967b3c90-98dd-4bb8-9a73-2c8fbe388935 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:25:10 embed-certs-650507 crio[708]: time="2024-09-24 01:25:10.131780383Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=967b3c90-98dd-4bb8-9a73-2c8fbe388935 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:25:10 embed-certs-650507 crio[708]: time="2024-09-24 01:25:10.131966978Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:893850a1eae8ca98a68d0ba4fe2186a6866b37671253fe43630c39f54e1f5ab1,PodSandboxId:b82ddf390aae73e4e41ff9954830ddc6ab5bb8978df5b56ce6763522b64e1814,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727140141549982433,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 364a4d4a-7316-48d0-a3e1-1dedff564dfb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:074b3dcea6a1b842287d4ea05df3b6a34ae74b62d49fb45c4f756af35a190e30,PodSandboxId:0fb08ba989a064c0c9ccb50e9dddd093aba3b996e61c42d48e65d866571b7c1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727140140489200348,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7295k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3261d435-8cb5-4712-8459-26ba766e88e0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2b8d78fea47d7db3004dd9e26fa2cea4c47a902a9cee8843a5025632022965c,PodSandboxId:f42ebfa5722079c7c60a97ac208b5794d36232c216e36c2854884643729682b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727140140476657963,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-r6tcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d
f80e9b5-4b43-4b8f-992e-8813ceca39fe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eae4121650134f3b46606d6217bb0062ac5d419292637807cdf764cf3fa012d0,PodSandboxId:b94c7b554387c34ec87cdd69492aad2c021a76ac84aaa2a8764eeb1a80dd7032,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1727140139731934145,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mwtkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a893121-8161-4fbc-bb59-1e08483e82b8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceccfc5326d1fe281b19d10f3a876c5e1ba33e9f3ed5ee5a270e1920e8a64db5,PodSandboxId:23cd8f9efeccb8376763e45065124c44dccbf3e9883a12e8fd4f1df69b89e65b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727140128712219871,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-650507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37bc3af1aeab493bbfdd6a891ec43ade,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:890822add546c0fd43b338fd565507418185f3e807ae432e09dce95b4cca1a91,PodSandboxId:b1ca50f340010d99d941b16752b9715e43cdfbf119e472028cff0737852a66f2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727140128752143250,Labels:map[string]st
ring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-650507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74a5b368bb311b8dfe8645c792d9f518,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4835c3bf7d1f3b66b01e41b51bb9e6385ab3c81209ab7dd51a8872f040e2c1ef,PodSandboxId:ca3a24420e277aa560413f32b1b0d32c91c5a4428c0931384c214e40259d996a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727140128715072344,Labels:map[string]string{io.kubernetes.con
tainer.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-650507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0345d91892f0fc6339534c66f70a20a5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:357d70ef1ae9b34d495f10b22cb93900bcc8c88e39fd42dd8de59ee644b5b3b9,PodSandboxId:3d8b126216a2d0f7c74e64ffd07546d29a9065a6c4e90eab42212b5757b6e78c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727140128692086373,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-650507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a853459933d33d31d91a6ce8922f864,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd8c1d0aaf17e67e6d1d2e104a7557c447d1d6304e0361a1f92190efe6cb6018,PodSandboxId:7e7cfa3bae812cf57caa43231975757927fcfa0cf01c076f1f45d9d0898ba881,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727139844770755880,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-650507,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74a5b368bb311b8dfe8645c792d9f518,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=967b3c90-98dd-4bb8-9a73-2c8fbe388935 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	893850a1eae8c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   b82ddf390aae7       storage-provisioner
	074b3dcea6a1b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   16 minutes ago      Running             coredns                   0                   0fb08ba989a06       coredns-7c65d6cfc9-7295k
	a2b8d78fea47d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   16 minutes ago      Running             coredns                   0                   f42ebfa572207       coredns-7c65d6cfc9-r6tcj
	eae4121650134       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   16 minutes ago      Running             kube-proxy                0                   b94c7b554387c       kube-proxy-mwtkg
	890822add546c       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   16 minutes ago      Running             kube-apiserver            2                   b1ca50f340010       kube-apiserver-embed-certs-650507
	4835c3bf7d1f3       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   16 minutes ago      Running             etcd                      2                   ca3a24420e277       etcd-embed-certs-650507
	ceccfc5326d1f       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   16 minutes ago      Running             kube-scheduler            2                   23cd8f9efeccb       kube-scheduler-embed-certs-650507
	357d70ef1ae9b       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   16 minutes ago      Running             kube-controller-manager   2                   3d8b126216a2d       kube-controller-manager-embed-certs-650507
	bd8c1d0aaf17e       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   21 minutes ago      Exited              kube-apiserver            1                   7e7cfa3bae812       kube-apiserver-embed-certs-650507
	
	
	==> coredns [074b3dcea6a1b842287d4ea05df3b6a34ae74b62d49fb45c4f756af35a190e30] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [a2b8d78fea47d7db3004dd9e26fa2cea4c47a902a9cee8843a5025632022965c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-650507
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-650507
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c
	                    minikube.k8s.io/name=embed-certs-650507
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_24T01_08_55_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 01:08:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-650507
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 01:25:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 01:24:24 +0000   Tue, 24 Sep 2024 01:08:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 01:24:24 +0000   Tue, 24 Sep 2024 01:08:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 01:24:24 +0000   Tue, 24 Sep 2024 01:08:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 01:24:24 +0000   Tue, 24 Sep 2024 01:08:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.104
	  Hostname:    embed-certs-650507
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 44e7d2d592684cc6a2e6581d52cb1b33
	  System UUID:                44e7d2d5-9268-4cc6-a2e6-581d52cb1b33
	  Boot ID:                    7e039e3c-94a1-4e52-a044-820a2cf693d4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-7295k                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7c65d6cfc9-r6tcj                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-embed-certs-650507                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-embed-certs-650507             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-embed-certs-650507    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-mwtkg                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-embed-certs-650507             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-6867b74b74-lbm9h               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node embed-certs-650507 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node embed-certs-650507 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node embed-certs-650507 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m                kubelet          Node embed-certs-650507 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                kubelet          Node embed-certs-650507 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m                kubelet          Node embed-certs-650507 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m                node-controller  Node embed-certs-650507 event: Registered Node embed-certs-650507 in Controller
	
	
	==> dmesg <==
	[  +0.052009] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038108] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.751809] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.956284] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.561546] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.354312] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.062170] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065788] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.188111] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +0.107716] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +0.282249] systemd-fstab-generator[701]: Ignoring "noauto" option for root device
	[Sep24 01:04] systemd-fstab-generator[791]: Ignoring "noauto" option for root device
	[  +2.116916] systemd-fstab-generator[910]: Ignoring "noauto" option for root device
	[  +0.071985] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.518051] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.817598] kauditd_printk_skb: 85 callbacks suppressed
	[Sep24 01:08] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.605181] systemd-fstab-generator[2578]: Ignoring "noauto" option for root device
	[  +4.564894] kauditd_printk_skb: 56 callbacks suppressed
	[  +1.995677] systemd-fstab-generator[2898]: Ignoring "noauto" option for root device
	[  +5.282686] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.120909] systemd-fstab-generator[3045]: Ignoring "noauto" option for root device
	[Sep24 01:09] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [4835c3bf7d1f3b66b01e41b51bb9e6385ab3c81209ab7dd51a8872f040e2c1ef] <==
	{"level":"info","ts":"2024-09-24T01:08:49.272910Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"223628dc6b2f68bd became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-24T01:08:49.272932Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"223628dc6b2f68bd received MsgPreVoteResp from 223628dc6b2f68bd at term 1"}
	{"level":"info","ts":"2024-09-24T01:08:49.272984Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"223628dc6b2f68bd became candidate at term 2"}
	{"level":"info","ts":"2024-09-24T01:08:49.272992Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"223628dc6b2f68bd received MsgVoteResp from 223628dc6b2f68bd at term 2"}
	{"level":"info","ts":"2024-09-24T01:08:49.273001Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"223628dc6b2f68bd became leader at term 2"}
	{"level":"info","ts":"2024-09-24T01:08:49.273008Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 223628dc6b2f68bd elected leader 223628dc6b2f68bd at term 2"}
	{"level":"info","ts":"2024-09-24T01:08:49.277843Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T01:08:49.278857Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"223628dc6b2f68bd","local-member-attributes":"{Name:embed-certs-650507 ClientURLs:[https://192.168.39.104:2379]}","request-path":"/0/members/223628dc6b2f68bd/attributes","cluster-id":"bcba49d8b8764a98","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-24T01:08:49.278899Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-24T01:08:49.279587Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-24T01:08:49.282936Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-24T01:08:49.284725Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-24T01:08:49.284762Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-24T01:08:49.284900Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bcba49d8b8764a98","local-member-id":"223628dc6b2f68bd","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T01:08:49.285019Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T01:08:49.285057Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T01:08:49.285764Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-24T01:08:49.291039Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.104:2379"}
	{"level":"info","ts":"2024-09-24T01:08:49.292290Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-24T01:18:49.723937Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":723}
	{"level":"info","ts":"2024-09-24T01:18:49.742312Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":723,"took":"17.882541ms","hash":194084952,"current-db-size-bytes":2383872,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2383872,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-09-24T01:18:49.742411Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":194084952,"revision":723,"compact-revision":-1}
	{"level":"info","ts":"2024-09-24T01:23:49.750292Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":966}
	{"level":"info","ts":"2024-09-24T01:23:49.754172Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":966,"took":"3.470381ms","hash":3427478936,"current-db-size-bytes":2383872,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1626112,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-09-24T01:23:49.754234Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3427478936,"revision":966,"compact-revision":723}
	
	
	==> kernel <==
	 01:25:10 up 21 min,  0 users,  load average: 0.08, 0.07, 0.11
	Linux embed-certs-650507 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [890822add546c0fd43b338fd565507418185f3e807ae432e09dce95b4cca1a91] <==
	I0924 01:21:52.516971       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0924 01:21:52.517046       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0924 01:23:51.515620       1 handler_proxy.go:99] no RequestInfo found in the context
	E0924 01:23:51.516100       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0924 01:23:52.518198       1 handler_proxy.go:99] no RequestInfo found in the context
	W0924 01:23:52.518206       1 handler_proxy.go:99] no RequestInfo found in the context
	E0924 01:23:52.518433       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0924 01:23:52.518476       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0924 01:23:52.519660       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0924 01:23:52.519755       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0924 01:24:52.520838       1 handler_proxy.go:99] no RequestInfo found in the context
	W0924 01:24:52.520881       1 handler_proxy.go:99] no RequestInfo found in the context
	E0924 01:24:52.521121       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0924 01:24:52.521060       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0924 01:24:52.522302       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0924 01:24:52.522399       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [bd8c1d0aaf17e67e6d1d2e104a7557c447d1d6304e0361a1f92190efe6cb6018] <==
	W0924 01:08:44.660457       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:08:44.666510       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:08:44.728319       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:08:44.781951       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:08:44.793848       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:08:44.794095       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:08:44.851642       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:08:44.865337       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:08:44.886477       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:08:44.922309       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:08:44.925848       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:08:44.981370       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:08:45.006059       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:08:45.038422       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:08:45.044044       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:08:45.056137       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:08:45.130998       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:08:45.167968       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:08:45.194158       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:08:45.299543       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:08:45.313429       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:08:45.376271       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:08:45.401982       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:08:45.546206       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:08:45.716262       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [357d70ef1ae9b34d495f10b22cb93900bcc8c88e39fd42dd8de59ee644b5b3b9] <==
	E0924 01:19:58.623501       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:19:59.080125       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0924 01:20:13.388485       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="226.482µs"
	I0924 01:20:27.384294       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="99.002µs"
	E0924 01:20:28.630948       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:20:29.089546       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 01:20:58.637968       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:20:59.097104       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 01:21:28.643890       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:21:29.110806       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 01:21:58.653333       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:21:59.120014       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 01:22:28.665931       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:22:29.127591       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 01:22:58.672100       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:22:59.135545       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 01:23:28.678757       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:23:29.149851       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 01:23:58.689322       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:23:59.157899       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0924 01:24:24.408343       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-650507"
	E0924 01:24:28.696170       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:24:29.165868       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 01:24:58.702108       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:24:59.173491       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [eae4121650134f3b46606d6217bb0062ac5d419292637807cdf764cf3fa012d0] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0924 01:09:00.563125       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0924 01:09:00.631243       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.104"]
	E0924 01:09:00.631325       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0924 01:09:00.973706       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0924 01:09:00.973854       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0924 01:09:00.973956       1 server_linux.go:169] "Using iptables Proxier"
	I0924 01:09:00.977098       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0924 01:09:00.977605       1 server.go:483] "Version info" version="v1.31.1"
	I0924 01:09:00.977867       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 01:09:00.980725       1 config.go:105] "Starting endpoint slice config controller"
	I0924 01:09:00.980756       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0924 01:09:00.988631       1 config.go:199] "Starting service config controller"
	I0924 01:09:00.991221       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0924 01:09:00.994164       1 config.go:328] "Starting node config controller"
	I0924 01:09:00.994276       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0924 01:09:01.080992       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0924 01:09:01.094479       1 shared_informer.go:320] Caches are synced for service config
	I0924 01:09:01.095355       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [ceccfc5326d1fe281b19d10f3a876c5e1ba33e9f3ed5ee5a270e1920e8a64db5] <==
	W0924 01:08:51.545018       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0924 01:08:51.545323       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0924 01:08:51.545026       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0924 01:08:51.545357       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0924 01:08:51.545767       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0924 01:08:51.545899       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0924 01:08:52.363026       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0924 01:08:52.363187       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0924 01:08:52.418338       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0924 01:08:52.418828       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 01:08:52.440158       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0924 01:08:52.440342       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0924 01:08:52.515830       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0924 01:08:52.515991       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0924 01:08:52.600884       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0924 01:08:52.601016       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 01:08:52.667702       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0924 01:08:52.667787       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0924 01:08:52.716296       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0924 01:08:52.716382       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0924 01:08:52.813987       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0924 01:08:52.814467       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0924 01:08:52.841346       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0924 01:08:52.841393       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0924 01:08:54.533841       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 24 01:23:56 embed-certs-650507 kubelet[2905]: E0924 01:23:56.368088    2905 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-lbm9h" podUID="fa504c09-2e16-4a5f-b4b3-a47f0733333d"
	Sep 24 01:24:04 embed-certs-650507 kubelet[2905]: E0924 01:24:04.631393    2905 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727141044631007628,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:24:04 embed-certs-650507 kubelet[2905]: E0924 01:24:04.631718    2905 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727141044631007628,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:24:08 embed-certs-650507 kubelet[2905]: E0924 01:24:08.367516    2905 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-lbm9h" podUID="fa504c09-2e16-4a5f-b4b3-a47f0733333d"
	Sep 24 01:24:14 embed-certs-650507 kubelet[2905]: E0924 01:24:14.633446    2905 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727141054632972304,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:24:14 embed-certs-650507 kubelet[2905]: E0924 01:24:14.633542    2905 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727141054632972304,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:24:19 embed-certs-650507 kubelet[2905]: E0924 01:24:19.367174    2905 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-lbm9h" podUID="fa504c09-2e16-4a5f-b4b3-a47f0733333d"
	Sep 24 01:24:24 embed-certs-650507 kubelet[2905]: E0924 01:24:24.634707    2905 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727141064634357339,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:24:24 embed-certs-650507 kubelet[2905]: E0924 01:24:24.634773    2905 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727141064634357339,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:24:31 embed-certs-650507 kubelet[2905]: E0924 01:24:31.366965    2905 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-lbm9h" podUID="fa504c09-2e16-4a5f-b4b3-a47f0733333d"
	Sep 24 01:24:34 embed-certs-650507 kubelet[2905]: E0924 01:24:34.637027    2905 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727141074636535316,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:24:34 embed-certs-650507 kubelet[2905]: E0924 01:24:34.637604    2905 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727141074636535316,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:24:44 embed-certs-650507 kubelet[2905]: E0924 01:24:44.639196    2905 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727141084638857736,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:24:44 embed-certs-650507 kubelet[2905]: E0924 01:24:44.639236    2905 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727141084638857736,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:24:46 embed-certs-650507 kubelet[2905]: E0924 01:24:46.368428    2905 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-lbm9h" podUID="fa504c09-2e16-4a5f-b4b3-a47f0733333d"
	Sep 24 01:24:54 embed-certs-650507 kubelet[2905]: E0924 01:24:54.380205    2905 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 24 01:24:54 embed-certs-650507 kubelet[2905]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 24 01:24:54 embed-certs-650507 kubelet[2905]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 24 01:24:54 embed-certs-650507 kubelet[2905]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 24 01:24:54 embed-certs-650507 kubelet[2905]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 24 01:24:54 embed-certs-650507 kubelet[2905]: E0924 01:24:54.642021    2905 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727141094641512631,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:24:54 embed-certs-650507 kubelet[2905]: E0924 01:24:54.642053    2905 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727141094641512631,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:24:57 embed-certs-650507 kubelet[2905]: E0924 01:24:57.367285    2905 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-lbm9h" podUID="fa504c09-2e16-4a5f-b4b3-a47f0733333d"
	Sep 24 01:25:04 embed-certs-650507 kubelet[2905]: E0924 01:25:04.643357    2905 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727141104642960282,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:25:04 embed-certs-650507 kubelet[2905]: E0924 01:25:04.643699    2905 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727141104642960282,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [893850a1eae8ca98a68d0ba4fe2186a6866b37671253fe43630c39f54e1f5ab1] <==
	I0924 01:09:01.668873       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0924 01:09:01.683354       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0924 01:09:01.683454       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0924 01:09:01.694317       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0924 01:09:01.694732       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-650507_86e1500b-f31a-424e-b809-06721c823370!
	I0924 01:09:01.694911       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"147b93bc-c19d-4705-8f8d-573893a60402", APIVersion:"v1", ResourceVersion:"441", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-650507_86e1500b-f31a-424e-b809-06721c823370 became leader
	I0924 01:09:01.795679       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-650507_86e1500b-f31a-424e-b809-06721c823370!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-650507 -n embed-certs-650507
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-650507 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-lbm9h
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-650507 describe pod metrics-server-6867b74b74-lbm9h
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-650507 describe pod metrics-server-6867b74b74-lbm9h: exit status 1 (72.29553ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-lbm9h" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-650507 describe pod metrics-server-6867b74b74-lbm9h: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (415.80s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (302.15s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-674057 -n no-preload-674057
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-09-24 01:25:00.523591237 +0000 UTC m=+6441.954673687
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-674057 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-674057 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.855µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-674057 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-674057 -n no-preload-674057
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-674057 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-674057 logs -n 25: (1.210981229s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-674057                                   | no-preload-674057            | jenkins | v1.34.0 | 24 Sep 24 00:55 UTC | 24 Sep 24 00:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-619300                           | kubernetes-upgrade-619300    | jenkins | v1.34.0 | 24 Sep 24 00:55 UTC | 24 Sep 24 00:55 UTC |
	| start   | -p embed-certs-650507                                  | embed-certs-650507           | jenkins | v1.34.0 | 24 Sep 24 00:55 UTC | 24 Sep 24 00:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-811247                              | cert-expiration-811247       | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC | 24 Sep 24 00:56 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-674057             | no-preload-674057            | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC | 24 Sep 24 00:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-674057                                   | no-preload-674057            | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-811247                              | cert-expiration-811247       | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC | 24 Sep 24 00:56 UTC |
	| delete  | -p                                                     | disable-driver-mounts-319683 | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC | 24 Sep 24 00:56 UTC |
	|         | disable-driver-mounts-319683                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-465341 | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC | 24 Sep 24 00:57 UTC |
	|         | default-k8s-diff-port-465341                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-650507            | embed-certs-650507           | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC | 24 Sep 24 00:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-650507                                  | embed-certs-650507           | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-465341  | default-k8s-diff-port-465341 | jenkins | v1.34.0 | 24 Sep 24 00:57 UTC | 24 Sep 24 00:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-465341 | jenkins | v1.34.0 | 24 Sep 24 00:57 UTC |                     |
	|         | default-k8s-diff-port-465341                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-674057                  | no-preload-674057            | jenkins | v1.34.0 | 24 Sep 24 00:58 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-674057                                   | no-preload-674057            | jenkins | v1.34.0 | 24 Sep 24 00:58 UTC | 24 Sep 24 01:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-650507                 | embed-certs-650507           | jenkins | v1.34.0 | 24 Sep 24 00:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-171598        | old-k8s-version-171598       | jenkins | v1.34.0 | 24 Sep 24 00:59 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-650507                                  | embed-certs-650507           | jenkins | v1.34.0 | 24 Sep 24 00:59 UTC | 24 Sep 24 01:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-465341       | default-k8s-diff-port-465341 | jenkins | v1.34.0 | 24 Sep 24 00:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-465341 | jenkins | v1.34.0 | 24 Sep 24 01:00 UTC | 24 Sep 24 01:08 UTC |
	|         | default-k8s-diff-port-465341                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-171598                              | old-k8s-version-171598       | jenkins | v1.34.0 | 24 Sep 24 01:00 UTC | 24 Sep 24 01:00 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-171598             | old-k8s-version-171598       | jenkins | v1.34.0 | 24 Sep 24 01:00 UTC | 24 Sep 24 01:00 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-171598                              | old-k8s-version-171598       | jenkins | v1.34.0 | 24 Sep 24 01:00 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-171598                              | old-k8s-version-171598       | jenkins | v1.34.0 | 24 Sep 24 01:24 UTC | 24 Sep 24 01:24 UTC |
	| start   | -p newest-cni-185978 --memory=2200 --alsologtostderr   | newest-cni-185978            | jenkins | v1.34.0 | 24 Sep 24 01:24 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 01:24:40
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 01:24:40.256563   69197 out.go:345] Setting OutFile to fd 1 ...
	I0924 01:24:40.256829   69197 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 01:24:40.256840   69197 out.go:358] Setting ErrFile to fd 2...
	I0924 01:24:40.256845   69197 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 01:24:40.257040   69197 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7623/.minikube/bin
	I0924 01:24:40.257608   69197 out.go:352] Setting JSON to false
	I0924 01:24:40.258644   69197 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7624,"bootTime":1727133456,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0924 01:24:40.258757   69197 start.go:139] virtualization: kvm guest
	I0924 01:24:40.261520   69197 out.go:177] * [newest-cni-185978] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0924 01:24:40.262813   69197 notify.go:220] Checking for updates...
	I0924 01:24:40.262902   69197 out.go:177]   - MINIKUBE_LOCATION=19696
	I0924 01:24:40.264444   69197 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 01:24:40.265828   69197 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 01:24:40.267261   69197 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7623/.minikube
	I0924 01:24:40.268551   69197 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0924 01:24:40.269930   69197 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 01:24:40.271877   69197 config.go:182] Loaded profile config "default-k8s-diff-port-465341": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 01:24:40.272017   69197 config.go:182] Loaded profile config "embed-certs-650507": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 01:24:40.272156   69197 config.go:182] Loaded profile config "no-preload-674057": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 01:24:40.272278   69197 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 01:24:40.311278   69197 out.go:177] * Using the kvm2 driver based on user configuration
	I0924 01:24:40.312670   69197 start.go:297] selected driver: kvm2
	I0924 01:24:40.312690   69197 start.go:901] validating driver "kvm2" against <nil>
	I0924 01:24:40.312704   69197 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 01:24:40.313828   69197 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 01:24:40.313924   69197 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19696-7623/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0924 01:24:40.330159   69197 install.go:137] /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0924 01:24:40.330203   69197 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0924 01:24:40.330282   69197 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0924 01:24:40.330514   69197 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0924 01:24:40.330543   69197 cni.go:84] Creating CNI manager for ""
	I0924 01:24:40.330586   69197 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:24:40.330603   69197 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0924 01:24:40.330656   69197 start.go:340] cluster config:
	{Name:newest-cni-185978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-185978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 01:24:40.330755   69197 iso.go:125] acquiring lock: {Name:mk8a983e49e920eac32e0f79c34143c3d478115e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 01:24:40.332460   69197 out.go:177] * Starting "newest-cni-185978" primary control-plane node in "newest-cni-185978" cluster
	I0924 01:24:40.333833   69197 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 01:24:40.333879   69197 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0924 01:24:40.333890   69197 cache.go:56] Caching tarball of preloaded images
	I0924 01:24:40.334001   69197 preload.go:172] Found /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0924 01:24:40.334014   69197 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0924 01:24:40.334105   69197 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/newest-cni-185978/config.json ...
	I0924 01:24:40.334126   69197 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/newest-cni-185978/config.json: {Name:mkfe29a8cfe82fdffb5216aa02caa939f9b8f0e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:24:40.334275   69197 start.go:360] acquireMachinesLock for newest-cni-185978: {Name:mkdd0eb053efc5a47fd01c8410d7f603dccb8d0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 01:24:40.334310   69197 start.go:364] duration metric: took 21.502µs to acquireMachinesLock for "newest-cni-185978"
	I0924 01:24:40.334328   69197 start.go:93] Provisioning new machine with config: &{Name:newest-cni-185978 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-185978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 01:24:40.334378   69197 start.go:125] createHost starting for "" (driver="kvm2")
	I0924 01:24:40.336097   69197 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0924 01:24:40.336315   69197 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:24:40.336396   69197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:24:40.351076   69197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46433
	I0924 01:24:40.351513   69197 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:24:40.352079   69197 main.go:141] libmachine: Using API Version  1
	I0924 01:24:40.352111   69197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:24:40.352554   69197 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:24:40.352748   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetMachineName
	I0924 01:24:40.352924   69197 main.go:141] libmachine: (newest-cni-185978) Calling .DriverName
	I0924 01:24:40.353105   69197 start.go:159] libmachine.API.Create for "newest-cni-185978" (driver="kvm2")
	I0924 01:24:40.353138   69197 client.go:168] LocalClient.Create starting
	I0924 01:24:40.353172   69197 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem
	I0924 01:24:40.353209   69197 main.go:141] libmachine: Decoding PEM data...
	I0924 01:24:40.353230   69197 main.go:141] libmachine: Parsing certificate...
	I0924 01:24:40.353308   69197 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem
	I0924 01:24:40.353335   69197 main.go:141] libmachine: Decoding PEM data...
	I0924 01:24:40.353348   69197 main.go:141] libmachine: Parsing certificate...
	I0924 01:24:40.353372   69197 main.go:141] libmachine: Running pre-create checks...
	I0924 01:24:40.353381   69197 main.go:141] libmachine: (newest-cni-185978) Calling .PreCreateCheck
	I0924 01:24:40.353824   69197 main.go:141] libmachine: (newest-cni-185978) Calling .GetConfigRaw
	I0924 01:24:40.354461   69197 main.go:141] libmachine: Creating machine...
	I0924 01:24:40.354481   69197 main.go:141] libmachine: (newest-cni-185978) Calling .Create
	I0924 01:24:40.354662   69197 main.go:141] libmachine: (newest-cni-185978) Creating KVM machine...
	I0924 01:24:40.356062   69197 main.go:141] libmachine: (newest-cni-185978) DBG | found existing default KVM network
	I0924 01:24:40.357388   69197 main.go:141] libmachine: (newest-cni-185978) DBG | I0924 01:24:40.357210   69220 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:fa:99:1c} reservation:<nil>}
	I0924 01:24:40.358067   69197 main.go:141] libmachine: (newest-cni-185978) DBG | I0924 01:24:40.357989   69220 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:8b:1b:c7} reservation:<nil>}
	I0924 01:24:40.358830   69197 main.go:141] libmachine: (newest-cni-185978) DBG | I0924 01:24:40.358717   69220 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:f1:05:9c} reservation:<nil>}
	I0924 01:24:40.359815   69197 main.go:141] libmachine: (newest-cni-185978) DBG | I0924 01:24:40.359739   69220 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003523e0}
	I0924 01:24:40.359855   69197 main.go:141] libmachine: (newest-cni-185978) DBG | created network xml: 
	I0924 01:24:40.359881   69197 main.go:141] libmachine: (newest-cni-185978) DBG | <network>
	I0924 01:24:40.359893   69197 main.go:141] libmachine: (newest-cni-185978) DBG |   <name>mk-newest-cni-185978</name>
	I0924 01:24:40.359903   69197 main.go:141] libmachine: (newest-cni-185978) DBG |   <dns enable='no'/>
	I0924 01:24:40.359912   69197 main.go:141] libmachine: (newest-cni-185978) DBG |   
	I0924 01:24:40.359920   69197 main.go:141] libmachine: (newest-cni-185978) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0924 01:24:40.359927   69197 main.go:141] libmachine: (newest-cni-185978) DBG |     <dhcp>
	I0924 01:24:40.359935   69197 main.go:141] libmachine: (newest-cni-185978) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0924 01:24:40.359943   69197 main.go:141] libmachine: (newest-cni-185978) DBG |     </dhcp>
	I0924 01:24:40.359953   69197 main.go:141] libmachine: (newest-cni-185978) DBG |   </ip>
	I0924 01:24:40.359960   69197 main.go:141] libmachine: (newest-cni-185978) DBG |   
	I0924 01:24:40.359966   69197 main.go:141] libmachine: (newest-cni-185978) DBG | </network>
	I0924 01:24:40.359974   69197 main.go:141] libmachine: (newest-cni-185978) DBG | 
	I0924 01:24:40.365623   69197 main.go:141] libmachine: (newest-cni-185978) DBG | trying to create private KVM network mk-newest-cni-185978 192.168.72.0/24...
	I0924 01:24:40.447536   69197 main.go:141] libmachine: (newest-cni-185978) DBG | private KVM network mk-newest-cni-185978 192.168.72.0/24 created
	I0924 01:24:40.447591   69197 main.go:141] libmachine: (newest-cni-185978) DBG | I0924 01:24:40.447497   69220 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19696-7623/.minikube
	I0924 01:24:40.447612   69197 main.go:141] libmachine: (newest-cni-185978) Setting up store path in /home/jenkins/minikube-integration/19696-7623/.minikube/machines/newest-cni-185978 ...
	I0924 01:24:40.447632   69197 main.go:141] libmachine: (newest-cni-185978) Building disk image from file:///home/jenkins/minikube-integration/19696-7623/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0924 01:24:40.447748   69197 main.go:141] libmachine: (newest-cni-185978) Downloading /home/jenkins/minikube-integration/19696-7623/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19696-7623/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0924 01:24:40.715377   69197 main.go:141] libmachine: (newest-cni-185978) DBG | I0924 01:24:40.715196   69220 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/newest-cni-185978/id_rsa...
	I0924 01:24:40.815375   69197 main.go:141] libmachine: (newest-cni-185978) DBG | I0924 01:24:40.815225   69220 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/newest-cni-185978/newest-cni-185978.rawdisk...
	I0924 01:24:40.815422   69197 main.go:141] libmachine: (newest-cni-185978) DBG | Writing magic tar header
	I0924 01:24:40.815440   69197 main.go:141] libmachine: (newest-cni-185978) DBG | Writing SSH key tar header
	I0924 01:24:40.815450   69197 main.go:141] libmachine: (newest-cni-185978) DBG | I0924 01:24:40.815365   69220 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19696-7623/.minikube/machines/newest-cni-185978 ...
	I0924 01:24:40.815531   69197 main.go:141] libmachine: (newest-cni-185978) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/newest-cni-185978
	I0924 01:24:40.815592   69197 main.go:141] libmachine: (newest-cni-185978) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623/.minikube/machines/newest-cni-185978 (perms=drwx------)
	I0924 01:24:40.815609   69197 main.go:141] libmachine: (newest-cni-185978) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623/.minikube/machines
	I0924 01:24:40.815622   69197 main.go:141] libmachine: (newest-cni-185978) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623/.minikube
	I0924 01:24:40.815633   69197 main.go:141] libmachine: (newest-cni-185978) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623/.minikube/machines (perms=drwxr-xr-x)
	I0924 01:24:40.815647   69197 main.go:141] libmachine: (newest-cni-185978) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19696-7623
	I0924 01:24:40.815660   69197 main.go:141] libmachine: (newest-cni-185978) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623/.minikube (perms=drwxr-xr-x)
	I0924 01:24:40.815670   69197 main.go:141] libmachine: (newest-cni-185978) Setting executable bit set on /home/jenkins/minikube-integration/19696-7623 (perms=drwxrwxr-x)
	I0924 01:24:40.815677   69197 main.go:141] libmachine: (newest-cni-185978) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0924 01:24:40.815684   69197 main.go:141] libmachine: (newest-cni-185978) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0924 01:24:40.815691   69197 main.go:141] libmachine: (newest-cni-185978) Creating domain...
	I0924 01:24:40.815700   69197 main.go:141] libmachine: (newest-cni-185978) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0924 01:24:40.815707   69197 main.go:141] libmachine: (newest-cni-185978) DBG | Checking permissions on dir: /home/jenkins
	I0924 01:24:40.815715   69197 main.go:141] libmachine: (newest-cni-185978) DBG | Checking permissions on dir: /home
	I0924 01:24:40.815729   69197 main.go:141] libmachine: (newest-cni-185978) DBG | Skipping /home - not owner
	I0924 01:24:40.817259   69197 main.go:141] libmachine: (newest-cni-185978) define libvirt domain using xml: 
	I0924 01:24:40.817279   69197 main.go:141] libmachine: (newest-cni-185978) <domain type='kvm'>
	I0924 01:24:40.817286   69197 main.go:141] libmachine: (newest-cni-185978)   <name>newest-cni-185978</name>
	I0924 01:24:40.817291   69197 main.go:141] libmachine: (newest-cni-185978)   <memory unit='MiB'>2200</memory>
	I0924 01:24:40.817297   69197 main.go:141] libmachine: (newest-cni-185978)   <vcpu>2</vcpu>
	I0924 01:24:40.817300   69197 main.go:141] libmachine: (newest-cni-185978)   <features>
	I0924 01:24:40.817319   69197 main.go:141] libmachine: (newest-cni-185978)     <acpi/>
	I0924 01:24:40.817327   69197 main.go:141] libmachine: (newest-cni-185978)     <apic/>
	I0924 01:24:40.817333   69197 main.go:141] libmachine: (newest-cni-185978)     <pae/>
	I0924 01:24:40.817336   69197 main.go:141] libmachine: (newest-cni-185978)     
	I0924 01:24:40.817341   69197 main.go:141] libmachine: (newest-cni-185978)   </features>
	I0924 01:24:40.817361   69197 main.go:141] libmachine: (newest-cni-185978)   <cpu mode='host-passthrough'>
	I0924 01:24:40.817369   69197 main.go:141] libmachine: (newest-cni-185978)   
	I0924 01:24:40.817373   69197 main.go:141] libmachine: (newest-cni-185978)   </cpu>
	I0924 01:24:40.817378   69197 main.go:141] libmachine: (newest-cni-185978)   <os>
	I0924 01:24:40.817383   69197 main.go:141] libmachine: (newest-cni-185978)     <type>hvm</type>
	I0924 01:24:40.817390   69197 main.go:141] libmachine: (newest-cni-185978)     <boot dev='cdrom'/>
	I0924 01:24:40.817394   69197 main.go:141] libmachine: (newest-cni-185978)     <boot dev='hd'/>
	I0924 01:24:40.817401   69197 main.go:141] libmachine: (newest-cni-185978)     <bootmenu enable='no'/>
	I0924 01:24:40.817405   69197 main.go:141] libmachine: (newest-cni-185978)   </os>
	I0924 01:24:40.817464   69197 main.go:141] libmachine: (newest-cni-185978)   <devices>
	I0924 01:24:40.817501   69197 main.go:141] libmachine: (newest-cni-185978)     <disk type='file' device='cdrom'>
	I0924 01:24:40.817518   69197 main.go:141] libmachine: (newest-cni-185978)       <source file='/home/jenkins/minikube-integration/19696-7623/.minikube/machines/newest-cni-185978/boot2docker.iso'/>
	I0924 01:24:40.817533   69197 main.go:141] libmachine: (newest-cni-185978)       <target dev='hdc' bus='scsi'/>
	I0924 01:24:40.817544   69197 main.go:141] libmachine: (newest-cni-185978)       <readonly/>
	I0924 01:24:40.817570   69197 main.go:141] libmachine: (newest-cni-185978)     </disk>
	I0924 01:24:40.817582   69197 main.go:141] libmachine: (newest-cni-185978)     <disk type='file' device='disk'>
	I0924 01:24:40.817592   69197 main.go:141] libmachine: (newest-cni-185978)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0924 01:24:40.817609   69197 main.go:141] libmachine: (newest-cni-185978)       <source file='/home/jenkins/minikube-integration/19696-7623/.minikube/machines/newest-cni-185978/newest-cni-185978.rawdisk'/>
	I0924 01:24:40.817626   69197 main.go:141] libmachine: (newest-cni-185978)       <target dev='hda' bus='virtio'/>
	I0924 01:24:40.817634   69197 main.go:141] libmachine: (newest-cni-185978)     </disk>
	I0924 01:24:40.817642   69197 main.go:141] libmachine: (newest-cni-185978)     <interface type='network'>
	I0924 01:24:40.817653   69197 main.go:141] libmachine: (newest-cni-185978)       <source network='mk-newest-cni-185978'/>
	I0924 01:24:40.817684   69197 main.go:141] libmachine: (newest-cni-185978)       <model type='virtio'/>
	I0924 01:24:40.817702   69197 main.go:141] libmachine: (newest-cni-185978)     </interface>
	I0924 01:24:40.817715   69197 main.go:141] libmachine: (newest-cni-185978)     <interface type='network'>
	I0924 01:24:40.817729   69197 main.go:141] libmachine: (newest-cni-185978)       <source network='default'/>
	I0924 01:24:40.817740   69197 main.go:141] libmachine: (newest-cni-185978)       <model type='virtio'/>
	I0924 01:24:40.817748   69197 main.go:141] libmachine: (newest-cni-185978)     </interface>
	I0924 01:24:40.817757   69197 main.go:141] libmachine: (newest-cni-185978)     <serial type='pty'>
	I0924 01:24:40.817766   69197 main.go:141] libmachine: (newest-cni-185978)       <target port='0'/>
	I0924 01:24:40.817775   69197 main.go:141] libmachine: (newest-cni-185978)     </serial>
	I0924 01:24:40.817781   69197 main.go:141] libmachine: (newest-cni-185978)     <console type='pty'>
	I0924 01:24:40.817791   69197 main.go:141] libmachine: (newest-cni-185978)       <target type='serial' port='0'/>
	I0924 01:24:40.817802   69197 main.go:141] libmachine: (newest-cni-185978)     </console>
	I0924 01:24:40.817813   69197 main.go:141] libmachine: (newest-cni-185978)     <rng model='virtio'>
	I0924 01:24:40.817823   69197 main.go:141] libmachine: (newest-cni-185978)       <backend model='random'>/dev/random</backend>
	I0924 01:24:40.817832   69197 main.go:141] libmachine: (newest-cni-185978)     </rng>
	I0924 01:24:40.817840   69197 main.go:141] libmachine: (newest-cni-185978)     
	I0924 01:24:40.817849   69197 main.go:141] libmachine: (newest-cni-185978)     
	I0924 01:24:40.817856   69197 main.go:141] libmachine: (newest-cni-185978)   </devices>
	I0924 01:24:40.817869   69197 main.go:141] libmachine: (newest-cni-185978) </domain>
	I0924 01:24:40.817881   69197 main.go:141] libmachine: (newest-cni-185978) 
	I0924 01:24:40.822408   69197 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:56:00:3a in network default
	I0924 01:24:40.823126   69197 main.go:141] libmachine: (newest-cni-185978) Ensuring networks are active...
	I0924 01:24:40.823149   69197 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:24:40.824045   69197 main.go:141] libmachine: (newest-cni-185978) Ensuring network default is active
	I0924 01:24:40.824372   69197 main.go:141] libmachine: (newest-cni-185978) Ensuring network mk-newest-cni-185978 is active
	I0924 01:24:40.825065   69197 main.go:141] libmachine: (newest-cni-185978) Getting domain xml...
	I0924 01:24:40.825900   69197 main.go:141] libmachine: (newest-cni-185978) Creating domain...
	I0924 01:24:42.124660   69197 main.go:141] libmachine: (newest-cni-185978) Waiting to get IP...
	I0924 01:24:42.125625   69197 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:24:42.126086   69197 main.go:141] libmachine: (newest-cni-185978) DBG | unable to find current IP address of domain newest-cni-185978 in network mk-newest-cni-185978
	I0924 01:24:42.126142   69197 main.go:141] libmachine: (newest-cni-185978) DBG | I0924 01:24:42.126063   69220 retry.go:31] will retry after 216.803728ms: waiting for machine to come up
	I0924 01:24:42.344561   69197 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:24:42.345120   69197 main.go:141] libmachine: (newest-cni-185978) DBG | unable to find current IP address of domain newest-cni-185978 in network mk-newest-cni-185978
	I0924 01:24:42.345147   69197 main.go:141] libmachine: (newest-cni-185978) DBG | I0924 01:24:42.345075   69220 retry.go:31] will retry after 315.275027ms: waiting for machine to come up
	I0924 01:24:42.662752   69197 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:24:42.663260   69197 main.go:141] libmachine: (newest-cni-185978) DBG | unable to find current IP address of domain newest-cni-185978 in network mk-newest-cni-185978
	I0924 01:24:42.663310   69197 main.go:141] libmachine: (newest-cni-185978) DBG | I0924 01:24:42.663225   69220 retry.go:31] will retry after 401.560482ms: waiting for machine to come up
	I0924 01:24:43.066657   69197 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:24:43.067147   69197 main.go:141] libmachine: (newest-cni-185978) DBG | unable to find current IP address of domain newest-cni-185978 in network mk-newest-cni-185978
	I0924 01:24:43.067182   69197 main.go:141] libmachine: (newest-cni-185978) DBG | I0924 01:24:43.067065   69220 retry.go:31] will retry after 523.512589ms: waiting for machine to come up
	I0924 01:24:43.591725   69197 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:24:43.592185   69197 main.go:141] libmachine: (newest-cni-185978) DBG | unable to find current IP address of domain newest-cni-185978 in network mk-newest-cni-185978
	I0924 01:24:43.592209   69197 main.go:141] libmachine: (newest-cni-185978) DBG | I0924 01:24:43.592140   69220 retry.go:31] will retry after 706.64976ms: waiting for machine to come up
	I0924 01:24:44.300262   69197 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:24:44.300721   69197 main.go:141] libmachine: (newest-cni-185978) DBG | unable to find current IP address of domain newest-cni-185978 in network mk-newest-cni-185978
	I0924 01:24:44.300765   69197 main.go:141] libmachine: (newest-cni-185978) DBG | I0924 01:24:44.300700   69220 retry.go:31] will retry after 844.28431ms: waiting for machine to come up
	I0924 01:24:45.146656   69197 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:24:45.147043   69197 main.go:141] libmachine: (newest-cni-185978) DBG | unable to find current IP address of domain newest-cni-185978 in network mk-newest-cni-185978
	I0924 01:24:45.147070   69197 main.go:141] libmachine: (newest-cni-185978) DBG | I0924 01:24:45.147016   69220 retry.go:31] will retry after 992.706531ms: waiting for machine to come up
	I0924 01:24:46.141360   69197 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:24:46.141875   69197 main.go:141] libmachine: (newest-cni-185978) DBG | unable to find current IP address of domain newest-cni-185978 in network mk-newest-cni-185978
	I0924 01:24:46.141898   69197 main.go:141] libmachine: (newest-cni-185978) DBG | I0924 01:24:46.141817   69220 retry.go:31] will retry after 1.09028405s: waiting for machine to come up
	I0924 01:24:47.233425   69197 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:24:47.233913   69197 main.go:141] libmachine: (newest-cni-185978) DBG | unable to find current IP address of domain newest-cni-185978 in network mk-newest-cni-185978
	I0924 01:24:47.233964   69197 main.go:141] libmachine: (newest-cni-185978) DBG | I0924 01:24:47.233802   69220 retry.go:31] will retry after 1.699089932s: waiting for machine to come up
	I0924 01:24:48.935768   69197 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:24:48.936370   69197 main.go:141] libmachine: (newest-cni-185978) DBG | unable to find current IP address of domain newest-cni-185978 in network mk-newest-cni-185978
	I0924 01:24:48.936426   69197 main.go:141] libmachine: (newest-cni-185978) DBG | I0924 01:24:48.936324   69220 retry.go:31] will retry after 1.567412413s: waiting for machine to come up
	I0924 01:24:50.505243   69197 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:24:50.505780   69197 main.go:141] libmachine: (newest-cni-185978) DBG | unable to find current IP address of domain newest-cni-185978 in network mk-newest-cni-185978
	I0924 01:24:50.505812   69197 main.go:141] libmachine: (newest-cni-185978) DBG | I0924 01:24:50.505748   69220 retry.go:31] will retry after 1.80258313s: waiting for machine to come up
	I0924 01:24:52.309489   69197 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:24:52.309946   69197 main.go:141] libmachine: (newest-cni-185978) DBG | unable to find current IP address of domain newest-cni-185978 in network mk-newest-cni-185978
	I0924 01:24:52.310001   69197 main.go:141] libmachine: (newest-cni-185978) DBG | I0924 01:24:52.309918   69220 retry.go:31] will retry after 2.497062256s: waiting for machine to come up
	I0924 01:24:54.809510   69197 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:24:54.809979   69197 main.go:141] libmachine: (newest-cni-185978) DBG | unable to find current IP address of domain newest-cni-185978 in network mk-newest-cni-185978
	I0924 01:24:54.810008   69197 main.go:141] libmachine: (newest-cni-185978) DBG | I0924 01:24:54.809949   69220 retry.go:31] will retry after 3.894731938s: waiting for machine to come up
	I0924 01:24:58.706568   69197 main.go:141] libmachine: (newest-cni-185978) DBG | domain newest-cni-185978 has defined MAC address 52:54:00:fa:98:80 in network mk-newest-cni-185978
	I0924 01:24:58.707112   69197 main.go:141] libmachine: (newest-cni-185978) DBG | unable to find current IP address of domain newest-cni-185978 in network mk-newest-cni-185978
	I0924 01:24:58.707142   69197 main.go:141] libmachine: (newest-cni-185978) DBG | I0924 01:24:58.707053   69220 retry.go:31] will retry after 4.575792003s: waiting for machine to come up
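	Note: the repeated "will retry after ...: waiting for machine to come up" lines above come from libmachine polling the libvirt network for the new domain's DHCP lease, sleeping a little longer (with jitter) between attempts. The following is only an illustrative sketch of that polling pattern, not minikube's actual code; the lookupIP helper, the 200ms step, and the 2-minute deadline are hypothetical stand-ins.

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP is a hypothetical placeholder for querying the libvirt network's
	// DHCP leases for the domain's MAC address; it fails until a lease appears.
	func lookupIP(attempt int) (string, error) {
		if attempt < 5 { // pretend the VM needs a few attempts to obtain a lease
			return "", errors.New("unable to find current IP address of domain")
		}
		return "192.168.72.2", nil
	}

	func main() {
		const maxWait = 2 * time.Minute // assumed overall deadline, not minikube's value
		deadline := time.Now().Add(maxWait)

		for attempt := 1; ; attempt++ {
			ip, err := lookupIP(attempt)
			if err == nil {
				fmt.Printf("machine is up with IP %s after %d attempts\n", ip, attempt)
				return
			}
			if time.Now().After(deadline) {
				fmt.Println("gave up waiting for machine to come up:", err)
				return
			}
			// Grow the sleep roughly with the attempt number and add jitter, which is
			// why the intervals in the log above (216ms, 315ms, 401ms, ...) are uneven.
			backoff := time.Duration(attempt) * 200 * time.Millisecond
			backoff += time.Duration(rand.Int63n(int64(100 * time.Millisecond)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", backoff)
			time.Sleep(backoff)
		}
	}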
	
	
	==> CRI-O <==
	Sep 24 01:25:01 no-preload-674057 crio[713]: time="2024-09-24 01:25:01.158237325Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727141101158212000,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4c8ab932-4e63-4f2a-bf81-71382823b158 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:25:01 no-preload-674057 crio[713]: time="2024-09-24 01:25:01.158977723Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=151b9574-8b8a-45d3-b0d2-e345adf2a871 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:25:01 no-preload-674057 crio[713]: time="2024-09-24 01:25:01.159080050Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=151b9574-8b8a-45d3-b0d2-e345adf2a871 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:25:01 no-preload-674057 crio[713]: time="2024-09-24 01:25:01.159913288Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:edf08e56311a79e955d8c3b3e5c0237e909241ae5ed6abafb9b223a0f00c867a,PodSandboxId:57b40fcbd0807c17676ba374dbd40e2d75abea18ff315410bde80ed660c31c23,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727140251972979489,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 341fd764-a3bd-4d28-bc6a-6ec9fa8a5347,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db4bc3c13ebdbc5c4539c991dbab846860e5b49cd7e690e6b49bd9215e9762f6,PodSandboxId:fed5e74c9deb3cb771b4f49d24d0e43c93e894f00fe7b710bee37a619321ab7c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727140250773846312,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x7cv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e96941a-b045-48e2-be06-50cc29f8ec25,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ef6d000d3e5ecbe396992b96fddd175d3cb6df9d1824bb82ae9cbd56bed6ef4,PodSandboxId:f3501188d9975eaf62cb396040385cf0033a216e7b04e79c06685ffe9ee2d043,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727140250712878199,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nqwzr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97
73e4bf-9848-47d8-b87b-897fbdd22d42,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:744c86dbbd3bf9e31e8873c6c7d05e0ac40c341d2a7c78069d5bce6b9aba1189,PodSandboxId:65ebe9c8dd9a0339573e9d93c2b64c305b85201df1f102fed70e753195cf5664,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1727140250544669443,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k54d7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67ac411-52b5-4d58-9db3-d2d92b63a21f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acbd654a5d68d1567f2d1b46fc60c70b2ee89c7dcc7c3689321af0ba038eff0a,PodSandboxId:fab3a8a805035b1fc85813921d437bab10f5c1226e9b266f0ec5c6024a43e605,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727140239447011353,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-674057,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7de31ffdfb48cb7290a847c86901da6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e6059401f3f315dc3255a45dc661eab16b66ea2e700e0b53a186b2cf0aa08a8,PodSandboxId:0e77ff21d732e04e7b53fa1e4bc14a0da1db330c2e646dbd6d35d3068e41e38a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727140239463678924,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-674057,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96541f6d2312e39b9e24036ad99634a2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:617bb5bd9dd235df5ca90567c94ee2c487962b8737e1819dc58c34920fd9d6d7,PodSandboxId:00f003002a73a80382ae79a7549edc7859ccf5c0a479dfc4924798e230c416fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727140239366255716,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-674057,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abc52729a304907dc88bd3e55458bb01,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73012bfbb327d6a0aded70e257513b7f40ed40bd11d289f20a1bfcdcbf97ab7,PodSandboxId:2a2c0e8c2b5e8eb30fe4047cfb4f117a54fc33989a27b847ba15d90174f28a16,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727140239322135110,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-674057,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa7656a22c606fc5e77123d16ca79be6,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7b997752647b18a8adc4d689497d720e213df1ae0b65d5be49b0bb34cd09b1f,PodSandboxId:c3e3133288637067f3b60490592cabf6d6e67fa80095eeadd16d5c3080c640ce,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727139937866763411,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-674057,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa7656a22c606fc5e77123d16ca79be6,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=151b9574-8b8a-45d3-b0d2-e345adf2a871 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:25:01 no-preload-674057 crio[713]: time="2024-09-24 01:25:01.197630794Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=249c4802-8dd8-4243-8602-f18bae0689ab name=/runtime.v1.RuntimeService/Version
	Sep 24 01:25:01 no-preload-674057 crio[713]: time="2024-09-24 01:25:01.197738639Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=249c4802-8dd8-4243-8602-f18bae0689ab name=/runtime.v1.RuntimeService/Version
	Sep 24 01:25:01 no-preload-674057 crio[713]: time="2024-09-24 01:25:01.208836659Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c35558c9-65bb-4ee2-be48-a9f43fdad852 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:25:01 no-preload-674057 crio[713]: time="2024-09-24 01:25:01.209417117Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727141101209387614,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c35558c9-65bb-4ee2-be48-a9f43fdad852 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:25:01 no-preload-674057 crio[713]: time="2024-09-24 01:25:01.210281379Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b90b3c0c-fa2e-4c96-a845-af9871fbc01e name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:25:01 no-preload-674057 crio[713]: time="2024-09-24 01:25:01.210400662Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b90b3c0c-fa2e-4c96-a845-af9871fbc01e name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:25:01 no-preload-674057 crio[713]: time="2024-09-24 01:25:01.210681573Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:edf08e56311a79e955d8c3b3e5c0237e909241ae5ed6abafb9b223a0f00c867a,PodSandboxId:57b40fcbd0807c17676ba374dbd40e2d75abea18ff315410bde80ed660c31c23,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727140251972979489,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 341fd764-a3bd-4d28-bc6a-6ec9fa8a5347,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db4bc3c13ebdbc5c4539c991dbab846860e5b49cd7e690e6b49bd9215e9762f6,PodSandboxId:fed5e74c9deb3cb771b4f49d24d0e43c93e894f00fe7b710bee37a619321ab7c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727140250773846312,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x7cv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e96941a-b045-48e2-be06-50cc29f8ec25,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ef6d000d3e5ecbe396992b96fddd175d3cb6df9d1824bb82ae9cbd56bed6ef4,PodSandboxId:f3501188d9975eaf62cb396040385cf0033a216e7b04e79c06685ffe9ee2d043,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727140250712878199,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nqwzr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97
73e4bf-9848-47d8-b87b-897fbdd22d42,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:744c86dbbd3bf9e31e8873c6c7d05e0ac40c341d2a7c78069d5bce6b9aba1189,PodSandboxId:65ebe9c8dd9a0339573e9d93c2b64c305b85201df1f102fed70e753195cf5664,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1727140250544669443,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k54d7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67ac411-52b5-4d58-9db3-d2d92b63a21f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acbd654a5d68d1567f2d1b46fc60c70b2ee89c7dcc7c3689321af0ba038eff0a,PodSandboxId:fab3a8a805035b1fc85813921d437bab10f5c1226e9b266f0ec5c6024a43e605,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727140239447011353,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-674057,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7de31ffdfb48cb7290a847c86901da6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e6059401f3f315dc3255a45dc661eab16b66ea2e700e0b53a186b2cf0aa08a8,PodSandboxId:0e77ff21d732e04e7b53fa1e4bc14a0da1db330c2e646dbd6d35d3068e41e38a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727140239463678924,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-674057,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96541f6d2312e39b9e24036ad99634a2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:617bb5bd9dd235df5ca90567c94ee2c487962b8737e1819dc58c34920fd9d6d7,PodSandboxId:00f003002a73a80382ae79a7549edc7859ccf5c0a479dfc4924798e230c416fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727140239366255716,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-674057,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abc52729a304907dc88bd3e55458bb01,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73012bfbb327d6a0aded70e257513b7f40ed40bd11d289f20a1bfcdcbf97ab7,PodSandboxId:2a2c0e8c2b5e8eb30fe4047cfb4f117a54fc33989a27b847ba15d90174f28a16,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727140239322135110,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-674057,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa7656a22c606fc5e77123d16ca79be6,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7b997752647b18a8adc4d689497d720e213df1ae0b65d5be49b0bb34cd09b1f,PodSandboxId:c3e3133288637067f3b60490592cabf6d6e67fa80095eeadd16d5c3080c640ce,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727139937866763411,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-674057,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa7656a22c606fc5e77123d16ca79be6,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b90b3c0c-fa2e-4c96-a845-af9871fbc01e name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:25:01 no-preload-674057 crio[713]: time="2024-09-24 01:25:01.247871314Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1d25dd34-1e05-4f8a-bb3e-2e73099f6b97 name=/runtime.v1.RuntimeService/Version
	Sep 24 01:25:01 no-preload-674057 crio[713]: time="2024-09-24 01:25:01.247945113Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1d25dd34-1e05-4f8a-bb3e-2e73099f6b97 name=/runtime.v1.RuntimeService/Version
	Sep 24 01:25:01 no-preload-674057 crio[713]: time="2024-09-24 01:25:01.249314945Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=028920ea-a464-41e2-b0ad-28cb48573b2b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:25:01 no-preload-674057 crio[713]: time="2024-09-24 01:25:01.249655531Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727141101249633178,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=028920ea-a464-41e2-b0ad-28cb48573b2b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:25:01 no-preload-674057 crio[713]: time="2024-09-24 01:25:01.250112428Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4ad30982-9ae9-4ea8-b86f-c22d137b0b22 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:25:01 no-preload-674057 crio[713]: time="2024-09-24 01:25:01.250166151Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4ad30982-9ae9-4ea8-b86f-c22d137b0b22 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:25:01 no-preload-674057 crio[713]: time="2024-09-24 01:25:01.250351725Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:edf08e56311a79e955d8c3b3e5c0237e909241ae5ed6abafb9b223a0f00c867a,PodSandboxId:57b40fcbd0807c17676ba374dbd40e2d75abea18ff315410bde80ed660c31c23,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727140251972979489,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 341fd764-a3bd-4d28-bc6a-6ec9fa8a5347,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db4bc3c13ebdbc5c4539c991dbab846860e5b49cd7e690e6b49bd9215e9762f6,PodSandboxId:fed5e74c9deb3cb771b4f49d24d0e43c93e894f00fe7b710bee37a619321ab7c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727140250773846312,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x7cv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e96941a-b045-48e2-be06-50cc29f8ec25,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ef6d000d3e5ecbe396992b96fddd175d3cb6df9d1824bb82ae9cbd56bed6ef4,PodSandboxId:f3501188d9975eaf62cb396040385cf0033a216e7b04e79c06685ffe9ee2d043,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727140250712878199,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nqwzr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97
73e4bf-9848-47d8-b87b-897fbdd22d42,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:744c86dbbd3bf9e31e8873c6c7d05e0ac40c341d2a7c78069d5bce6b9aba1189,PodSandboxId:65ebe9c8dd9a0339573e9d93c2b64c305b85201df1f102fed70e753195cf5664,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1727140250544669443,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k54d7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67ac411-52b5-4d58-9db3-d2d92b63a21f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acbd654a5d68d1567f2d1b46fc60c70b2ee89c7dcc7c3689321af0ba038eff0a,PodSandboxId:fab3a8a805035b1fc85813921d437bab10f5c1226e9b266f0ec5c6024a43e605,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727140239447011353,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-674057,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7de31ffdfb48cb7290a847c86901da6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e6059401f3f315dc3255a45dc661eab16b66ea2e700e0b53a186b2cf0aa08a8,PodSandboxId:0e77ff21d732e04e7b53fa1e4bc14a0da1db330c2e646dbd6d35d3068e41e38a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727140239463678924,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-674057,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96541f6d2312e39b9e24036ad99634a2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:617bb5bd9dd235df5ca90567c94ee2c487962b8737e1819dc58c34920fd9d6d7,PodSandboxId:00f003002a73a80382ae79a7549edc7859ccf5c0a479dfc4924798e230c416fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727140239366255716,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-674057,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abc52729a304907dc88bd3e55458bb01,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73012bfbb327d6a0aded70e257513b7f40ed40bd11d289f20a1bfcdcbf97ab7,PodSandboxId:2a2c0e8c2b5e8eb30fe4047cfb4f117a54fc33989a27b847ba15d90174f28a16,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727140239322135110,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-674057,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa7656a22c606fc5e77123d16ca79be6,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7b997752647b18a8adc4d689497d720e213df1ae0b65d5be49b0bb34cd09b1f,PodSandboxId:c3e3133288637067f3b60490592cabf6d6e67fa80095eeadd16d5c3080c640ce,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727139937866763411,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-674057,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa7656a22c606fc5e77123d16ca79be6,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4ad30982-9ae9-4ea8-b86f-c22d137b0b22 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:25:01 no-preload-674057 crio[713]: time="2024-09-24 01:25:01.283466868Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4939f30b-1f6c-4851-b6aa-7fbceba4bc37 name=/runtime.v1.RuntimeService/Version
	Sep 24 01:25:01 no-preload-674057 crio[713]: time="2024-09-24 01:25:01.283565629Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4939f30b-1f6c-4851-b6aa-7fbceba4bc37 name=/runtime.v1.RuntimeService/Version
	Sep 24 01:25:01 no-preload-674057 crio[713]: time="2024-09-24 01:25:01.284805176Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5db4f941-ee33-4193-a60f-ab4e5aa5d032 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:25:01 no-preload-674057 crio[713]: time="2024-09-24 01:25:01.285495622Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727141101285466507,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5db4f941-ee33-4193-a60f-ab4e5aa5d032 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:25:01 no-preload-674057 crio[713]: time="2024-09-24 01:25:01.286342124Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f158eb4c-fd01-4467-b128-8c1b11ff0f96 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:25:01 no-preload-674057 crio[713]: time="2024-09-24 01:25:01.286397624Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f158eb4c-fd01-4467-b128-8c1b11ff0f96 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:25:01 no-preload-674057 crio[713]: time="2024-09-24 01:25:01.287015931Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:edf08e56311a79e955d8c3b3e5c0237e909241ae5ed6abafb9b223a0f00c867a,PodSandboxId:57b40fcbd0807c17676ba374dbd40e2d75abea18ff315410bde80ed660c31c23,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727140251972979489,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 341fd764-a3bd-4d28-bc6a-6ec9fa8a5347,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db4bc3c13ebdbc5c4539c991dbab846860e5b49cd7e690e6b49bd9215e9762f6,PodSandboxId:fed5e74c9deb3cb771b4f49d24d0e43c93e894f00fe7b710bee37a619321ab7c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727140250773846312,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x7cv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e96941a-b045-48e2-be06-50cc29f8ec25,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ef6d000d3e5ecbe396992b96fddd175d3cb6df9d1824bb82ae9cbd56bed6ef4,PodSandboxId:f3501188d9975eaf62cb396040385cf0033a216e7b04e79c06685ffe9ee2d043,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727140250712878199,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nqwzr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97
73e4bf-9848-47d8-b87b-897fbdd22d42,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:744c86dbbd3bf9e31e8873c6c7d05e0ac40c341d2a7c78069d5bce6b9aba1189,PodSandboxId:65ebe9c8dd9a0339573e9d93c2b64c305b85201df1f102fed70e753195cf5664,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1727140250544669443,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k54d7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67ac411-52b5-4d58-9db3-d2d92b63a21f,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acbd654a5d68d1567f2d1b46fc60c70b2ee89c7dcc7c3689321af0ba038eff0a,PodSandboxId:fab3a8a805035b1fc85813921d437bab10f5c1226e9b266f0ec5c6024a43e605,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727140239447011353,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-674057,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7de31ffdfb48cb7290a847c86901da6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e6059401f3f315dc3255a45dc661eab16b66ea2e700e0b53a186b2cf0aa08a8,PodSandboxId:0e77ff21d732e04e7b53fa1e4bc14a0da1db330c2e646dbd6d35d3068e41e38a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727140239463678924,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-674057,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96541f6d2312e39b9e24036ad99634a2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:617bb5bd9dd235df5ca90567c94ee2c487962b8737e1819dc58c34920fd9d6d7,PodSandboxId:00f003002a73a80382ae79a7549edc7859ccf5c0a479dfc4924798e230c416fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727140239366255716,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-674057,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abc52729a304907dc88bd3e55458bb01,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73012bfbb327d6a0aded70e257513b7f40ed40bd11d289f20a1bfcdcbf97ab7,PodSandboxId:2a2c0e8c2b5e8eb30fe4047cfb4f117a54fc33989a27b847ba15d90174f28a16,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727140239322135110,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-674057,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa7656a22c606fc5e77123d16ca79be6,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7b997752647b18a8adc4d689497d720e213df1ae0b65d5be49b0bb34cd09b1f,PodSandboxId:c3e3133288637067f3b60490592cabf6d6e67fa80095eeadd16d5c3080c640ce,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727139937866763411,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-674057,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa7656a22c606fc5e77123d16ca79be6,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f158eb4c-fd01-4467-b128-8c1b11ff0f96 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	edf08e56311a7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   57b40fcbd0807       storage-provisioner
	db4bc3c13ebdb       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   14 minutes ago      Running             coredns                   0                   fed5e74c9deb3       coredns-7c65d6cfc9-x7cv6
	7ef6d000d3e5e       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   14 minutes ago      Running             coredns                   0                   f3501188d9975       coredns-7c65d6cfc9-nqwzr
	744c86dbbd3bf       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   14 minutes ago      Running             kube-proxy                0                   65ebe9c8dd9a0       kube-proxy-k54d7
	0e6059401f3f3       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   14 minutes ago      Running             kube-scheduler            2                   0e77ff21d732e       kube-scheduler-no-preload-674057
	acbd654a5d68d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   14 minutes ago      Running             etcd                      2                   fab3a8a805035       etcd-no-preload-674057
	617bb5bd9dd23       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   14 minutes ago      Running             kube-controller-manager   3                   00f003002a73a       kube-controller-manager-no-preload-674057
	e73012bfbb327       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   14 minutes ago      Running             kube-apiserver            3                   2a2c0e8c2b5e8       kube-apiserver-no-preload-674057
	c7b997752647b       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   19 minutes ago      Exited              kube-apiserver            2                   c3e3133288637       kube-apiserver-no-preload-674057
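	
	The container list above is the human-readable form of the ListContainers/ImageFsInfo responses in the CRI debug log. A minimal sketch of collecting the same data by hand, assuming crictl is available inside the guest and that the minikube profile is named after the node hostname:
	
	  $ minikube ssh -p no-preload-674057        # profile name assumed to match the hostname
	  $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
	  $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo
	
	The socket path matches the cri-socket annotation reported in the node description below.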
	
	
	==> coredns [7ef6d000d3e5ecbe396992b96fddd175d3cb6df9d1824bb82ae9cbd56bed6ef4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [db4bc3c13ebdbc5c4539c991dbab846860e5b49cd7e690e6b49bd9215e9762f6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
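	
	Both coredns replicas report the same configuration SHA512, so they loaded an identical Corefile. A hedged way to inspect that Corefile directly:
	
	  # kubeconfig context name assumed to match the minikube profile
	  $ kubectl --context no-preload-674057 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'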
	
	
	==> describe nodes <==
	Name:               no-preload-674057
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-674057
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c
	                    minikube.k8s.io/name=no-preload-674057
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_24T01_10_45_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 01:10:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-674057
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 01:24:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 01:21:07 +0000   Tue, 24 Sep 2024 01:10:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 01:21:07 +0000   Tue, 24 Sep 2024 01:10:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 01:21:07 +0000   Tue, 24 Sep 2024 01:10:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 01:21:07 +0000   Tue, 24 Sep 2024 01:10:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.161
	  Hostname:    no-preload-674057
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f9bafe769c6c4631983d312dbb40b799
	  System UUID:                f9bafe76-9c6c-4631-983d-312dbb40b799
	  Boot ID:                    6e5d1535-fa44-4599-9002-65ba3216c402
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-nqwzr                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-7c65d6cfc9-x7cv6                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-no-preload-674057                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-no-preload-674057             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-no-preload-674057    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-k54d7                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-no-preload-674057             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-6867b74b74-w5j2x              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node no-preload-674057 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node no-preload-674057 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node no-preload-674057 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node no-preload-674057 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node no-preload-674057 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node no-preload-674057 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                node-controller  Node no-preload-674057 event: Registered Node no-preload-674057 in Controller
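	
	The two overlapping sets of NodeHasSufficient*/NodeAllocatableEnforced events suggest the kubelet was started twice during bring-up, which is typical when kubeadm restarts it during init; the node has been Ready since 01:10:42 and carries no taints. A sketch of re-collecting this view:
	
	  # context name assumed to match the minikube profile
	  $ kubectl --context no-preload-674057 describe node no-preload-674057
	  $ kubectl --context no-preload-674057 get node no-preload-674057 -o wide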
	
	
	==> dmesg <==
	[  +0.052968] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043521] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.099541] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.009811] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.540530] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.364632] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.057714] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064173] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.200241] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.137228] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[  +0.302198] systemd-fstab-generator[703]: Ignoring "noauto" option for root device
	[Sep24 01:05] systemd-fstab-generator[1244]: Ignoring "noauto" option for root device
	[  +0.060516] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.274059] systemd-fstab-generator[1365]: Ignoring "noauto" option for root device
	[  +3.183009] kauditd_printk_skb: 87 callbacks suppressed
	[Sep24 01:06] kauditd_printk_skb: 88 callbacks suppressed
	[Sep24 01:10] systemd-fstab-generator[3120]: Ignoring "noauto" option for root device
	[  +0.059187] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.488274] systemd-fstab-generator[3447]: Ignoring "noauto" option for root device
	[  +0.080280] kauditd_printk_skb: 55 callbacks suppressed
	[  +5.670829] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.169645] systemd-fstab-generator[3665]: Ignoring "noauto" option for root device
	[Sep24 01:11] kauditd_printk_skb: 86 callbacks suppressed
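	
	Nothing in the dmesg excerpt points at the guest itself misbehaving; the NFSD recovery-directory and regulatory.db warnings are common on this Buildroot image. To pull a fresh copy, assuming the profile name matches the hostname:
	
	  $ minikube ssh -p no-preload-674057 "sudo dmesg | tail -n 40"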
	
	
	==> etcd [acbd654a5d68d1567f2d1b46fc60c70b2ee89c7dcc7c3689321af0ba038eff0a] <==
	{"level":"info","ts":"2024-09-24T01:10:39.895246Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.161:2380"}
	{"level":"info","ts":"2024-09-24T01:10:39.895303Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.161:2380"}
	{"level":"info","ts":"2024-09-24T01:10:39.919127Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3fbdf04b5b0eb504 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-24T01:10:39.919331Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3fbdf04b5b0eb504 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-24T01:10:39.919423Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3fbdf04b5b0eb504 received MsgPreVoteResp from 3fbdf04b5b0eb504 at term 1"}
	{"level":"info","ts":"2024-09-24T01:10:39.919799Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3fbdf04b5b0eb504 became candidate at term 2"}
	{"level":"info","ts":"2024-09-24T01:10:39.919898Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3fbdf04b5b0eb504 received MsgVoteResp from 3fbdf04b5b0eb504 at term 2"}
	{"level":"info","ts":"2024-09-24T01:10:39.919928Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3fbdf04b5b0eb504 became leader at term 2"}
	{"level":"info","ts":"2024-09-24T01:10:39.919982Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3fbdf04b5b0eb504 elected leader 3fbdf04b5b0eb504 at term 2"}
	{"level":"info","ts":"2024-09-24T01:10:39.924212Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T01:10:39.929392Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"3fbdf04b5b0eb504","local-member-attributes":"{Name:no-preload-674057 ClientURLs:[https://192.168.50.161:2379]}","request-path":"/0/members/3fbdf04b5b0eb504/attributes","cluster-id":"9aa7cd058091608f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-24T01:10:39.929540Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-24T01:10:39.930710Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-24T01:10:39.933722Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.161:2379"}
	{"level":"info","ts":"2024-09-24T01:10:39.936202Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9aa7cd058091608f","local-member-id":"3fbdf04b5b0eb504","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T01:10:39.936298Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T01:10:39.936343Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T01:10:39.936578Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-24T01:10:39.938682Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-24T01:10:39.939564Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-24T01:10:39.942096Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-24T01:10:39.947587Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-24T01:20:40.493161Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":716}
	{"level":"info","ts":"2024-09-24T01:20:40.504327Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":716,"took":"10.226167ms","hash":1630845829,"current-db-size-bytes":2232320,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2232320,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-09-24T01:20:40.504456Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1630845829,"revision":716,"compact-revision":-1}
	
	
	==> kernel <==
	 01:25:01 up 20 min,  0 users,  load average: 0.02, 0.09, 0.15
	Linux no-preload-674057 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c7b997752647b18a8adc4d689497d720e213df1ae0b65d5be49b0bb34cd09b1f] <==
	W0924 01:10:34.661096       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:10:34.770542       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:10:34.797509       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:10:34.836111       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:10:34.874696       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:10:34.899436       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:10:34.942687       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:10:35.010921       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:10:35.057684       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:10:35.062379       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:10:35.155977       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:10:35.181649       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:10:35.189376       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:10:35.204291       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:10:35.246814       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:10:35.262591       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:10:35.262822       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:10:35.295414       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:10:35.316840       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:10:35.441955       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:10:35.666436       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:10:35.918016       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:10:35.939316       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:10:35.949668       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0924 01:10:36.175893       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [e73012bfbb327d6a0aded70e257513b7f40ed40bd11d289f20a1bfcdcbf97ab7] <==
	W0924 01:20:42.975481       1 handler_proxy.go:99] no RequestInfo found in the context
	E0924 01:20:42.975592       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0924 01:20:42.976562       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0924 01:20:42.976674       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0924 01:21:42.977234       1 handler_proxy.go:99] no RequestInfo found in the context
	E0924 01:21:42.977612       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0924 01:21:42.977244       1 handler_proxy.go:99] no RequestInfo found in the context
	E0924 01:21:42.977820       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0924 01:21:42.979087       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0924 01:21:42.979233       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0924 01:23:42.979616       1 handler_proxy.go:99] no RequestInfo found in the context
	E0924 01:23:42.980086       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0924 01:23:42.980219       1 handler_proxy.go:99] no RequestInfo found in the context
	E0924 01:23:42.980275       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0924 01:23:42.981288       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0924 01:23:42.981363       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
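	
	The repeating 503s for v1beta1.metrics.k8s.io line up with the metrics-server pod never starting (see the kubelet ImagePullBackOff entries below), so the aggregated API stays unavailable and the apiserver keeps requeueing its OpenAPI fetch. Two hedged checks:
	
	  # context name assumed to match the minikube profile
	  $ kubectl --context no-preload-674057 get apiservice v1beta1.metrics.k8s.io
	  $ kubectl --context no-preload-674057 -n kube-system describe pod metrics-server-6867b74b74-w5j2x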
	
	
	==> kube-controller-manager [617bb5bd9dd235df5ca90567c94ee2c487962b8737e1819dc58c34920fd9d6d7] <==
	E0924 01:19:48.934822       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:19:49.514102       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 01:20:18.941678       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:20:19.522541       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 01:20:48.947655       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:20:49.531850       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0924 01:21:07.284882       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-674057"
	E0924 01:21:18.954432       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:21:19.541244       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 01:21:48.962017       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:21:49.550776       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0924 01:22:02.699314       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="2.266117ms"
	I0924 01:22:14.700983       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="117.373µs"
	E0924 01:22:18.969677       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:22:19.559772       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 01:22:48.975994       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:22:49.570806       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 01:23:18.983192       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:23:19.579110       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 01:23:48.989927       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:23:49.586757       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 01:24:18.995766       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:24:19.595006       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0924 01:24:49.003279       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0924 01:24:49.605622       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [744c86dbbd3bf9e31e8873c6c7d05e0ac40c341d2a7c78069d5bce6b9aba1189] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0924 01:10:51.143123       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0924 01:10:51.194686       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.161"]
	E0924 01:10:51.194780       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0924 01:10:51.356189       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0924 01:10:51.356234       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0924 01:10:51.356258       1 server_linux.go:169] "Using iptables Proxier"
	I0924 01:10:51.401797       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0924 01:10:51.402155       1 server.go:483] "Version info" version="v1.31.1"
	I0924 01:10:51.402180       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 01:10:51.404004       1 config.go:199] "Starting service config controller"
	I0924 01:10:51.404113       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0924 01:10:51.404147       1 config.go:105] "Starting endpoint slice config controller"
	I0924 01:10:51.404164       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0924 01:10:51.405810       1 config.go:328] "Starting node config controller"
	I0924 01:10:51.405844       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0924 01:10:51.504862       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0924 01:10:51.504914       1 shared_informer.go:320] Caches are synced for service config
	I0924 01:10:51.506411       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0e6059401f3f315dc3255a45dc661eab16b66ea2e700e0b53a186b2cf0aa08a8] <==
	W0924 01:10:42.004614       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0924 01:10:42.004639       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 01:10:42.004803       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0924 01:10:42.004867       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0924 01:10:42.004990       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0924 01:10:42.005059       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 01:10:42.852145       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0924 01:10:42.852193       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0924 01:10:42.894455       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0924 01:10:42.894508       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 01:10:42.912058       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0924 01:10:42.912109       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0924 01:10:42.953982       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0924 01:10:42.954268       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 01:10:42.974165       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0924 01:10:42.974276       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0924 01:10:42.999085       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0924 01:10:42.999267       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 01:10:43.067100       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0924 01:10:43.067217       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0924 01:10:43.309997       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0924 01:10:43.310878       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0924 01:10:43.332218       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0924 01:10:43.332271       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0924 01:10:45.472715       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 24 01:23:51 no-preload-674057 kubelet[3454]: E0924 01:23:51.680146    3454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-w5j2x" podUID="57fd868f-ab5c-495a-869a-45e8f81f4014"
	Sep 24 01:23:54 no-preload-674057 kubelet[3454]: E0924 01:23:54.901992    3454 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727141034901331784,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:23:54 no-preload-674057 kubelet[3454]: E0924 01:23:54.902090    3454 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727141034901331784,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:24:04 no-preload-674057 kubelet[3454]: E0924 01:24:04.903946    3454 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727141044903657217,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:24:04 no-preload-674057 kubelet[3454]: E0924 01:24:04.903997    3454 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727141044903657217,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:24:05 no-preload-674057 kubelet[3454]: E0924 01:24:05.680225    3454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-w5j2x" podUID="57fd868f-ab5c-495a-869a-45e8f81f4014"
	Sep 24 01:24:14 no-preload-674057 kubelet[3454]: E0924 01:24:14.909139    3454 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727141054905682430,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:24:14 no-preload-674057 kubelet[3454]: E0924 01:24:14.910303    3454 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727141054905682430,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:24:20 no-preload-674057 kubelet[3454]: E0924 01:24:20.680781    3454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-w5j2x" podUID="57fd868f-ab5c-495a-869a-45e8f81f4014"
	Sep 24 01:24:24 no-preload-674057 kubelet[3454]: E0924 01:24:24.913328    3454 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727141064912901757,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:24:24 no-preload-674057 kubelet[3454]: E0924 01:24:24.913736    3454 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727141064912901757,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:24:34 no-preload-674057 kubelet[3454]: E0924 01:24:34.915543    3454 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727141074914871810,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:24:34 no-preload-674057 kubelet[3454]: E0924 01:24:34.915587    3454 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727141074914871810,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:24:35 no-preload-674057 kubelet[3454]: E0924 01:24:35.680115    3454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-w5j2x" podUID="57fd868f-ab5c-495a-869a-45e8f81f4014"
	Sep 24 01:24:44 no-preload-674057 kubelet[3454]: E0924 01:24:44.722234    3454 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 24 01:24:44 no-preload-674057 kubelet[3454]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 24 01:24:44 no-preload-674057 kubelet[3454]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 24 01:24:44 no-preload-674057 kubelet[3454]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 24 01:24:44 no-preload-674057 kubelet[3454]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 24 01:24:44 no-preload-674057 kubelet[3454]: E0924 01:24:44.917855    3454 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727141084917497025,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:24:44 no-preload-674057 kubelet[3454]: E0924 01:24:44.917884    3454 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727141084917497025,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:24:46 no-preload-674057 kubelet[3454]: E0924 01:24:46.680740    3454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-w5j2x" podUID="57fd868f-ab5c-495a-869a-45e8f81f4014"
	Sep 24 01:24:54 no-preload-674057 kubelet[3454]: E0924 01:24:54.920260    3454 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727141094919778151,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:24:54 no-preload-674057 kubelet[3454]: E0924 01:24:54.920282    3454 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727141094919778151,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 24 01:25:01 no-preload-674057 kubelet[3454]: E0924 01:25:01.679992    3454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-w5j2x" podUID="57fd868f-ab5c-495a-869a-45e8f81f4014"
	
	
	==> storage-provisioner [edf08e56311a79e955d8c3b3e5c0237e909241ae5ed6abafb9b223a0f00c867a] <==
	I0924 01:10:52.097886       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0924 01:10:52.110113       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0924 01:10:52.110199       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0924 01:10:52.145284       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0924 01:10:52.145438       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-674057_e6022092-b597-4237-8623-89f31e133c06!
	I0924 01:10:52.146662       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"23dbfa5e-f111-467a-8bd0-0b4f1c87cad7", APIVersion:"v1", ResourceVersion:"432", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-674057_e6022092-b597-4237-8623-89f31e133c06 became leader
	I0924 01:10:52.245594       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-674057_e6022092-b597-4237-8623-89f31e133c06!
	

                                                
                                                
-- /stdout --
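The kubelet log captured above cycles through three independent symptoms: the eviction manager cannot evaluate HasDedicatedImageFs because the ImageFsInfoResponse from CRI-O reports an image filesystem but no container filesystem stats, the iptables canary cannot create KUBE-KUBELET-CANARY because the ip6tables nat table is unavailable in the guest kernel, and metrics-server sits in ImagePullBackOff because its image points at fake.domain/registry.k8s.io/echoserver:1.4, which no registry serves. A rough way to probe each symptom directly against the no-preload-674057 profile is sketched below; the crictl/ip6tables tooling inside the minikube guest is an assumption on my part, not something the test itself exercises.
	# CRI view of image filesystem usage (what feeds the eviction manager's ImageFsInfo call)
	out/minikube-linux-amd64 ssh -p no-preload-674057 -- sudo crictl imagefsinfo
	# reproduce the canary failure: listing the ip6tables nat table errors out if the module is missing
	out/minikube-linux-amd64 ssh -p no-preload-674057 -- sudo ip6tables -t nat -L -n
	# confirm the ImagePullBackOff is only the unreachable fake.domain registry
	kubectl --context no-preload-674057 -n kube-system describe pod metrics-server-6867b74b74-w5j2x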
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-674057 -n no-preload-674057
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-674057 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-w5j2x
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-674057 describe pod metrics-server-6867b74b74-w5j2x
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-674057 describe pod metrics-server-6867b74b74-w5j2x: exit status 1 (63.068795ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-w5j2x" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-674057 describe pod metrics-server-6867b74b74-w5j2x: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (302.15s)
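One detail worth calling out in the post-mortem above: the pod list at helpers_test.go:261 is cluster-wide (-A) and finds metrics-server-6867b74b74-w5j2x, which the kubelet log attributes to kube-system, but the describe at helpers_test.go:277 passes no namespace, so kubectl looks in default and reports NotFound. Re-running the describe with the namespace added, as sketched below, would show whether the pod was really gone or merely looked up in the wrong place; this is an observation about the transcript, not something the test does.
	# the helper's call, minus a namespace, searches "default" and fails:
	kubectl --context no-preload-674057 describe pod metrics-server-6867b74b74-w5j2x
	# the same pod described in kube-system (where the kubelet log places it):
	kubectl --context no-preload-674057 -n kube-system describe pod metrics-server-6867b74b74-w5j2x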

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (168.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
	(previous warning repeated 107 more times)
E0924 01:23:38.361612   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/functional-666615/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.3:8443: connect: connection refused
[the preceding warning repeated 32 more times: every poll of the kubernetes-dashboard pod list at https://192.168.83.3:8443 was refused for the rest of the wait]
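Every poll above failed at the TCP layer (connection refused), which points at the kube-apiserver on 192.168.83.3:8443 never coming back after the stop/start rather than at a missing dashboard deployment. As an illustrative check that is not part of the recorded run (and assuming the libvirt network is reachable from the agent), the endpoint can be probed directly; a refused connection confirms the apiserver is down, while any HTTP response, even an auth error, would shift suspicion to the addon itself. The second command lists all CRI-O containers inside the VM, which should show whether a kube-apiserver container exists and in what state:

  curl -k https://192.168.83.3:8443/healthz
  out/minikube-linux-amd64 ssh -p old-k8s-version-171598 -- sudo crictl ps -a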
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-171598 -n old-k8s-version-171598
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-171598 -n old-k8s-version-171598: exit status 2 (242.217481ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-171598" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-171598 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-171598 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.877µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-171598 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
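For reference, the checks the harness gives up on here can be reproduced by hand once the profile's apiserver is reachable again; the following commands are illustrative only and reuse the namespace, label selector, deployment name, and expected image quoted in the output above:

  kubectl --context old-k8s-version-171598 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
  kubectl --context old-k8s-version-171598 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'

The first mirrors the 9m0s pod wait; the second prints the scraper image so it can be compared against the expected registry.k8s.io/echoserver:1.4.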
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-171598 -n old-k8s-version-171598
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-171598 -n old-k8s-version-171598: exit status 2 (249.398723ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-171598 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-171598 logs -n 25: (1.730173931s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p old-k8s-version-171598                              | old-k8s-version-171598       | jenkins | v1.34.0 | 24 Sep 24 00:54 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-075175                              | stopped-upgrade-075175       | jenkins | v1.34.0 | 24 Sep 24 00:54 UTC | 24 Sep 24 00:55 UTC |
	| start   | -p no-preload-674057                                   | no-preload-674057            | jenkins | v1.34.0 | 24 Sep 24 00:55 UTC | 24 Sep 24 00:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-619300                           | kubernetes-upgrade-619300    | jenkins | v1.34.0 | 24 Sep 24 00:55 UTC | 24 Sep 24 00:55 UTC |
	| start   | -p embed-certs-650507                                  | embed-certs-650507           | jenkins | v1.34.0 | 24 Sep 24 00:55 UTC | 24 Sep 24 00:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-811247                              | cert-expiration-811247       | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC | 24 Sep 24 00:56 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-674057             | no-preload-674057            | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC | 24 Sep 24 00:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-674057                                   | no-preload-674057            | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-811247                              | cert-expiration-811247       | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC | 24 Sep 24 00:56 UTC |
	| delete  | -p                                                     | disable-driver-mounts-319683 | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC | 24 Sep 24 00:56 UTC |
	|         | disable-driver-mounts-319683                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-465341 | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC | 24 Sep 24 00:57 UTC |
	|         | default-k8s-diff-port-465341                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-650507            | embed-certs-650507           | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC | 24 Sep 24 00:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-650507                                  | embed-certs-650507           | jenkins | v1.34.0 | 24 Sep 24 00:56 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-465341  | default-k8s-diff-port-465341 | jenkins | v1.34.0 | 24 Sep 24 00:57 UTC | 24 Sep 24 00:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-465341 | jenkins | v1.34.0 | 24 Sep 24 00:57 UTC |                     |
	|         | default-k8s-diff-port-465341                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-674057                  | no-preload-674057            | jenkins | v1.34.0 | 24 Sep 24 00:58 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-674057                                   | no-preload-674057            | jenkins | v1.34.0 | 24 Sep 24 00:58 UTC | 24 Sep 24 01:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-650507                 | embed-certs-650507           | jenkins | v1.34.0 | 24 Sep 24 00:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-171598        | old-k8s-version-171598       | jenkins | v1.34.0 | 24 Sep 24 00:59 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-650507                                  | embed-certs-650507           | jenkins | v1.34.0 | 24 Sep 24 00:59 UTC | 24 Sep 24 01:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-465341       | default-k8s-diff-port-465341 | jenkins | v1.34.0 | 24 Sep 24 00:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-465341 | jenkins | v1.34.0 | 24 Sep 24 01:00 UTC | 24 Sep 24 01:08 UTC |
	|         | default-k8s-diff-port-465341                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-171598                              | old-k8s-version-171598       | jenkins | v1.34.0 | 24 Sep 24 01:00 UTC | 24 Sep 24 01:00 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-171598             | old-k8s-version-171598       | jenkins | v1.34.0 | 24 Sep 24 01:00 UTC | 24 Sep 24 01:00 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-171598                              | old-k8s-version-171598       | jenkins | v1.34.0 | 24 Sep 24 01:00 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 01:00:40
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 01:00:40.983605   61989 out.go:345] Setting OutFile to fd 1 ...
	I0924 01:00:40.983716   61989 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 01:00:40.983722   61989 out.go:358] Setting ErrFile to fd 2...
	I0924 01:00:40.983728   61989 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 01:00:40.983918   61989 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7623/.minikube/bin
	I0924 01:00:40.984500   61989 out.go:352] Setting JSON to false
	I0924 01:00:40.985412   61989 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6185,"bootTime":1727133456,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0924 01:00:40.985513   61989 start.go:139] virtualization: kvm guest
	I0924 01:00:40.987848   61989 out.go:177] * [old-k8s-version-171598] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0924 01:00:40.989366   61989 out.go:177]   - MINIKUBE_LOCATION=19696
	I0924 01:00:40.989467   61989 notify.go:220] Checking for updates...
	I0924 01:00:40.992462   61989 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 01:00:40.994144   61989 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 01:00:40.995782   61989 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7623/.minikube
	I0924 01:00:40.997503   61989 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0924 01:00:40.999038   61989 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 01:00:41.000959   61989 config.go:182] Loaded profile config "old-k8s-version-171598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0924 01:00:41.001315   61989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:00:41.001388   61989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:00:41.017304   61989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41055
	I0924 01:00:41.017751   61989 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:00:41.018320   61989 main.go:141] libmachine: Using API Version  1
	I0924 01:00:41.018355   61989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:00:41.018708   61989 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:00:41.018964   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:00:41.021075   61989 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0924 01:00:41.022764   61989 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 01:00:41.023156   61989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:00:41.023204   61989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:00:41.038764   61989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40545
	I0924 01:00:41.039238   61989 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:00:41.039828   61989 main.go:141] libmachine: Using API Version  1
	I0924 01:00:41.039856   61989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:00:41.040272   61989 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:00:41.040569   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:00:41.078622   61989 out.go:177] * Using the kvm2 driver based on existing profile
	I0924 01:00:41.079930   61989 start.go:297] selected driver: kvm2
	I0924 01:00:41.079945   61989 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-171598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-171598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h
0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 01:00:41.080076   61989 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 01:00:41.080841   61989 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 01:00:41.080927   61989 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19696-7623/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0924 01:00:41.096851   61989 install.go:137] /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0924 01:00:41.097306   61989 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 01:00:41.097345   61989 cni.go:84] Creating CNI manager for ""
	I0924 01:00:41.097410   61989 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:00:41.097465   61989 start.go:340] cluster config:
	{Name:old-k8s-version-171598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-171598 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 01:00:41.097610   61989 iso.go:125] acquiring lock: {Name:mk8a983e49e920eac32e0f79c34143c3d478115e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 01:00:41.099797   61989 out.go:177] * Starting "old-k8s-version-171598" primary control-plane node in "old-k8s-version-171598" cluster
	I0924 01:00:39.376584   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:00:41.101644   61989 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0924 01:00:41.101691   61989 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0924 01:00:41.101704   61989 cache.go:56] Caching tarball of preloaded images
	I0924 01:00:41.101801   61989 preload.go:172] Found /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0924 01:00:41.101816   61989 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0924 01:00:41.101922   61989 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/config.json ...
	I0924 01:00:41.102126   61989 start.go:360] acquireMachinesLock for old-k8s-version-171598: {Name:mkdd0eb053efc5a47fd01c8410d7f603dccb8d0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 01:00:45.456606   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	[the same "no route to host" dial error from process 61070 repeated 36 more times, at roughly 3-6 second intervals, from 01:00:48 through 01:03:30]
	I0924 01:03:33.264655   61070 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.161:22: connect: no route to host
	I0924 01:03:36.269218   61323 start.go:364] duration metric: took 4m25.932369998s to acquireMachinesLock for "embed-certs-650507"
	I0924 01:03:36.269290   61323 start.go:96] Skipping create...Using existing machine configuration
	I0924 01:03:36.269298   61323 fix.go:54] fixHost starting: 
	I0924 01:03:36.269661   61323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:03:36.269714   61323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:03:36.285429   61323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45085
	I0924 01:03:36.285943   61323 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:03:36.286516   61323 main.go:141] libmachine: Using API Version  1
	I0924 01:03:36.286557   61323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:03:36.286885   61323 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:03:36.287078   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:03:36.287213   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetState
	I0924 01:03:36.288895   61323 fix.go:112] recreateIfNeeded on embed-certs-650507: state=Stopped err=<nil>
	I0924 01:03:36.288917   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	W0924 01:03:36.289113   61323 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 01:03:36.291435   61323 out.go:177] * Restarting existing kvm2 VM for "embed-certs-650507" ...
	I0924 01:03:36.266390   61070 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 01:03:36.266435   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetMachineName
	I0924 01:03:36.266788   61070 buildroot.go:166] provisioning hostname "no-preload-674057"
	I0924 01:03:36.266816   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetMachineName
	I0924 01:03:36.267022   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:03:36.269105   61070 machine.go:96] duration metric: took 4m37.426687547s to provisionDockerMachine
	I0924 01:03:36.269142   61070 fix.go:56] duration metric: took 4m37.448766856s for fixHost
	I0924 01:03:36.269148   61070 start.go:83] releasing machines lock for "no-preload-674057", held for 4m37.448847609s
	W0924 01:03:36.269167   61070 start.go:714] error starting host: provision: host is not running
	W0924 01:03:36.269264   61070 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0924 01:03:36.269274   61070 start.go:729] Will try again in 5 seconds ...
	I0924 01:03:36.293006   61323 main.go:141] libmachine: (embed-certs-650507) Calling .Start
	I0924 01:03:36.293199   61323 main.go:141] libmachine: (embed-certs-650507) Ensuring networks are active...
	I0924 01:03:36.294032   61323 main.go:141] libmachine: (embed-certs-650507) Ensuring network default is active
	I0924 01:03:36.294359   61323 main.go:141] libmachine: (embed-certs-650507) Ensuring network mk-embed-certs-650507 is active
	I0924 01:03:36.294718   61323 main.go:141] libmachine: (embed-certs-650507) Getting domain xml...
	I0924 01:03:36.295407   61323 main.go:141] libmachine: (embed-certs-650507) Creating domain...
	I0924 01:03:37.516049   61323 main.go:141] libmachine: (embed-certs-650507) Waiting to get IP...
	I0924 01:03:37.516959   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:37.517374   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:37.517443   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:37.517352   62594 retry.go:31] will retry after 278.072635ms: waiting for machine to come up
	I0924 01:03:37.796796   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:37.797276   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:37.797301   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:37.797242   62594 retry.go:31] will retry after 387.413297ms: waiting for machine to come up
	I0924 01:03:38.185869   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:38.186239   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:38.186258   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:38.186193   62594 retry.go:31] will retry after 363.798568ms: waiting for machine to come up
	I0924 01:03:38.551772   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:38.552181   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:38.552221   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:38.552122   62594 retry.go:31] will retry after 392.798012ms: waiting for machine to come up
	I0924 01:03:38.946523   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:38.947069   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:38.947097   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:38.947018   62594 retry.go:31] will retry after 541.413772ms: waiting for machine to come up
	I0924 01:03:39.489873   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:39.490278   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:39.490307   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:39.490226   62594 retry.go:31] will retry after 804.62107ms: waiting for machine to come up
	I0924 01:03:41.271024   61070 start.go:360] acquireMachinesLock for no-preload-674057: {Name:mkdd0eb053efc5a47fd01c8410d7f603dccb8d0c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 01:03:40.296290   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:40.296775   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:40.296806   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:40.296726   62594 retry.go:31] will retry after 882.018637ms: waiting for machine to come up
	I0924 01:03:41.180799   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:41.181242   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:41.181263   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:41.181197   62594 retry.go:31] will retry after 961.194045ms: waiting for machine to come up
	I0924 01:03:42.143878   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:42.144354   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:42.144379   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:42.144270   62594 retry.go:31] will retry after 1.647837023s: waiting for machine to come up
	I0924 01:03:43.793458   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:43.793892   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:43.793933   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:43.793873   62594 retry.go:31] will retry after 1.751902059s: waiting for machine to come up
	I0924 01:03:45.547905   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:45.548356   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:45.548388   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:45.548313   62594 retry.go:31] will retry after 2.380106471s: waiting for machine to come up
	I0924 01:03:47.931021   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:47.931513   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:47.931537   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:47.931456   62594 retry.go:31] will retry after 2.395516641s: waiting for machine to come up
	I0924 01:03:50.328214   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:50.328766   61323 main.go:141] libmachine: (embed-certs-650507) DBG | unable to find current IP address of domain embed-certs-650507 in network mk-embed-certs-650507
	I0924 01:03:50.328791   61323 main.go:141] libmachine: (embed-certs-650507) DBG | I0924 01:03:50.328729   62594 retry.go:31] will retry after 4.41219579s: waiting for machine to come up
	I0924 01:03:54.745159   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.745572   61323 main.go:141] libmachine: (embed-certs-650507) Found IP for machine: 192.168.39.104
	I0924 01:03:54.745606   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has current primary IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.745615   61323 main.go:141] libmachine: (embed-certs-650507) Reserving static IP address...
	I0924 01:03:54.746020   61323 main.go:141] libmachine: (embed-certs-650507) Reserved static IP address: 192.168.39.104
	I0924 01:03:54.746042   61323 main.go:141] libmachine: (embed-certs-650507) Waiting for SSH to be available...
	I0924 01:03:54.746067   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "embed-certs-650507", mac: "52:54:00:46:07:2d", ip: "192.168.39.104"} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:54.746134   61323 main.go:141] libmachine: (embed-certs-650507) DBG | skip adding static IP to network mk-embed-certs-650507 - found existing host DHCP lease matching {name: "embed-certs-650507", mac: "52:54:00:46:07:2d", ip: "192.168.39.104"}
	I0924 01:03:54.746159   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Getting to WaitForSSH function...
	I0924 01:03:54.748464   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.748871   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:54.748906   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.749083   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Using SSH client type: external
	I0924 01:03:54.749118   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Using SSH private key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa (-rw-------)
	I0924 01:03:54.749153   61323 main.go:141] libmachine: (embed-certs-650507) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 01:03:54.749165   61323 main.go:141] libmachine: (embed-certs-650507) DBG | About to run SSH command:
	I0924 01:03:54.749177   61323 main.go:141] libmachine: (embed-certs-650507) DBG | exit 0
	I0924 01:03:54.872532   61323 main.go:141] libmachine: (embed-certs-650507) DBG | SSH cmd err, output: <nil>: 
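
The probe above verifies the guest is reachable by running "exit 0" through an external ssh client with the options printed in the log. The sketch below is a minimal Go version of such a probe; the helper name, the 2-second retry interval, and the hard-coded key path are illustrative assumptions, not minikube's actual code.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // sshReachable runs `ssh ... exit 0` against the guest and reports whether
    // the command exited cleanly, mirroring the WaitForSSH probe in the log.
    func sshReachable(ip, keyPath string) bool {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            "docker@" + ip,
            "exit 0",
        }
        return exec.Command("ssh", args...).Run() == nil
    }

    func main() {
        key := "/home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa"
        for !sshReachable("192.168.39.104", key) {
            time.Sleep(2 * time.Second) // keep probing until the guest answers
        }
        fmt.Println("SSH is available")
    }
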
	I0924 01:03:54.872869   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetConfigRaw
	I0924 01:03:54.873480   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetIP
	I0924 01:03:54.876545   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.876922   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:54.876953   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.877204   61323 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/embed-certs-650507/config.json ...
	I0924 01:03:54.877443   61323 machine.go:93] provisionDockerMachine start ...
	I0924 01:03:54.877467   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:03:54.877683   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:54.879873   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.880200   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:54.880221   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.880375   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:03:54.880546   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:54.880681   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:54.880866   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:03:54.881002   61323 main.go:141] libmachine: Using SSH client type: native
	I0924 01:03:54.881194   61323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0924 01:03:54.881207   61323 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 01:03:54.984605   61323 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 01:03:54.984636   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetMachineName
	I0924 01:03:54.984922   61323 buildroot.go:166] provisioning hostname "embed-certs-650507"
	I0924 01:03:54.984948   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetMachineName
	I0924 01:03:54.985185   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:54.988284   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.988699   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:54.988725   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:54.988857   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:03:54.989069   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:54.989344   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:54.989529   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:03:54.989731   61323 main.go:141] libmachine: Using SSH client type: native
	I0924 01:03:54.989899   61323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0924 01:03:54.989913   61323 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-650507 && echo "embed-certs-650507" | sudo tee /etc/hostname
	I0924 01:03:55.106214   61323 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-650507
	
	I0924 01:03:55.106273   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:55.109000   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.109310   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.109334   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.109498   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:03:55.109646   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.109839   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.109989   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:03:55.110123   61323 main.go:141] libmachine: Using SSH client type: native
	I0924 01:03:55.110303   61323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0924 01:03:55.110318   61323 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-650507' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-650507/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-650507' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 01:03:55.220699   61323 main.go:141] libmachine: SSH cmd err, output: <nil>: 
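
The SSH command above makes the 127.0.1.1 entry in /etc/hosts point at the new hostname, replacing an existing line or appending one. A simplified pure-Go equivalent of that idempotent edit is sketched below; the function name is an assumption, and the sketch always rewrites the file rather than first grepping for the hostname as the shell does.

    package main

    import (
        "fmt"
        "os"
        "regexp"
        "strings"
    )

    // ensureHostsEntry rewrites the 127.0.1.1 line of an /etc/hosts-style file
    // so that it maps to hostname, appending the line if none exists.
    func ensureHostsEntry(path, hostname string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        loopback := regexp.MustCompile(`^127\.0\.1\.1\s`)
        lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
        replaced := false
        for i, line := range lines {
            if loopback.MatchString(line) {
                lines[i] = "127.0.1.1 " + hostname
                replaced = true
            }
        }
        if !replaced {
            lines = append(lines, "127.0.1.1 "+hostname)
        }
        return os.WriteFile(path, []byte(strings.Join(lines, "\n")+"\n"), 0644)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "embed-certs-650507"); err != nil {
            fmt.Println("hosts update failed:", err)
        }
    }
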
	I0924 01:03:55.220738   61323 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19696-7623/.minikube CaCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19696-7623/.minikube}
	I0924 01:03:55.220755   61323 buildroot.go:174] setting up certificates
	I0924 01:03:55.220763   61323 provision.go:84] configureAuth start
	I0924 01:03:55.220771   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetMachineName
	I0924 01:03:55.221112   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetIP
	I0924 01:03:55.224166   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.224603   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.224634   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.224839   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:55.226847   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.227167   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.227194   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.227308   61323 provision.go:143] copyHostCerts
	I0924 01:03:55.227386   61323 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem, removing ...
	I0924 01:03:55.227409   61323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0924 01:03:55.227490   61323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem (1082 bytes)
	I0924 01:03:55.227641   61323 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem, removing ...
	I0924 01:03:55.227653   61323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0924 01:03:55.227695   61323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem (1123 bytes)
	I0924 01:03:55.227781   61323 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem, removing ...
	I0924 01:03:55.227791   61323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0924 01:03:55.227826   61323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem (1679 bytes)
	I0924 01:03:55.227909   61323 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem org=jenkins.embed-certs-650507 san=[127.0.0.1 192.168.39.104 embed-certs-650507 localhost minikube]
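
provision.go generates server.pem signed by the shared minikube CA and carrying the SANs listed above (127.0.0.1, 192.168.39.104, embed-certs-650507, localhost, minikube). The sketch below builds a certificate with those SANs using crypto/x509; to stay short it is self-signed, whereas the real flow signs with certs/ca-key.pem, and the organization and validity period are assumptions.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Key pair plus a certificate template carrying the SANs seen in the log.
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-650507"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"embed-certs-650507", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.104")},
        }
        // Self-signed here for brevity; minikube signs with its CA key instead.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        out, err := os.Create("server.pem")
        if err != nil {
            panic(err)
        }
        defer out.Close()
        pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
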
	I0924 01:03:55.917061   61699 start.go:364] duration metric: took 3m46.693519233s to acquireMachinesLock for "default-k8s-diff-port-465341"
	I0924 01:03:55.917135   61699 start.go:96] Skipping create...Using existing machine configuration
	I0924 01:03:55.917144   61699 fix.go:54] fixHost starting: 
	I0924 01:03:55.917553   61699 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:03:55.917606   61699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:03:55.937566   61699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37613
	I0924 01:03:55.937971   61699 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:03:55.938529   61699 main.go:141] libmachine: Using API Version  1
	I0924 01:03:55.938556   61699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:03:55.938923   61699 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:03:55.939182   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:03:55.939365   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetState
	I0924 01:03:55.941155   61699 fix.go:112] recreateIfNeeded on default-k8s-diff-port-465341: state=Stopped err=<nil>
	I0924 01:03:55.941197   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	W0924 01:03:55.941417   61699 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 01:03:55.943640   61699 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-465341" ...
	I0924 01:03:55.309866   61323 provision.go:177] copyRemoteCerts
	I0924 01:03:55.309928   61323 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 01:03:55.309955   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:55.312946   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.313365   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.313388   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.313638   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:03:55.313889   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.314062   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:03:55.314206   61323 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa Username:docker}
	I0924 01:03:55.394427   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0924 01:03:55.420595   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0924 01:03:55.444377   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 01:03:55.467261   61323 provision.go:87] duration metric: took 246.485242ms to configureAuth
	I0924 01:03:55.467302   61323 buildroot.go:189] setting minikube options for container-runtime
	I0924 01:03:55.467483   61323 config.go:182] Loaded profile config "embed-certs-650507": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 01:03:55.467552   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:55.470146   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.470539   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.470572   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.470719   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:03:55.470961   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.471101   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.471299   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:03:55.471450   61323 main.go:141] libmachine: Using SSH client type: native
	I0924 01:03:55.471653   61323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0924 01:03:55.471676   61323 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 01:03:55.688189   61323 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 01:03:55.688218   61323 machine.go:96] duration metric: took 810.761675ms to provisionDockerMachine
	I0924 01:03:55.688230   61323 start.go:293] postStartSetup for "embed-certs-650507" (driver="kvm2")
	I0924 01:03:55.688244   61323 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 01:03:55.688266   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:03:55.688659   61323 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 01:03:55.688690   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:55.691375   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.691761   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.691791   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.691881   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:03:55.692105   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.692309   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:03:55.692453   61323 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa Username:docker}
	I0924 01:03:55.775412   61323 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 01:03:55.779423   61323 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 01:03:55.779448   61323 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/addons for local assets ...
	I0924 01:03:55.779536   61323 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/files for local assets ...
	I0924 01:03:55.779629   61323 filesync.go:149] local asset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> 147932.pem in /etc/ssl/certs
	I0924 01:03:55.779742   61323 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 01:03:55.788717   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /etc/ssl/certs/147932.pem (1708 bytes)
	I0924 01:03:55.811673   61323 start.go:296] duration metric: took 123.428914ms for postStartSetup
	I0924 01:03:55.811717   61323 fix.go:56] duration metric: took 19.542419045s for fixHost
	I0924 01:03:55.811743   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:55.814745   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.815034   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.815062   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.815247   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:03:55.815449   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.815634   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.815851   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:03:55.816012   61323 main.go:141] libmachine: Using SSH client type: native
	I0924 01:03:55.816168   61323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0924 01:03:55.816178   61323 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 01:03:55.916845   61323 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727139835.894204557
	
	I0924 01:03:55.916883   61323 fix.go:216] guest clock: 1727139835.894204557
	I0924 01:03:55.916896   61323 fix.go:229] Guest: 2024-09-24 01:03:55.894204557 +0000 UTC Remote: 2024-09-24 01:03:55.811721448 +0000 UTC m=+285.612741728 (delta=82.483109ms)
	I0924 01:03:55.916935   61323 fix.go:200] guest clock delta is within tolerance: 82.483109ms
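
fix.go compares the guest's "date +%s.%N" output against the host clock and accepts the machine when the difference is within a tolerance (about 82ms in this run). A small sketch of that comparison follows; the 2-second tolerance and function name are assumptions, and float parsing loses sub-microsecond precision, which is fine for this purpose.

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "time"
    )

    // clockDelta parses the guest's `date +%s.%N` output and returns how far
    // the guest clock is from the local clock.
    func clockDelta(guestOutput string) (time.Duration, error) {
        secs, err := strconv.ParseFloat(guestOutput, 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return time.Since(guest), nil
    }

    func main() {
        const tolerance = 2 * time.Second // assumed threshold, not minikube's exact value
        delta, err := clockDelta("1727139835.894204557")
        if err != nil {
            panic(err)
        }
        if math.Abs(float64(delta)) <= float64(tolerance) {
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        } else {
            fmt.Printf("guest clock delta %v exceeds tolerance, resync recommended\n", delta)
        }
    }
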
	I0924 01:03:55.916945   61323 start.go:83] releasing machines lock for "embed-certs-650507", held for 19.6476761s
	I0924 01:03:55.916990   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:03:55.917314   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetIP
	I0924 01:03:55.920105   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.920550   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.920583   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.920832   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:03:55.921327   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:03:55.921510   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:03:55.921578   61323 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 01:03:55.921634   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:55.921747   61323 ssh_runner.go:195] Run: cat /version.json
	I0924 01:03:55.921771   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:03:55.924238   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.924430   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.924717   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.924741   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.924775   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:55.924792   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:55.924953   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:03:55.925061   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:03:55.925153   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.925277   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:03:55.925360   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:03:55.925439   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:03:55.925582   61323 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa Username:docker}
	I0924 01:03:55.925626   61323 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa Username:docker}
	I0924 01:03:56.005229   61323 ssh_runner.go:195] Run: systemctl --version
	I0924 01:03:56.046189   61323 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 01:03:56.187701   61323 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 01:03:56.193313   61323 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 01:03:56.193379   61323 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 01:03:56.209278   61323 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 01:03:56.209298   61323 start.go:495] detecting cgroup driver to use...
	I0924 01:03:56.209363   61323 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 01:03:56.226995   61323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 01:03:56.241102   61323 docker.go:217] disabling cri-docker service (if available) ...
	I0924 01:03:56.241160   61323 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 01:03:56.255002   61323 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 01:03:56.269805   61323 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 01:03:56.387382   61323 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 01:03:56.545138   61323 docker.go:233] disabling docker service ...
	I0924 01:03:56.545220   61323 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 01:03:56.559017   61323 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 01:03:56.571939   61323 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 01:03:56.694139   61323 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 01:03:56.811253   61323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 01:03:56.825480   61323 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 01:03:56.842777   61323 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 01:03:56.842830   61323 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:03:56.852387   61323 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 01:03:56.852447   61323 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:03:56.862702   61323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:03:56.872790   61323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:03:56.882864   61323 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 01:03:56.893029   61323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:03:56.903314   61323 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:03:56.923491   61323 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:03:56.933424   61323 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 01:03:56.944496   61323 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 01:03:56.944561   61323 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 01:03:56.957077   61323 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 01:03:56.968602   61323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:03:57.080955   61323 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 01:03:57.179826   61323 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 01:03:57.179900   61323 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 01:03:57.184652   61323 start.go:563] Will wait 60s for crictl version
	I0924 01:03:57.184716   61323 ssh_runner.go:195] Run: which crictl
	I0924 01:03:57.190300   61323 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 01:03:57.239310   61323 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 01:03:57.239371   61323 ssh_runner.go:195] Run: crio --version
	I0924 01:03:57.266833   61323 ssh_runner.go:195] Run: crio --version
	I0924 01:03:57.301876   61323 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
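
After reconfiguring and restarting CRI-O, the log waits up to 60s for /var/run/crio/crio.sock to appear and then for crictl to respond. A minimal sketch of that socket wait; the 500ms poll interval and function name are assumptions.

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls for path until it exists or the timeout expires,
    // mirroring the "Will wait 60s for socket path" step above.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("CRI socket is ready")
    }
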
	I0924 01:03:55.945290   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .Start
	I0924 01:03:55.945498   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Ensuring networks are active...
	I0924 01:03:55.946346   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Ensuring network default is active
	I0924 01:03:55.946726   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Ensuring network mk-default-k8s-diff-port-465341 is active
	I0924 01:03:55.947152   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Getting domain xml...
	I0924 01:03:55.947872   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Creating domain...
	I0924 01:03:57.236194   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting to get IP...
	I0924 01:03:57.237037   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:03:57.237445   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:03:57.237497   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:03:57.237413   62713 retry.go:31] will retry after 286.244795ms: waiting for machine to come up
	I0924 01:03:57.525009   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:03:57.525595   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:03:57.525621   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:03:57.525548   62713 retry.go:31] will retry after 273.807213ms: waiting for machine to come up
	I0924 01:03:57.801217   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:03:57.801734   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:03:57.801756   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:03:57.801701   62713 retry.go:31] will retry after 371.291567ms: waiting for machine to come up
	I0924 01:03:58.174283   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:03:58.174746   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:03:58.174781   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:03:58.174692   62713 retry.go:31] will retry after 595.157579ms: waiting for machine to come up
	I0924 01:03:58.771428   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:03:58.771900   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:03:58.771925   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:03:58.771862   62713 retry.go:31] will retry after 734.305784ms: waiting for machine to come up
	I0924 01:03:57.303135   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetIP
	I0924 01:03:57.306110   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:57.306598   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:03:57.306624   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:03:57.306783   61323 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0924 01:03:57.310829   61323 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:03:57.322605   61323 kubeadm.go:883] updating cluster {Name:embed-certs-650507 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:embed-certs-650507 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 01:03:57.322715   61323 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 01:03:57.322761   61323 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 01:03:57.358040   61323 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0924 01:03:57.358104   61323 ssh_runner.go:195] Run: which lz4
	I0924 01:03:57.361948   61323 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 01:03:57.365911   61323 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 01:03:57.365950   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0924 01:03:58.651636   61323 crio.go:462] duration metric: took 1.289721413s to copy over tarball
	I0924 01:03:58.651708   61323 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
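
The preload handling above stats /preloaded.tar.lz4, copies the cached tarball over only because that stat fails, and then extracts it with tar -I lz4. The sketch below shows the same check-then-copy-then-extract flow as a local stand-in for the scp step; the paths come from the log, the local copy and error handling are simplifications, and tar plus lz4 must be installed.

    package main

    import (
        "fmt"
        "io"
        "os"
        "os/exec"
        "time"
    )

    // copyIfMissing copies src to dst only when dst does not already exist,
    // mirroring the stat-then-transfer decision in the log.
    func copyIfMissing(src, dst string) error {
        if _, err := os.Stat(dst); err == nil {
            return nil // already present, skip the transfer
        }
        in, err := os.Open(src)
        if err != nil {
            return err
        }
        defer in.Close()
        out, err := os.Create(dst)
        if err != nil {
            return err
        }
        defer out.Close()
        _, err = io.Copy(out, in)
        return err
    }

    func main() {
        src := "/home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4"
        if err := copyIfMissing(src, "/preloaded.tar.lz4"); err != nil {
            panic(err)
        }
        start := time.Now()
        // Same extraction command as the log; needs tar and lz4 on the host.
        cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        if err := cmd.Run(); err != nil {
            panic(err)
        }
        fmt.Printf("extraction took %v\n", time.Since(start))
    }
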
	I0924 01:03:59.507803   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:03:59.508308   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:03:59.508356   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:03:59.508237   62713 retry.go:31] will retry after 875.394603ms: waiting for machine to come up
	I0924 01:04:00.385279   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:00.385713   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:04:00.385748   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:04:00.385655   62713 retry.go:31] will retry after 885.980109ms: waiting for machine to come up
	I0924 01:04:01.273114   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:01.273545   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:04:01.273590   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:04:01.273535   62713 retry.go:31] will retry after 935.451975ms: waiting for machine to come up
	I0924 01:04:02.210920   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:02.211399   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:04:02.211423   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:04:02.211331   62713 retry.go:31] will retry after 1.254573538s: waiting for machine to come up
	I0924 01:04:03.467027   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:03.467593   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:04:03.467626   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:04:03.467488   62713 retry.go:31] will retry after 2.044247818s: waiting for machine to come up
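
The retry lines above come from minikube's backoff helper: each failed DHCP lease lookup is retried after a slightly longer, jittered delay (286ms, 273ms, 371ms, ... 2.04s). A minimal sketch of that kind of loop around a probe function; the growth factor, jitter, attempt limit and the fake probe are assumptions for illustration.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff calls probe until it succeeds or attempts run out,
    // sleeping a little longer (plus jitter) between tries, like the
    // "will retry after ..." lines in the log.
    func retryWithBackoff(probe func() error, attempts int, initial time.Duration) error {
        delay := initial
        for i := 0; i < attempts; i++ {
            if err := probe(); err == nil {
                return nil
            }
            jitter := time.Duration(rand.Int63n(int64(delay) / 2))
            time.Sleep(delay + jitter)
            delay = delay * 3 / 2 // grow the wait between attempts
        }
        return errors.New("gave up waiting for the machine to come up")
    }

    func main() {
        tries := 0
        lookupIP := func() error { // stand-in for the libvirt DHCP lease lookup
            tries++
            if tries < 4 {
                return errors.New("no IP yet")
            }
            return nil
        }
        if err := retryWithBackoff(lookupIP, 10, 300*time.Millisecond); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("machine has an IP after", tries, "attempts")
    }
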
	I0924 01:04:00.805580   61323 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.153837858s)
	I0924 01:04:00.805608   61323 crio.go:469] duration metric: took 2.153947595s to extract the tarball
	I0924 01:04:00.805617   61323 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0924 01:04:00.846074   61323 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 01:04:00.895803   61323 crio.go:514] all images are preloaded for cri-o runtime.
	I0924 01:04:00.895833   61323 cache_images.go:84] Images are preloaded, skipping loading
	I0924 01:04:00.895842   61323 kubeadm.go:934] updating node { 192.168.39.104 8443 v1.31.1 crio true true} ...
	I0924 01:04:00.895966   61323 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-650507 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-650507 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 01:04:00.896041   61323 ssh_runner.go:195] Run: crio config
	I0924 01:04:00.941958   61323 cni.go:84] Creating CNI manager for ""
	I0924 01:04:00.941985   61323 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:04:00.941998   61323 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 01:04:00.942029   61323 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.104 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-650507 NodeName:embed-certs-650507 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 01:04:00.942202   61323 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.104
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-650507"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.104
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.104"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 01:04:00.942292   61323 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 01:04:00.952748   61323 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 01:04:00.952853   61323 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 01:04:00.962984   61323 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0924 01:04:00.980030   61323 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 01:04:01.001571   61323 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0924 01:04:01.018760   61323 ssh_runner.go:195] Run: grep 192.168.39.104	control-plane.minikube.internal$ /etc/hosts
	I0924 01:04:01.022770   61323 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.104	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:04:01.034816   61323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:04:01.157888   61323 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 01:04:01.175883   61323 certs.go:68] Setting up /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/embed-certs-650507 for IP: 192.168.39.104
	I0924 01:04:01.175911   61323 certs.go:194] generating shared ca certs ...
	I0924 01:04:01.175937   61323 certs.go:226] acquiring lock for ca certs: {Name:mk91de42c259e96c17f08dd58c70c00821bd0735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:04:01.176134   61323 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key
	I0924 01:04:01.176198   61323 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key
	I0924 01:04:01.176211   61323 certs.go:256] generating profile certs ...
	I0924 01:04:01.176324   61323 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/embed-certs-650507/client.key
	I0924 01:04:01.176441   61323 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/embed-certs-650507/apiserver.key.86682f38
	I0924 01:04:01.176515   61323 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/embed-certs-650507/proxy-client.key
	I0924 01:04:01.176640   61323 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem (1338 bytes)
	W0924 01:04:01.176669   61323 certs.go:480] ignoring /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793_empty.pem, impossibly tiny 0 bytes
	I0924 01:04:01.176678   61323 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem (1675 bytes)
	I0924 01:04:01.176713   61323 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem (1082 bytes)
	I0924 01:04:01.176749   61323 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem (1123 bytes)
	I0924 01:04:01.176778   61323 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem (1679 bytes)
	I0924 01:04:01.176987   61323 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem (1708 bytes)
	I0924 01:04:01.177918   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 01:04:01.221682   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 01:04:01.266005   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 01:04:01.299467   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0924 01:04:01.324598   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/embed-certs-650507/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0924 01:04:01.349526   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/embed-certs-650507/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 01:04:01.385589   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/embed-certs-650507/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 01:04:01.409713   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/embed-certs-650507/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0924 01:04:01.433745   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem --> /usr/share/ca-certificates/14793.pem (1338 bytes)
	I0924 01:04:01.457493   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /usr/share/ca-certificates/147932.pem (1708 bytes)
	I0924 01:04:01.482197   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 01:04:01.505740   61323 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 01:04:01.524029   61323 ssh_runner.go:195] Run: openssl version
	I0924 01:04:01.530147   61323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 01:04:01.541117   61323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:01.545823   61323 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:01.545894   61323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:01.551638   61323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 01:04:01.562373   61323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14793.pem && ln -fs /usr/share/ca-certificates/14793.pem /etc/ssl/certs/14793.pem"
	I0924 01:04:01.573502   61323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14793.pem
	I0924 01:04:01.578561   61323 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 23:55 /usr/share/ca-certificates/14793.pem
	I0924 01:04:01.578634   61323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14793.pem
	I0924 01:04:01.584415   61323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14793.pem /etc/ssl/certs/51391683.0"
	I0924 01:04:01.595312   61323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147932.pem && ln -fs /usr/share/ca-certificates/147932.pem /etc/ssl/certs/147932.pem"
	I0924 01:04:01.606503   61323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147932.pem
	I0924 01:04:01.611530   61323 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 23:55 /usr/share/ca-certificates/147932.pem
	I0924 01:04:01.611602   61323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147932.pem
	I0924 01:04:01.618484   61323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147932.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 01:04:01.629332   61323 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 01:04:01.634238   61323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 01:04:01.640266   61323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 01:04:01.646306   61323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 01:04:01.652510   61323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 01:04:01.658237   61323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 01:04:01.663962   61323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
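
The six openssl invocations above each pass -checkend 86400, which exits 0 only if the certificate will still be valid 24 hours from now and exits non-zero otherwise. Below is a minimal Go sketch of an equivalent expiry check; it is an illustration under that assumption, not minikube's actual code, and the path argument is whatever PEM certificate you point it at.

// Minimal sketch (not minikube's implementation): report whether a PEM
// certificate expires within the next 24h, mirroring
// `openssl x509 -noout -in cert.crt -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block found in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// openssl -checkend N asks: will the cert already be expired N seconds from now?
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin(os.Args[1], 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	if expiring {
		fmt.Println("certificate will expire within 24h")
	} else {
		fmt.Println("certificate is good for at least 24h")
	}
}
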
	I0924 01:04:01.669998   61323 kubeadm.go:392] StartCluster: {Name:embed-certs-650507 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-650507 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 01:04:01.670105   61323 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 01:04:01.670162   61323 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 01:04:01.706478   61323 cri.go:89] found id: ""
	I0924 01:04:01.706555   61323 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 01:04:01.717106   61323 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 01:04:01.717127   61323 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 01:04:01.717188   61323 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 01:04:01.729966   61323 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 01:04:01.730947   61323 kubeconfig.go:125] found "embed-certs-650507" server: "https://192.168.39.104:8443"
	I0924 01:04:01.732933   61323 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 01:04:01.745538   61323 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.104
	I0924 01:04:01.745581   61323 kubeadm.go:1160] stopping kube-system containers ...
	I0924 01:04:01.745594   61323 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0924 01:04:01.745649   61323 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 01:04:01.783313   61323 cri.go:89] found id: ""
	I0924 01:04:01.783423   61323 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 01:04:01.801432   61323 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 01:04:01.811282   61323 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 01:04:01.811308   61323 kubeadm.go:157] found existing configuration files:
	
	I0924 01:04:01.811371   61323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 01:04:01.820717   61323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 01:04:01.820780   61323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 01:04:01.830289   61323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 01:04:01.839383   61323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 01:04:01.839449   61323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 01:04:01.848920   61323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 01:04:01.857986   61323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 01:04:01.858045   61323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 01:04:01.867465   61323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 01:04:01.876598   61323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 01:04:01.876680   61323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 01:04:01.886122   61323 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 01:04:01.896245   61323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:02.004839   61323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:03.077983   61323 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.073104284s)
	I0924 01:04:03.078020   61323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:03.295254   61323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:03.369968   61323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:03.458283   61323 api_server.go:52] waiting for apiserver process to appear ...
	I0924 01:04:03.458383   61323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:03.958648   61323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:04.459039   61323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:04.958614   61323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:04.994450   61323 api_server.go:72] duration metric: took 1.536167442s to wait for apiserver process to appear ...
	I0924 01:04:04.994485   61323 api_server.go:88] waiting for apiserver healthz status ...
	I0924 01:04:04.994530   61323 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0924 01:04:04.995139   61323 api_server.go:269] stopped: https://192.168.39.104:8443/healthz: Get "https://192.168.39.104:8443/healthz": dial tcp 192.168.39.104:8443: connect: connection refused
	I0924 01:04:05.513732   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:05.514247   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:04:05.514275   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:04:05.514201   62713 retry.go:31] will retry after 2.814717647s: waiting for machine to come up
	I0924 01:04:08.331550   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:08.331964   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:04:08.331983   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:04:08.331932   62713 retry.go:31] will retry after 2.942261445s: waiting for machine to come up
	I0924 01:04:05.495090   61323 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0924 01:04:07.946057   61323 api_server.go:279] https://192.168.39.104:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 01:04:07.946116   61323 api_server.go:103] status: https://192.168.39.104:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 01:04:07.946135   61323 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0924 01:04:08.018665   61323 api_server.go:279] https://192.168.39.104:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:08.018711   61323 api_server.go:103] status: https://192.168.39.104:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:08.018729   61323 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0924 01:04:08.027105   61323 api_server.go:279] https://192.168.39.104:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:08.027144   61323 api_server.go:103] status: https://192.168.39.104:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:08.494630   61323 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0924 01:04:08.500471   61323 api_server.go:279] https://192.168.39.104:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:08.500494   61323 api_server.go:103] status: https://192.168.39.104:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:08.995055   61323 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0924 01:04:09.017236   61323 api_server.go:279] https://192.168.39.104:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:09.017272   61323 api_server.go:103] status: https://192.168.39.104:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:09.494769   61323 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0924 01:04:09.500285   61323 api_server.go:279] https://192.168.39.104:8443/healthz returned 200:
	ok
	I0924 01:04:09.507440   61323 api_server.go:141] control plane version: v1.31.1
	I0924 01:04:09.507470   61323 api_server.go:131] duration metric: took 4.512953508s to wait for apiserver health ...
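
The wait above tolerates "connection refused" while the apiserver starts, a 403 for the anonymous user, and 500s while post-start hooks finish, and only stops once /healthz returns 200 "ok". The Go sketch below shows that kind of wait loop; it is an illustration only, not minikube's api_server.go, and it skips TLS verification so it stays self-contained (a real client would trust the cluster CA instead).

// Minimal sketch (assumed, not minikube's code): poll an apiserver /healthz
// endpoint until it returns 200, tolerating transient refused/403/500 results
// like those shown in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skipping verification is an assumption for the sketch; the real
		// apiserver cert is signed by the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered 200: "ok"
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the poll cadence seen in the log
	}
	return fmt.Errorf("timed out waiting for %s to become healthy", url)
}

func main() {
	if err := waitForHealthz("https://192.168.39.104:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver is healthy")
}
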
	I0924 01:04:09.507478   61323 cni.go:84] Creating CNI manager for ""
	I0924 01:04:09.507485   61323 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:04:09.509661   61323 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 01:04:09.511104   61323 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 01:04:09.529080   61323 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0924 01:04:09.567695   61323 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 01:04:09.579425   61323 system_pods.go:59] 8 kube-system pods found
	I0924 01:04:09.579470   61323 system_pods.go:61] "coredns-7c65d6cfc9-xgs6g" [b975196f-e9e6-4e30-a49b-8d3031f73a21] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0924 01:04:09.579489   61323 system_pods.go:61] "etcd-embed-certs-650507" [c24d7e21-08a8-42bd-9def-1808d8a58e07] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0924 01:04:09.579501   61323 system_pods.go:61] "kube-apiserver-embed-certs-650507" [f1de6ed5-a87f-4d1d-8feb-d0f80851b5b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0924 01:04:09.579509   61323 system_pods.go:61] "kube-controller-manager-embed-certs-650507" [d0d454bf-b9d3-4dcb-957c-f1329e4e9e98] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0924 01:04:09.579516   61323 system_pods.go:61] "kube-proxy-qd4lg" [f06c009f-3c62-4e54-82fd-ca468fb05bbc] Running
	I0924 01:04:09.579523   61323 system_pods.go:61] "kube-scheduler-embed-certs-650507" [e4931370-821e-4289-9b2b-9b46d9f8394e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0924 01:04:09.579532   61323 system_pods.go:61] "metrics-server-6867b74b74-pc28v" [688d7bbe-9fee-450f-aecf-bbb3413a3633] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:04:09.579536   61323 system_pods.go:61] "storage-provisioner" [9e354a3c-e4f1-46e1-b5fb-de8243f41c29] Running
	I0924 01:04:09.579542   61323 system_pods.go:74] duration metric: took 11.824796ms to wait for pod list to return data ...
	I0924 01:04:09.579550   61323 node_conditions.go:102] verifying NodePressure condition ...
	I0924 01:04:09.584175   61323 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 01:04:09.584203   61323 node_conditions.go:123] node cpu capacity is 2
	I0924 01:04:09.584214   61323 node_conditions.go:105] duration metric: took 4.659859ms to run NodePressure ...
	I0924 01:04:09.584230   61323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:09.847130   61323 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0924 01:04:09.851985   61323 kubeadm.go:739] kubelet initialised
	I0924 01:04:09.852008   61323 kubeadm.go:740] duration metric: took 4.853319ms waiting for restarted kubelet to initialise ...
	I0924 01:04:09.852015   61323 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:04:09.857149   61323 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-xgs6g" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:11.275680   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:11.276135   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | unable to find current IP address of domain default-k8s-diff-port-465341 in network mk-default-k8s-diff-port-465341
	I0924 01:04:11.276166   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | I0924 01:04:11.276102   62713 retry.go:31] will retry after 3.599939746s: waiting for machine to come up
	I0924 01:04:11.865712   61323 pod_ready.go:103] pod "coredns-7c65d6cfc9-xgs6g" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:13.864779   61323 pod_ready.go:93] pod "coredns-7c65d6cfc9-xgs6g" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:13.864801   61323 pod_ready.go:82] duration metric: took 4.007625744s for pod "coredns-7c65d6cfc9-xgs6g" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:13.864809   61323 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:16.233175   61989 start.go:364] duration metric: took 3m35.131018203s to acquireMachinesLock for "old-k8s-version-171598"
	I0924 01:04:16.233254   61989 start.go:96] Skipping create...Using existing machine configuration
	I0924 01:04:16.233262   61989 fix.go:54] fixHost starting: 
	I0924 01:04:16.233733   61989 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:04:16.233787   61989 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:04:16.255690   61989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42181
	I0924 01:04:16.256135   61989 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:04:16.256729   61989 main.go:141] libmachine: Using API Version  1
	I0924 01:04:16.256763   61989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:04:16.257122   61989 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:04:16.257365   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:04:16.257560   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetState
	I0924 01:04:16.259055   61989 fix.go:112] recreateIfNeeded on old-k8s-version-171598: state=Stopped err=<nil>
	I0924 01:04:16.259091   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	W0924 01:04:16.259266   61989 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 01:04:16.261327   61989 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-171598" ...
	I0924 01:04:14.879977   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:14.880533   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has current primary IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:14.880563   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Found IP for machine: 192.168.61.186
	I0924 01:04:14.880596   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Reserving static IP address...
	I0924 01:04:14.881148   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-465341", mac: "52:54:00:e4:1f:79", ip: "192.168.61.186"} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:14.881171   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | skip adding static IP to network mk-default-k8s-diff-port-465341 - found existing host DHCP lease matching {name: "default-k8s-diff-port-465341", mac: "52:54:00:e4:1f:79", ip: "192.168.61.186"}
	I0924 01:04:14.881188   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Reserved static IP address: 192.168.61.186
	I0924 01:04:14.881216   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Waiting for SSH to be available...
	I0924 01:04:14.881229   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | Getting to WaitForSSH function...
	I0924 01:04:14.883679   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:14.884060   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:14.884083   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:14.884214   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | Using SSH client type: external
	I0924 01:04:14.884248   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | Using SSH private key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa (-rw-------)
	I0924 01:04:14.884276   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.186 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 01:04:14.884287   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | About to run SSH command:
	I0924 01:04:14.884298   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | exit 0
	I0924 01:04:15.012764   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | SSH cmd err, output: <nil>: 
	I0924 01:04:15.013163   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetConfigRaw
	I0924 01:04:15.013983   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetIP
	I0924 01:04:15.016664   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.017173   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:15.017207   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.017440   61699 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/config.json ...
	I0924 01:04:15.017668   61699 machine.go:93] provisionDockerMachine start ...
	I0924 01:04:15.017687   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:04:15.017915   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:15.020388   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.020816   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:15.020839   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.021074   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:15.021249   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.021513   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.021681   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:15.021850   61699 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:15.022031   61699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.186 22 <nil> <nil>}
	I0924 01:04:15.022041   61699 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 01:04:15.132672   61699 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 01:04:15.132706   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetMachineName
	I0924 01:04:15.132994   61699 buildroot.go:166] provisioning hostname "default-k8s-diff-port-465341"
	I0924 01:04:15.133025   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetMachineName
	I0924 01:04:15.133268   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:15.135929   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.136371   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:15.136399   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.136578   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:15.136850   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.137008   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.137193   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:15.137407   61699 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:15.137589   61699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.186 22 <nil> <nil>}
	I0924 01:04:15.137610   61699 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-465341 && echo "default-k8s-diff-port-465341" | sudo tee /etc/hostname
	I0924 01:04:15.262142   61699 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-465341
	
	I0924 01:04:15.262174   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:15.265359   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.265736   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:15.265761   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.265962   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:15.266176   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.266335   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.266510   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:15.266705   61699 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:15.266903   61699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.186 22 <nil> <nil>}
	I0924 01:04:15.266926   61699 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-465341' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-465341/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-465341' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 01:04:15.385085   61699 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 01:04:15.385122   61699 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19696-7623/.minikube CaCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19696-7623/.minikube}
	I0924 01:04:15.385158   61699 buildroot.go:174] setting up certificates
	I0924 01:04:15.385174   61699 provision.go:84] configureAuth start
	I0924 01:04:15.385186   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetMachineName
	I0924 01:04:15.385556   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetIP
	I0924 01:04:15.388350   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.388798   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:15.388828   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.388985   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:15.391478   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.391793   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:15.391823   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.391952   61699 provision.go:143] copyHostCerts
	I0924 01:04:15.392016   61699 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem, removing ...
	I0924 01:04:15.392045   61699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0924 01:04:15.392115   61699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem (1679 bytes)
	I0924 01:04:15.392259   61699 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem, removing ...
	I0924 01:04:15.392272   61699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0924 01:04:15.392306   61699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem (1082 bytes)
	I0924 01:04:15.392406   61699 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem, removing ...
	I0924 01:04:15.392415   61699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0924 01:04:15.392440   61699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem (1123 bytes)
	I0924 01:04:15.392503   61699 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-465341 san=[127.0.0.1 192.168.61.186 default-k8s-diff-port-465341 localhost minikube]
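
provision.go generates a server certificate whose SAN list covers the loopback address, the machine IP, and the host names a client might dial. The sketch below reproduces that SAN set on a self-signed certificate with crypto/x509; it is an illustration only, since minikube signs the server cert with its own CA (ca.pem/ca-key.pem), which this sketch does not do.

// Minimal sketch (assumption: self-signed rather than CA-signed): emit a PEM
// server certificate carrying the SANs listed in the provision log above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-465341"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the san=[...] list in the log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.186")},
		DNSNames:    []string{"default-k8s-diff-port-465341", "localhost", "minikube"},
	}
	// Self-signed: the template is also the parent certificate.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
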
	I0924 01:04:15.572588   61699 provision.go:177] copyRemoteCerts
	I0924 01:04:15.572682   61699 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 01:04:15.572718   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:15.575884   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.576356   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:15.576401   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.576627   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:15.576868   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.577099   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:15.577248   61699 sshutil.go:53] new ssh client: &{IP:192.168.61.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa Username:docker}
	I0924 01:04:15.662231   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0924 01:04:15.686800   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0924 01:04:15.709860   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 01:04:15.738063   61699 provision.go:87] duration metric: took 352.876914ms to configureAuth
	I0924 01:04:15.738105   61699 buildroot.go:189] setting minikube options for container-runtime
	I0924 01:04:15.738302   61699 config.go:182] Loaded profile config "default-k8s-diff-port-465341": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 01:04:15.738420   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:15.741231   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.741644   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:15.741693   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.741835   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:15.742036   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.742218   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.742359   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:15.742526   61699 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:15.742727   61699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.186 22 <nil> <nil>}
	I0924 01:04:15.742754   61699 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 01:04:15.986096   61699 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
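The provisioning step above pushes a CRIO_MINIKUBE_OPTIONS drop-in to /etc/sysconfig/crio.minikube over SSH and restarts CRI-O. A minimal sketch of running that same remote command with golang.org/x/crypto/ssh follows; the host, user, key path and command text are taken from the log, but the helper itself is illustrative and not minikube's own ssh_runner.

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// runRemote runs one shell command on the node, mirroring the
// "About to run SSH command" step shown in the log above.
func runRemote(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM; host key is not pinned here
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	cmd := `sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`
	out, err := runRemote("192.168.61.186:22", "docker",
		"/home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa", cmd)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
}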
	I0924 01:04:15.986128   61699 machine.go:96] duration metric: took 968.446778ms to provisionDockerMachine
	I0924 01:04:15.986143   61699 start.go:293] postStartSetup for "default-k8s-diff-port-465341" (driver="kvm2")
	I0924 01:04:15.986156   61699 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 01:04:15.986183   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:04:15.986639   61699 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 01:04:15.986674   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:15.989692   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.990094   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:15.990124   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:15.990407   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:15.990643   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:15.990826   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:15.990958   61699 sshutil.go:53] new ssh client: &{IP:192.168.61.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa Username:docker}
	I0924 01:04:16.079174   61699 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 01:04:16.083139   61699 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 01:04:16.083168   61699 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/addons for local assets ...
	I0924 01:04:16.083251   61699 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/files for local assets ...
	I0924 01:04:16.083363   61699 filesync.go:149] local asset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> 147932.pem in /etc/ssl/certs
	I0924 01:04:16.083486   61699 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 01:04:16.094571   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /etc/ssl/certs/147932.pem (1708 bytes)
	I0924 01:04:16.117327   61699 start.go:296] duration metric: took 131.16913ms for postStartSetup
	I0924 01:04:16.117364   61699 fix.go:56] duration metric: took 20.200222398s for fixHost
	I0924 01:04:16.117384   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:16.120507   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:16.120857   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:16.120899   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:16.121059   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:16.121325   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:16.121511   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:16.121687   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:16.121901   61699 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:16.122100   61699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.186 22 <nil> <nil>}
	I0924 01:04:16.122113   61699 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 01:04:16.232986   61699 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727139856.205476339
	
	I0924 01:04:16.233013   61699 fix.go:216] guest clock: 1727139856.205476339
	I0924 01:04:16.233024   61699 fix.go:229] Guest: 2024-09-24 01:04:16.205476339 +0000 UTC Remote: 2024-09-24 01:04:16.117368802 +0000 UTC m=+247.038042336 (delta=88.107537ms)
	I0924 01:04:16.233086   61699 fix.go:200] guest clock delta is within tolerance: 88.107537ms
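The two fix.go lines above compute the guest/host clock skew by running `date +%s.%N` on the VM and diffing the result against the local wall clock. A rough sketch of that arithmetic, using the value from the log; the 2s tolerance shown here is an assumed example, not minikube's configured threshold:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns the
// absolute skew against the local clock.
func clockDelta(guestOut string) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*1e9))
	d := time.Since(guest)
	if d < 0 {
		d = -d
	}
	return d, nil
}

func main() {
	d, err := clockDelta("1727139856.205476339") // value captured in the log above
	if err != nil {
		panic(err)
	}
	// 2s is an illustrative tolerance, not the value fix.go uses.
	fmt.Println("guest clock delta:", d, "within 2s:", d <= 2*time.Second)
}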
	I0924 01:04:16.233094   61699 start.go:83] releasing machines lock for "default-k8s-diff-port-465341", held for 20.315992151s
	I0924 01:04:16.233133   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:04:16.233491   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetIP
	I0924 01:04:16.236719   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:16.237104   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:16.237134   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:16.237290   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:04:16.237850   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:04:16.238019   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:04:16.238116   61699 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 01:04:16.238167   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:16.238227   61699 ssh_runner.go:195] Run: cat /version.json
	I0924 01:04:16.238260   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:16.241123   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:16.241448   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:16.241598   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:16.241627   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:16.241732   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:16.241757   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:16.241916   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:16.241982   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:16.242152   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:16.242225   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:16.242351   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:16.242479   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:16.242543   61699 sshutil.go:53] new ssh client: &{IP:192.168.61.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa Username:docker}
	I0924 01:04:16.242880   61699 sshutil.go:53] new ssh client: &{IP:192.168.61.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa Username:docker}
	I0924 01:04:16.368841   61699 ssh_runner.go:195] Run: systemctl --version
	I0924 01:04:16.374990   61699 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 01:04:16.521604   61699 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 01:04:16.527198   61699 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 01:04:16.527290   61699 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 01:04:16.543251   61699 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 01:04:16.543278   61699 start.go:495] detecting cgroup driver to use...
	I0924 01:04:16.543357   61699 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 01:04:16.561775   61699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 01:04:16.576028   61699 docker.go:217] disabling cri-docker service (if available) ...
	I0924 01:04:16.576097   61699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 01:04:16.591757   61699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 01:04:16.607927   61699 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 01:04:16.753944   61699 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 01:04:16.917338   61699 docker.go:233] disabling docker service ...
	I0924 01:04:16.917401   61699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 01:04:16.935104   61699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 01:04:16.949717   61699 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 01:04:17.088275   61699 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 01:04:17.222093   61699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 01:04:17.236370   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 01:04:17.256277   61699 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 01:04:17.256360   61699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:17.266516   61699 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 01:04:17.266600   61699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:17.276647   61699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:17.288283   61699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:17.299232   61699 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 01:04:17.311336   61699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:17.329416   61699 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:17.351465   61699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
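The block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed: pin the pause image, switch cgroup_manager to cgroupfs, and re-add conmon_cgroup and the default_sysctls entry. A local Go sketch of the first two rewrites, assuming it runs on the node itself rather than over SSH; paths and values come from the log:

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		log.Fatal(err)
	}
	s := string(data)
	// Pin the pause image, as the first sed above does.
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.10"`)
	// Switch the cgroup manager to cgroupfs, as the second sed does.
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(s, `cgroup_manager = "cgroupfs"`)
	// The remaining sed lines (conmon_cgroup, default_sysctls) follow the
	// same delete-then-insert pattern and are omitted here for brevity.
	if err := os.WriteFile(conf, []byte(s), 0644); err != nil {
		log.Fatal(err)
	}
}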
	I0924 01:04:17.362248   61699 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 01:04:17.372102   61699 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 01:04:17.372154   61699 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 01:04:17.392055   61699 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 01:04:17.413641   61699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:04:17.541224   61699 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 01:04:17.655205   61699 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 01:04:17.655281   61699 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 01:04:17.660096   61699 start.go:563] Will wait 60s for crictl version
	I0924 01:04:17.660163   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:04:17.663880   61699 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 01:04:17.706878   61699 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 01:04:17.706959   61699 ssh_runner.go:195] Run: crio --version
	I0924 01:04:17.735377   61699 ssh_runner.go:195] Run: crio --version
	I0924 01:04:17.766744   61699 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 01:04:17.768253   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetIP
	I0924 01:04:17.771534   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:17.771952   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:17.771983   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:17.772230   61699 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0924 01:04:17.776486   61699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:04:17.792599   61699 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-465341 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:default-k8s-diff-port-465341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.186 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 01:04:17.792744   61699 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 01:04:17.792813   61699 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 01:04:17.831837   61699 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0924 01:04:17.831929   61699 ssh_runner.go:195] Run: which lz4
	I0924 01:04:17.836193   61699 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 01:04:17.840562   61699 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 01:04:17.840596   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0924 01:04:15.871512   61323 pod_ready.go:93] pod "etcd-embed-certs-650507" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:15.871540   61323 pod_ready.go:82] duration metric: took 2.006723245s for pod "etcd-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:15.871552   61323 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:17.879872   61323 pod_ready.go:93] pod "kube-apiserver-embed-certs-650507" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:17.879899   61323 pod_ready.go:82] duration metric: took 2.008337801s for pod "kube-apiserver-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:17.879918   61323 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:19.888007   61323 pod_ready.go:93] pod "kube-controller-manager-embed-certs-650507" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:19.888041   61323 pod_ready.go:82] duration metric: took 2.008114424s for pod "kube-controller-manager-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:19.888056   61323 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-qd4lg" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:19.894805   61323 pod_ready.go:93] pod "kube-proxy-qd4lg" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:19.894844   61323 pod_ready.go:82] duration metric: took 6.779022ms for pod "kube-proxy-qd4lg" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:19.894862   61323 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:19.900353   61323 pod_ready.go:93] pod "kube-scheduler-embed-certs-650507" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:19.900387   61323 pod_ready.go:82] duration metric: took 5.513733ms for pod "kube-scheduler-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:19.900401   61323 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace to be "Ready" ...
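Each pod_ready.go step above waits up to 4m0s for a named kube-system pod to report the Ready condition. A sketch of the same wait done by shelling out to kubectl from Go; the pod name comes from the log, while the kubectl context name is assumed to match the profile, and the jsonpath filter is standard kubectl syntax:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady returns true once the pod's Ready condition is "True".
func podReady(kubectx, ns, pod string) (bool, error) {
	out, err := exec.Command("kubectl", "--context", kubectx, "-n", ns,
		"get", "pod", pod, "-o",
		`jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	deadline := time.Now().Add(4 * time.Minute) // same 4m0s budget as the log
	for time.Now().Before(deadline) {
		// "embed-certs-650507" as context name is an assumption (profile name).
		if ok, err := podReady("embed-certs-650507", "kube-system", "etcd-embed-certs-650507"); err == nil && ok {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod")
}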
	I0924 01:04:16.262929   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .Start
	I0924 01:04:16.263123   61989 main.go:141] libmachine: (old-k8s-version-171598) Ensuring networks are active...
	I0924 01:04:16.264062   61989 main.go:141] libmachine: (old-k8s-version-171598) Ensuring network default is active
	I0924 01:04:16.264543   61989 main.go:141] libmachine: (old-k8s-version-171598) Ensuring network mk-old-k8s-version-171598 is active
	I0924 01:04:16.264954   61989 main.go:141] libmachine: (old-k8s-version-171598) Getting domain xml...
	I0924 01:04:16.265899   61989 main.go:141] libmachine: (old-k8s-version-171598) Creating domain...
	I0924 01:04:17.566157   61989 main.go:141] libmachine: (old-k8s-version-171598) Waiting to get IP...
	I0924 01:04:17.567223   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:17.567644   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:17.567724   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:17.567625   62886 retry.go:31] will retry after 301.652575ms: waiting for machine to come up
	I0924 01:04:17.871163   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:17.871700   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:17.871729   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:17.871645   62886 retry.go:31] will retry after 337.632324ms: waiting for machine to come up
	I0924 01:04:18.211081   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:18.211954   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:18.212013   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:18.211892   62886 retry.go:31] will retry after 431.70455ms: waiting for machine to come up
	I0924 01:04:18.645408   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:18.646017   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:18.646044   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:18.645958   62886 retry.go:31] will retry after 582.966569ms: waiting for machine to come up
	I0924 01:04:19.230457   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:19.230954   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:19.230980   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:19.230897   62886 retry.go:31] will retry after 720.62326ms: waiting for machine to come up
	I0924 01:04:19.953023   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:19.953570   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:19.953603   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:19.953512   62886 retry.go:31] will retry after 688.597177ms: waiting for machine to come up
	I0924 01:04:20.644150   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:20.644636   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:20.644672   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:20.644578   62886 retry.go:31] will retry after 1.084671138s: waiting for machine to come up
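The old-k8s-version VM above is polled for a DHCP lease with growing, jittered delays ("will retry after ...", retry.go). A generic sketch of that backoff pattern with a caller-supplied probe; the 300ms starting delay and 5s cap are illustrative, not minikube's exact schedule:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// waitFor polls probe with exponential backoff plus jitter until it reports
// success or the timeout elapses, mirroring the retry lines in the log.
func waitFor(probe func() (bool, error), timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond // first retry in the log is ~300ms
	for {
		ok, err := probe()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s", timeout)
		}
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay/2))))
		if delay *= 2; delay > 5*time.Second {
			delay = 5 * time.Second
		}
	}
}

func main() {
	start := time.Now()
	err := waitFor(func() (bool, error) {
		// Stand-in for "does the domain have an IP address yet?".
		return time.Since(start) > 3*time.Second, nil
	}, 30*time.Second)
	fmt.Println("done:", err)
}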
	I0924 01:04:19.165501   61699 crio.go:462] duration metric: took 1.329329949s to copy over tarball
	I0924 01:04:19.165575   61699 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0924 01:04:21.323478   61699 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.157877766s)
	I0924 01:04:21.323509   61699 crio.go:469] duration metric: took 2.157979404s to extract the tarball
	I0924 01:04:21.323516   61699 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0924 01:04:21.360397   61699 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 01:04:21.401282   61699 crio.go:514] all images are preloaded for cri-o runtime.
	I0924 01:04:21.401309   61699 cache_images.go:84] Images are preloaded, skipping loading
	I0924 01:04:21.401319   61699 kubeadm.go:934] updating node { 192.168.61.186 8444 v1.31.1 crio true true} ...
	I0924 01:04:21.401441   61699 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-465341 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.186
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-465341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 01:04:21.401524   61699 ssh_runner.go:195] Run: crio config
	I0924 01:04:21.447706   61699 cni.go:84] Creating CNI manager for ""
	I0924 01:04:21.447730   61699 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:04:21.447741   61699 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 01:04:21.447766   61699 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.186 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-465341 NodeName:default-k8s-diff-port-465341 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.186"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.186 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 01:04:21.447939   61699 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.186
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-465341"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.186
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.186"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
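The kubeadm config printed above stitches the node IP, API server port 8444 and the CRI-O socket into InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration stanzas. A minimal sketch of producing such a file from a Go text/template; the field values mirror the log, and only the InitConfiguration part is templated here:

package main

import (
	"os"
	"text/template"
)

// Only the InitConfiguration stanza from the config above; the real file
// also carries the ClusterConfiguration, KubeletConfiguration and
// KubeProxyConfiguration sections.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	params := struct {
		NodeIP        string
		APIServerPort int
		NodeName      string
	}{"192.168.61.186", 8444, "default-k8s-diff-port-465341"}
	t := template.Must(template.New("kubeadm").Parse(initCfg))
	if err := t.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}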
	I0924 01:04:21.448022   61699 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 01:04:21.457882   61699 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 01:04:21.457967   61699 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 01:04:21.467329   61699 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0924 01:04:21.483464   61699 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 01:04:21.500880   61699 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0924 01:04:21.517179   61699 ssh_runner.go:195] Run: grep 192.168.61.186	control-plane.minikube.internal$ /etc/hosts
	I0924 01:04:21.521032   61699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.186	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:04:21.532339   61699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:04:21.655583   61699 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 01:04:21.671964   61699 certs.go:68] Setting up /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341 for IP: 192.168.61.186
	I0924 01:04:21.672019   61699 certs.go:194] generating shared ca certs ...
	I0924 01:04:21.672044   61699 certs.go:226] acquiring lock for ca certs: {Name:mk91de42c259e96c17f08dd58c70c00821bd0735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:04:21.672273   61699 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key
	I0924 01:04:21.672390   61699 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key
	I0924 01:04:21.672409   61699 certs.go:256] generating profile certs ...
	I0924 01:04:21.672536   61699 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/client.key
	I0924 01:04:21.672629   61699 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/apiserver.key.b6f5ff18
	I0924 01:04:21.672696   61699 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/proxy-client.key
	I0924 01:04:21.672940   61699 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem (1338 bytes)
	W0924 01:04:21.672987   61699 certs.go:480] ignoring /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793_empty.pem, impossibly tiny 0 bytes
	I0924 01:04:21.672999   61699 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem (1675 bytes)
	I0924 01:04:21.673029   61699 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem (1082 bytes)
	I0924 01:04:21.673060   61699 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem (1123 bytes)
	I0924 01:04:21.673091   61699 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem (1679 bytes)
	I0924 01:04:21.673133   61699 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem (1708 bytes)
	I0924 01:04:21.673884   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 01:04:21.706165   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 01:04:21.735352   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 01:04:21.763358   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0924 01:04:21.786284   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0924 01:04:21.814844   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 01:04:21.839773   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 01:04:21.866549   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0924 01:04:21.889901   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 01:04:21.914875   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem --> /usr/share/ca-certificates/14793.pem (1338 bytes)
	I0924 01:04:21.939116   61699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /usr/share/ca-certificates/147932.pem (1708 bytes)
	I0924 01:04:21.963264   61699 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 01:04:21.980912   61699 ssh_runner.go:195] Run: openssl version
	I0924 01:04:21.986725   61699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 01:04:21.998128   61699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:22.002832   61699 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:22.002903   61699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:22.008847   61699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 01:04:22.019274   61699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14793.pem && ln -fs /usr/share/ca-certificates/14793.pem /etc/ssl/certs/14793.pem"
	I0924 01:04:22.030110   61699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14793.pem
	I0924 01:04:22.035920   61699 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 23:55 /usr/share/ca-certificates/14793.pem
	I0924 01:04:22.035996   61699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14793.pem
	I0924 01:04:22.043505   61699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14793.pem /etc/ssl/certs/51391683.0"
	I0924 01:04:22.057224   61699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147932.pem && ln -fs /usr/share/ca-certificates/147932.pem /etc/ssl/certs/147932.pem"
	I0924 01:04:22.067596   61699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147932.pem
	I0924 01:04:22.071957   61699 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 23:55 /usr/share/ca-certificates/147932.pem
	I0924 01:04:22.072029   61699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147932.pem
	I0924 01:04:22.077495   61699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147932.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 01:04:22.087627   61699 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 01:04:22.092049   61699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 01:04:22.097908   61699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 01:04:22.103716   61699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 01:04:22.109871   61699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 01:04:22.116088   61699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 01:04:22.121760   61699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
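The `openssl x509 -noout -checkend 86400` calls above verify that each control-plane certificate stays valid for at least another day. An equivalent check in Go with crypto/x509, using one of the paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path is still valid for
// at least the given duration, like `openssl x509 -checkend`.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("valid for another 24h:", ok)
}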
	I0924 01:04:22.127473   61699 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-465341 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.1 ClusterName:default-k8s-diff-port-465341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.186 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 01:04:22.127563   61699 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 01:04:22.127613   61699 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 01:04:22.167951   61699 cri.go:89] found id: ""
	I0924 01:04:22.168054   61699 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 01:04:22.177878   61699 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 01:04:22.177898   61699 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 01:04:22.177949   61699 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 01:04:22.187116   61699 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 01:04:22.188577   61699 kubeconfig.go:125] found "default-k8s-diff-port-465341" server: "https://192.168.61.186:8444"
	I0924 01:04:22.191744   61699 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 01:04:22.200936   61699 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.186
	I0924 01:04:22.200967   61699 kubeadm.go:1160] stopping kube-system containers ...
	I0924 01:04:22.200979   61699 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0924 01:04:22.201039   61699 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 01:04:22.247804   61699 cri.go:89] found id: ""
	I0924 01:04:22.247888   61699 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 01:04:22.263853   61699 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 01:04:22.273254   61699 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 01:04:22.273271   61699 kubeadm.go:157] found existing configuration files:
	
	I0924 01:04:22.273327   61699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0924 01:04:22.281724   61699 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 01:04:22.281790   61699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 01:04:22.290823   61699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0924 01:04:22.299422   61699 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 01:04:22.299482   61699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 01:04:22.308961   61699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0924 01:04:22.317922   61699 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 01:04:22.318010   61699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 01:04:22.326980   61699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0924 01:04:22.335995   61699 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 01:04:22.336084   61699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 01:04:22.345002   61699 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 01:04:22.354302   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:22.462157   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:23.380163   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:23.610795   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:23.679134   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:23.747119   61699 api_server.go:52] waiting for apiserver process to appear ...
	I0924 01:04:23.747191   61699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:21.909834   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:24.104163   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:21.730823   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:21.731385   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:21.731411   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:21.731351   62886 retry.go:31] will retry after 1.051424847s: waiting for machine to come up
	I0924 01:04:22.784644   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:22.785194   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:22.785223   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:22.785138   62886 retry.go:31] will retry after 1.750498954s: waiting for machine to come up
	I0924 01:04:24.537680   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:24.538085   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:24.538109   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:24.538039   62886 retry.go:31] will retry after 2.015183238s: waiting for machine to come up
	I0924 01:04:24.247859   61699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:24.748076   61699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:25.248220   61699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:25.747481   61699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:25.774137   61699 api_server.go:72] duration metric: took 2.027016323s to wait for apiserver process to appear ...
	I0924 01:04:25.774167   61699 api_server.go:88] waiting for apiserver healthz status ...
	I0924 01:04:25.774194   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:25.774901   61699 api_server.go:269] stopped: https://192.168.61.186:8444/healthz: Get "https://192.168.61.186:8444/healthz": dial tcp 192.168.61.186:8444: connect: connection refused
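The healthz wait that follows keeps hitting https://192.168.61.186:8444/healthz and treats the connection-refused, 403 (RBAC bootstrap not finished) and 500 (post-start hooks still failing) responses below as "not ready yet". A sketch of that polling loop; it skips TLS verification for the anonymous probe, whereas minikube's own client trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Anonymous probe for illustration; a real client would load the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.61.186:8444/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("not ready yet:", resp.Status) // e.g. the 403 and 500 responses below
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for /healthz")
}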
	I0924 01:04:26.275226   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:28.290581   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 01:04:28.290621   61699 api_server.go:103] status: https://192.168.61.186:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 01:04:28.290637   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:28.321353   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 01:04:28.321386   61699 api_server.go:103] status: https://192.168.61.186:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 01:04:28.775068   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:28.779873   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:28.779896   61699 api_server.go:103] status: https://192.168.61.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:26.408349   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:28.409816   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:26.555221   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:26.555674   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:26.555695   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:26.555634   62886 retry.go:31] will retry after 2.568414115s: waiting for machine to come up
	I0924 01:04:29.127625   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:29.128130   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:29.128149   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:29.128108   62886 retry.go:31] will retry after 2.207252231s: waiting for machine to come up
	I0924 01:04:29.275326   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:29.284304   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:29.284360   61699 api_server.go:103] status: https://192.168.61.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:29.774975   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:29.779470   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:29.779503   61699 api_server.go:103] status: https://192.168.61.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:30.275137   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:30.279256   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:30.279287   61699 api_server.go:103] status: https://192.168.61.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:30.774874   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:30.779081   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:30.779110   61699 api_server.go:103] status: https://192.168.61.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:31.275163   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:31.279417   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:04:31.279446   61699 api_server.go:103] status: https://192.168.61.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:04:31.775022   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:04:31.780092   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 200:
	ok
	I0924 01:04:31.787643   61699 api_server.go:141] control plane version: v1.31.1
	I0924 01:04:31.787672   61699 api_server.go:131] duration metric: took 6.013498176s to wait for apiserver health ...
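The healthz checks above poll https://192.168.61.186:8444/healthz roughly every 500ms, logging the 403 body while the anonymous request is rejected before RBAC bootstrap and the 500 body while poststarthooks are still failing, until the endpoint finally returns 200. A minimal Go sketch of such a poll loop follows; it is illustrative only (not minikube's api_server.go), and the client settings (2s per-request timeout, skipping TLS verification instead of loading the cluster CA) are assumptions made to keep the example self-contained.

// Illustrative sketch: poll an apiserver /healthz endpoint until it returns
// 200 OK or a deadline expires, printing non-200 bodies like the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver cert is signed by the cluster CA, which this sketch
		// does not load; skip verification for the health probe only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: ok
			}
			// 403 before RBAC bootstrap, 500 while poststarthooks fail.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.186:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}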
	I0924 01:04:31.787680   61699 cni.go:84] Creating CNI manager for ""
	I0924 01:04:31.787686   61699 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:04:31.789733   61699 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 01:04:31.791140   61699 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 01:04:31.801441   61699 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0924 01:04:31.819890   61699 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 01:04:31.828128   61699 system_pods.go:59] 8 kube-system pods found
	I0924 01:04:31.828160   61699 system_pods.go:61] "coredns-7c65d6cfc9-xxdh2" [297fe292-94bf-468d-9e34-089c4a87429b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0924 01:04:31.828168   61699 system_pods.go:61] "etcd-default-k8s-diff-port-465341" [3bd68a1c-e928-40f0-927f-3cde2198cace] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0924 01:04:31.828177   61699 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-465341" [0a195b76-82ba-4d99-b5a3-ba918ab0b83d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0924 01:04:31.828186   61699 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-465341" [9d445611-60f3-4113-bc92-ea8df37ca2f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0924 01:04:31.828191   61699 system_pods.go:61] "kube-proxy-nf8mp" [cdef3aea-b1a8-438b-994f-c3212def9aea] Running
	I0924 01:04:31.828196   61699 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-465341" [4ff703b1-44cd-421a-891c-9f1e5d799026] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0924 01:04:31.828200   61699 system_pods.go:61] "metrics-server-6867b74b74-jtx6r" [d83599a7-f77d-4fbb-b76f-67d33c60b4a6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:04:31.828203   61699 system_pods.go:61] "storage-provisioner" [b09ad6ef-7517-4de2-a70c-83876efd804e] Running
	I0924 01:04:31.828209   61699 system_pods.go:74] duration metric: took 8.300337ms to wait for pod list to return data ...
	I0924 01:04:31.828215   61699 node_conditions.go:102] verifying NodePressure condition ...
	I0924 01:04:31.831528   61699 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 01:04:31.831550   61699 node_conditions.go:123] node cpu capacity is 2
	I0924 01:04:31.831561   61699 node_conditions.go:105] duration metric: took 3.341719ms to run NodePressure ...
	I0924 01:04:31.831576   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:32.101590   61699 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0924 01:04:32.105656   61699 kubeadm.go:739] kubelet initialised
	I0924 01:04:32.105679   61699 kubeadm.go:740] duration metric: took 4.062709ms waiting for restarted kubelet to initialise ...
	I0924 01:04:32.105691   61699 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:04:32.110237   61699 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-xxdh2" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:32.115057   61699 pod_ready.go:98] node "default-k8s-diff-port-465341" hosting pod "coredns-7c65d6cfc9-xxdh2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.115090   61699 pod_ready.go:82] duration metric: took 4.825694ms for pod "coredns-7c65d6cfc9-xxdh2" in "kube-system" namespace to be "Ready" ...
	E0924 01:04:32.115102   61699 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-465341" hosting pod "coredns-7c65d6cfc9-xxdh2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.115110   61699 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:32.119506   61699 pod_ready.go:98] node "default-k8s-diff-port-465341" hosting pod "etcd-default-k8s-diff-port-465341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.119534   61699 pod_ready.go:82] duration metric: took 4.415876ms for pod "etcd-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	E0924 01:04:32.119546   61699 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-465341" hosting pod "etcd-default-k8s-diff-port-465341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.119558   61699 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:32.124199   61699 pod_ready.go:98] node "default-k8s-diff-port-465341" hosting pod "kube-apiserver-default-k8s-diff-port-465341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.124248   61699 pod_ready.go:82] duration metric: took 4.660764ms for pod "kube-apiserver-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	E0924 01:04:32.124266   61699 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-465341" hosting pod "kube-apiserver-default-k8s-diff-port-465341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.124285   61699 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:32.223553   61699 pod_ready.go:98] node "default-k8s-diff-port-465341" hosting pod "kube-controller-manager-default-k8s-diff-port-465341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.223596   61699 pod_ready.go:82] duration metric: took 99.284751ms for pod "kube-controller-manager-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	E0924 01:04:32.223606   61699 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-465341" hosting pod "kube-controller-manager-default-k8s-diff-port-465341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.223613   61699 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-nf8mp" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:32.622500   61699 pod_ready.go:98] node "default-k8s-diff-port-465341" hosting pod "kube-proxy-nf8mp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.622527   61699 pod_ready.go:82] duration metric: took 398.907418ms for pod "kube-proxy-nf8mp" in "kube-system" namespace to be "Ready" ...
	E0924 01:04:32.622538   61699 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-465341" hosting pod "kube-proxy-nf8mp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:32.622545   61699 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:33.023370   61699 pod_ready.go:98] node "default-k8s-diff-port-465341" hosting pod "kube-scheduler-default-k8s-diff-port-465341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:33.023430   61699 pod_ready.go:82] duration metric: took 400.874003ms for pod "kube-scheduler-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	E0924 01:04:33.023458   61699 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-465341" hosting pod "kube-scheduler-default-k8s-diff-port-465341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:33.023472   61699 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:33.422810   61699 pod_ready.go:98] node "default-k8s-diff-port-465341" hosting pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:33.422841   61699 pod_ready.go:82] duration metric: took 399.35051ms for pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace to be "Ready" ...
	E0924 01:04:33.422851   61699 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-465341" hosting pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:33.422859   61699 pod_ready.go:39] duration metric: took 1.317159668s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
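Each pod_ready entry above fetches a system pod and inspects its Ready condition, skipping pods whose hosting node is not yet "Ready". A minimal client-go sketch of the per-pod check follows; it is illustrative only (not minikube's pod_ready.go), the kubeconfig path and pod name are simply taken from the log, and the 2-second poll interval is an assumption.

// Illustrative sketch: wait for a kube-system pod to report the Ready
// condition using client-go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19696-7623/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-7c65d6cfc9-xxdh2", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		fmt.Println("pod not Ready yet, retrying")
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod to be Ready")
			return
		case <-time.After(2 * time.Second):
		}
	}
}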
	I0924 01:04:33.422874   61699 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 01:04:33.434449   61699 ops.go:34] apiserver oom_adj: -16
	I0924 01:04:33.434473   61699 kubeadm.go:597] duration metric: took 11.256568213s to restartPrimaryControlPlane
	I0924 01:04:33.434481   61699 kubeadm.go:394] duration metric: took 11.307014166s to StartCluster
	I0924 01:04:33.434501   61699 settings.go:142] acquiring lock: {Name:mk2498060bfcc1414033a486e76a223de88ed325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:04:33.434571   61699 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 01:04:33.436172   61699 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/kubeconfig: {Name:mk3b29105ccc2e600682c419fcfd45ce5d22b5fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:04:33.436515   61699 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.186 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 01:04:33.436732   61699 config.go:182] Loaded profile config "default-k8s-diff-port-465341": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 01:04:33.436686   61699 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0924 01:04:33.436809   61699 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-465341"
	I0924 01:04:33.436815   61699 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-465341"
	I0924 01:04:33.436830   61699 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-465341"
	I0924 01:04:33.436832   61699 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-465341"
	I0924 01:04:33.436864   61699 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-465341"
	W0924 01:04:33.436877   61699 addons.go:243] addon metrics-server should already be in state true
	I0924 01:04:33.436908   61699 host.go:66] Checking if "default-k8s-diff-port-465341" exists ...
	W0924 01:04:33.436842   61699 addons.go:243] addon storage-provisioner should already be in state true
	I0924 01:04:33.436935   61699 host.go:66] Checking if "default-k8s-diff-port-465341" exists ...
	I0924 01:04:33.436831   61699 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-465341"
	I0924 01:04:33.437322   61699 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:04:33.437370   61699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:04:33.437377   61699 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:04:33.437412   61699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:04:33.437458   61699 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:04:33.437483   61699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:04:33.438259   61699 out.go:177] * Verifying Kubernetes components...
	I0924 01:04:33.439923   61699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:04:33.453108   61699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37623
	I0924 01:04:33.453545   61699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38225
	I0924 01:04:33.453608   61699 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:04:33.453916   61699 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:04:33.454125   61699 main.go:141] libmachine: Using API Version  1
	I0924 01:04:33.454152   61699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:04:33.454461   61699 main.go:141] libmachine: Using API Version  1
	I0924 01:04:33.454486   61699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:04:33.454494   61699 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:04:33.454806   61699 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:04:33.455065   61699 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:04:33.455111   61699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:04:33.455360   61699 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:04:33.455404   61699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:04:33.456716   61699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41127
	I0924 01:04:33.457163   61699 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:04:33.457688   61699 main.go:141] libmachine: Using API Version  1
	I0924 01:04:33.457727   61699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:04:33.458031   61699 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:04:33.458242   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetState
	I0924 01:04:33.461814   61699 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-465341"
	W0924 01:04:33.461835   61699 addons.go:243] addon default-storageclass should already be in state true
	I0924 01:04:33.461864   61699 host.go:66] Checking if "default-k8s-diff-port-465341" exists ...
	I0924 01:04:33.462230   61699 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:04:33.462273   61699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:04:33.471783   61699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44977
	I0924 01:04:33.472043   61699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33459
	I0924 01:04:33.472300   61699 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:04:33.472550   61699 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:04:33.472858   61699 main.go:141] libmachine: Using API Version  1
	I0924 01:04:33.472875   61699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:04:33.472994   61699 main.go:141] libmachine: Using API Version  1
	I0924 01:04:33.473003   61699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:04:33.473234   61699 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:04:33.473366   61699 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:04:33.473413   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetState
	I0924 01:04:33.473503   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetState
	I0924 01:04:33.475140   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:04:33.475553   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:04:33.477287   61699 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0924 01:04:33.477293   61699 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:04:33.478708   61699 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0924 01:04:33.478720   61699 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0924 01:04:33.478737   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:33.478836   61699 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 01:04:33.478863   61699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 01:04:33.478889   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:33.478971   61699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45635
	I0924 01:04:33.479636   61699 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:04:33.480029   61699 main.go:141] libmachine: Using API Version  1
	I0924 01:04:33.480041   61699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:04:33.480396   61699 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:04:33.482306   61699 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:04:33.482343   61699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:04:33.483280   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:33.483373   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:33.483732   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:33.483769   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:33.483873   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:33.483892   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:33.483958   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:33.484111   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:33.484236   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:33.484255   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:33.484413   61699 sshutil.go:53] new ssh client: &{IP:192.168.61.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa Username:docker}
	I0924 01:04:33.484472   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:33.484738   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:33.484866   61699 sshutil.go:53] new ssh client: &{IP:192.168.61.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa Username:docker}
	I0924 01:04:33.519981   61699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37109
	I0924 01:04:33.520440   61699 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:04:33.520996   61699 main.go:141] libmachine: Using API Version  1
	I0924 01:04:33.521028   61699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:04:33.521497   61699 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:04:33.521701   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetState
	I0924 01:04:33.523331   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .DriverName
	I0924 01:04:33.523576   61699 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 01:04:33.523591   61699 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 01:04:33.523625   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHHostname
	I0924 01:04:33.526668   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:33.527211   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:1f:79", ip: ""} in network mk-default-k8s-diff-port-465341: {Iface:virbr2 ExpiryTime:2024-09-24 02:04:06 +0000 UTC Type:0 Mac:52:54:00:e4:1f:79 Iaid: IPaddr:192.168.61.186 Prefix:24 Hostname:default-k8s-diff-port-465341 Clientid:01:52:54:00:e4:1f:79}
	I0924 01:04:33.527244   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | domain default-k8s-diff-port-465341 has defined IP address 192.168.61.186 and MAC address 52:54:00:e4:1f:79 in network mk-default-k8s-diff-port-465341
	I0924 01:04:33.527471   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHPort
	I0924 01:04:33.527702   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHKeyPath
	I0924 01:04:33.527889   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .GetSSHUsername
	I0924 01:04:33.528059   61699 sshutil.go:53] new ssh client: &{IP:192.168.61.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/default-k8s-diff-port-465341/id_rsa Username:docker}
	I0924 01:04:33.645903   61699 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 01:04:33.663805   61699 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-465341" to be "Ready" ...
	I0924 01:04:33.749720   61699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 01:04:33.751631   61699 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0924 01:04:33.751649   61699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0924 01:04:33.755330   61699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 01:04:33.812231   61699 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0924 01:04:33.812257   61699 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0924 01:04:33.847216   61699 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 01:04:33.847240   61699 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0924 01:04:33.932057   61699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
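The command above applies the four metrics-server manifests in a single kubectl invocation inside the VM. An equivalent, illustrative Go sketch that shells out to kubectl is below; the kubeconfig and manifest paths are the in-VM paths from the log and would need to exist wherever the sketch is run.

// Illustrative sketch: apply several addon manifests with one kubectl call,
// mirroring the ssh_runner command in the log above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl",
		"--kubeconfig", "/var/lib/minikube/kubeconfig",
		"apply",
		"-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-service.yaml",
	)
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}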
	I0924 01:04:34.781871   61699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.026510893s)
	I0924 01:04:34.781939   61699 main.go:141] libmachine: Making call to close driver server
	I0924 01:04:34.781950   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .Close
	I0924 01:04:34.781887   61699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.032127769s)
	I0924 01:04:34.782009   61699 main.go:141] libmachine: Making call to close driver server
	I0924 01:04:34.782023   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .Close
	I0924 01:04:34.782293   61699 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:04:34.782309   61699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:04:34.782318   61699 main.go:141] libmachine: Making call to close driver server
	I0924 01:04:34.782326   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .Close
	I0924 01:04:34.782361   61699 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:04:34.782369   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | Closing plugin on server side
	I0924 01:04:34.782375   61699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:04:34.782389   61699 main.go:141] libmachine: Making call to close driver server
	I0924 01:04:34.782404   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .Close
	I0924 01:04:34.782629   61699 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:04:34.782643   61699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:04:34.782645   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | Closing plugin on server side
	I0924 01:04:34.782673   61699 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:04:34.782683   61699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:04:34.790740   61699 main.go:141] libmachine: Making call to close driver server
	I0924 01:04:34.790757   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .Close
	I0924 01:04:34.790990   61699 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:04:34.791010   61699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:04:34.791013   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | Closing plugin on server side
	I0924 01:04:34.871488   61699 main.go:141] libmachine: Making call to close driver server
	I0924 01:04:34.871516   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .Close
	I0924 01:04:34.871809   61699 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:04:34.871826   61699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:04:34.871834   61699 main.go:141] libmachine: Making call to close driver server
	I0924 01:04:34.871841   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) Calling .Close
	I0924 01:04:34.872103   61699 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:04:34.872125   61699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:04:34.872117   61699 main.go:141] libmachine: (default-k8s-diff-port-465341) DBG | Closing plugin on server side
	I0924 01:04:34.872136   61699 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-465341"
	I0924 01:04:34.874133   61699 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0924 01:04:30.907606   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:33.406280   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:31.337368   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:31.338025   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | unable to find current IP address of domain old-k8s-version-171598 in network mk-old-k8s-version-171598
	I0924 01:04:31.338128   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | I0924 01:04:31.338011   62886 retry.go:31] will retry after 4.137847727s: waiting for machine to come up
	I0924 01:04:35.478410   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.478991   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has current primary IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.479016   61989 main.go:141] libmachine: (old-k8s-version-171598) Found IP for machine: 192.168.83.3
	I0924 01:04:35.479029   61989 main.go:141] libmachine: (old-k8s-version-171598) Reserving static IP address...
	I0924 01:04:35.479586   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "old-k8s-version-171598", mac: "52:54:00:20:3c:a7", ip: "192.168.83.3"} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:35.479607   61989 main.go:141] libmachine: (old-k8s-version-171598) Reserved static IP address: 192.168.83.3
	I0924 01:04:35.479626   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | skip adding static IP to network mk-old-k8s-version-171598 - found existing host DHCP lease matching {name: "old-k8s-version-171598", mac: "52:54:00:20:3c:a7", ip: "192.168.83.3"}
	I0924 01:04:35.479643   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | Getting to WaitForSSH function...
	I0924 01:04:35.479659   61989 main.go:141] libmachine: (old-k8s-version-171598) Waiting for SSH to be available...
	I0924 01:04:35.482028   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.482377   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:35.482419   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.482499   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | Using SSH client type: external
	I0924 01:04:35.482550   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | Using SSH private key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa (-rw-------)
	I0924 01:04:35.482585   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 01:04:35.482600   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | About to run SSH command:
	I0924 01:04:35.482614   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | exit 0
	I0924 01:04:35.613364   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | SSH cmd err, output: <nil>: 
	I0924 01:04:35.613847   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetConfigRaw
	I0924 01:04:35.614543   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetIP
	I0924 01:04:35.617366   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.617742   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:35.617774   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.618068   61989 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/config.json ...
	I0924 01:04:35.618260   61989 machine.go:93] provisionDockerMachine start ...
	I0924 01:04:35.618279   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:04:35.618489   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:35.621130   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.621472   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:35.621497   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.621722   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:35.621914   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:35.622091   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:35.622354   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:35.622558   61989 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:35.622749   61989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.83.3 22 <nil> <nil>}
	I0924 01:04:35.622760   61989 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 01:04:35.736637   61989 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 01:04:35.736661   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetMachineName
	I0924 01:04:35.736943   61989 buildroot.go:166] provisioning hostname "old-k8s-version-171598"
	I0924 01:04:35.736973   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetMachineName
	I0924 01:04:35.737151   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:35.739921   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.740304   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:35.740362   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.740502   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:35.740678   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:35.740851   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:35.740994   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:35.741218   61989 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:35.741409   61989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.83.3 22 <nil> <nil>}
	I0924 01:04:35.741423   61989 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-171598 && echo "old-k8s-version-171598" | sudo tee /etc/hostname
	I0924 01:04:35.866963   61989 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-171598
	
	I0924 01:04:35.866994   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:35.870342   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.870860   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:35.870893   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:35.871145   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:35.871406   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:35.871638   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:35.871850   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:35.872050   61989 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:35.872253   61989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.83.3 22 <nil> <nil>}
	I0924 01:04:35.872276   61989 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-171598' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-171598/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-171598' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 01:04:36.717274   61070 start.go:364] duration metric: took 55.446152288s to acquireMachinesLock for "no-preload-674057"
	I0924 01:04:36.717335   61070 start.go:96] Skipping create...Using existing machine configuration
	I0924 01:04:36.717344   61070 fix.go:54] fixHost starting: 
	I0924 01:04:36.717781   61070 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:04:36.717821   61070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:04:36.739062   61070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46693
	I0924 01:04:36.739602   61070 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:04:36.740307   61070 main.go:141] libmachine: Using API Version  1
	I0924 01:04:36.740366   61070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:04:36.740767   61070 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:04:36.741058   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:04:36.741223   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetState
	I0924 01:04:36.743313   61070 fix.go:112] recreateIfNeeded on no-preload-674057: state=Stopped err=<nil>
	I0924 01:04:36.743339   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	W0924 01:04:36.743512   61070 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 01:04:36.745694   61070 out.go:177] * Restarting existing kvm2 VM for "no-preload-674057" ...
	I0924 01:04:35.998933   61989 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 01:04:35.998962   61989 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19696-7623/.minikube CaCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19696-7623/.minikube}
	I0924 01:04:35.998983   61989 buildroot.go:174] setting up certificates
	I0924 01:04:35.998994   61989 provision.go:84] configureAuth start
	I0924 01:04:35.999005   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetMachineName
	I0924 01:04:35.999359   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetIP
	I0924 01:04:36.002499   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.003027   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.003052   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.003167   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:36.005508   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.005773   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.005796   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.005909   61989 provision.go:143] copyHostCerts
	I0924 01:04:36.005967   61989 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem, removing ...
	I0924 01:04:36.005986   61989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0924 01:04:36.006037   61989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem (1082 bytes)
	I0924 01:04:36.006129   61989 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem, removing ...
	I0924 01:04:36.006137   61989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0924 01:04:36.006156   61989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem (1123 bytes)
	I0924 01:04:36.006209   61989 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem, removing ...
	I0924 01:04:36.006216   61989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0924 01:04:36.006237   61989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem (1679 bytes)
	I0924 01:04:36.006310   61989 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-171598 san=[127.0.0.1 192.168.83.3 localhost minikube old-k8s-version-171598]
	I0924 01:04:36.084609   61989 provision.go:177] copyRemoteCerts
	I0924 01:04:36.084671   61989 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 01:04:36.084698   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:36.087740   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.088046   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.088075   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.088278   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:36.088523   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.088716   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:36.088854   61989 sshutil.go:53] new ssh client: &{IP:192.168.83.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa Username:docker}
	I0924 01:04:36.178597   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0924 01:04:36.202768   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0924 01:04:36.225933   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0924 01:04:36.250014   61989 provision.go:87] duration metric: took 251.005829ms to configureAuth
	I0924 01:04:36.250046   61989 buildroot.go:189] setting minikube options for container-runtime
	I0924 01:04:36.250369   61989 config.go:182] Loaded profile config "old-k8s-version-171598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0924 01:04:36.250453   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:36.253290   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.253912   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.253943   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.254242   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:36.254474   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.254650   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.254764   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:36.254958   61989 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:36.255124   61989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.83.3 22 <nil> <nil>}
	I0924 01:04:36.255138   61989 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 01:04:36.472324   61989 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 01:04:36.472381   61989 machine.go:96] duration metric: took 854.106776ms to provisionDockerMachine
	I0924 01:04:36.472401   61989 start.go:293] postStartSetup for "old-k8s-version-171598" (driver="kvm2")
	I0924 01:04:36.472419   61989 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 01:04:36.472451   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:04:36.472814   61989 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 01:04:36.472849   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:36.475567   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.475941   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.475969   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.476125   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:36.476403   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.476614   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:36.476831   61989 sshutil.go:53] new ssh client: &{IP:192.168.83.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa Username:docker}
	I0924 01:04:36.562688   61989 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 01:04:36.566476   61989 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 01:04:36.566501   61989 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/addons for local assets ...
	I0924 01:04:36.566561   61989 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/files for local assets ...
	I0924 01:04:36.566635   61989 filesync.go:149] local asset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> 147932.pem in /etc/ssl/certs
	I0924 01:04:36.566724   61989 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 01:04:36.576132   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /etc/ssl/certs/147932.pem (1708 bytes)
	I0924 01:04:36.599696   61989 start.go:296] duration metric: took 127.276787ms for postStartSetup
	I0924 01:04:36.599738   61989 fix.go:56] duration metric: took 20.366477202s for fixHost
	I0924 01:04:36.599763   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:36.603462   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.603836   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.603867   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.604057   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:36.604500   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.604721   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.604878   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:36.605041   61989 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:36.605285   61989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.83.3 22 <nil> <nil>}
	I0924 01:04:36.605303   61989 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 01:04:36.717061   61989 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727139876.688490589
	
	I0924 01:04:36.717091   61989 fix.go:216] guest clock: 1727139876.688490589
	I0924 01:04:36.717102   61989 fix.go:229] Guest: 2024-09-24 01:04:36.688490589 +0000 UTC Remote: 2024-09-24 01:04:36.599742488 +0000 UTC m=+235.652611441 (delta=88.748101ms)
	I0924 01:04:36.717157   61989 fix.go:200] guest clock delta is within tolerance: 88.748101ms
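The fix.go lines above reduce to a simple check: take the absolute difference between the guest clock (read over SSH with "date +%s.%N") and the host-side timestamp, and accept it if it is under a tolerance. A minimal Go sketch of that comparison, using the timestamps logged above and an assumed 2-second tolerance (illustrative only, not minikube's fix.go):

    // Sketch: compare guest vs. local time and report whether the drift
    // is within a tolerance. Timestamps come from the log above; the
    // tolerance value is an assumption for illustration.
    package main

    import (
        "fmt"
        "time"
    )

    func clockDelta(guest, local time.Time) time.Duration {
        d := guest.Sub(local)
        if d < 0 {
            d = -d
        }
        return d
    }

    func main() {
        guest := time.Unix(0, 1727139876688490589)          // guest "date +%s.%N" from the log
        local := guest.Add(-88748101 * time.Nanosecond)     // local reading 88.748101ms earlier, as logged
        tolerance := 2 * time.Second                        // assumed tolerance
        d := clockDelta(guest, local)
        fmt.Printf("guest clock delta %v within tolerance: %v\n", d, d <= tolerance)
    }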
	I0924 01:04:36.717165   61989 start.go:83] releasing machines lock for "old-k8s-version-171598", held for 20.483937438s
	I0924 01:04:36.717199   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:04:36.717499   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetIP
	I0924 01:04:36.720466   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.720959   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.720986   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.721189   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:04:36.721763   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:04:36.721965   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .DriverName
	I0924 01:04:36.722073   61989 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 01:04:36.722118   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:36.722187   61989 ssh_runner.go:195] Run: cat /version.json
	I0924 01:04:36.722215   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHHostname
	I0924 01:04:36.725171   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.725384   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.725669   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.725694   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.725858   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:36.725970   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:36.726016   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:36.726065   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.726249   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHPort
	I0924 01:04:36.726254   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:36.726494   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHKeyPath
	I0924 01:04:36.726513   61989 sshutil.go:53] new ssh client: &{IP:192.168.83.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa Username:docker}
	I0924 01:04:36.726657   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetSSHUsername
	I0924 01:04:36.727049   61989 sshutil.go:53] new ssh client: &{IP:192.168.83.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/old-k8s-version-171598/id_rsa Username:docker}
	I0924 01:04:36.845385   61989 ssh_runner.go:195] Run: systemctl --version
	I0924 01:04:36.853307   61989 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 01:04:37.001850   61989 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 01:04:37.009873   61989 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 01:04:37.009948   61989 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 01:04:37.032269   61989 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 01:04:37.032299   61989 start.go:495] detecting cgroup driver to use...
	I0924 01:04:37.032403   61989 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 01:04:37.056250   61989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 01:04:37.072827   61989 docker.go:217] disabling cri-docker service (if available) ...
	I0924 01:04:37.072903   61989 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 01:04:37.090639   61989 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 01:04:37.107525   61989 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 01:04:37.235495   61989 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 01:04:37.410971   61989 docker.go:233] disabling docker service ...
	I0924 01:04:37.411034   61989 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 01:04:37.427815   61989 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 01:04:37.444121   61989 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 01:04:37.568933   61989 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 01:04:37.700008   61989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 01:04:37.715529   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 01:04:37.736908   61989 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0924 01:04:37.736980   61989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:37.748540   61989 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 01:04:37.748590   61989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:37.759301   61989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:37.771008   61989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:37.782080   61989 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 01:04:37.793756   61989 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 01:04:37.803444   61989 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 01:04:37.803525   61989 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 01:04:37.818012   61989 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 01:04:37.829019   61989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:04:37.978885   61989 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 01:04:38.086263   61989 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 01:04:38.086353   61989 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 01:04:38.093479   61989 start.go:563] Will wait 60s for crictl version
	I0924 01:04:38.093573   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:38.097486   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 01:04:38.138781   61989 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 01:04:38.138872   61989 ssh_runner.go:195] Run: crio --version
	I0924 01:04:38.166832   61989 ssh_runner.go:195] Run: crio --version
	I0924 01:04:38.199764   61989 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0924 01:04:36.747491   61070 main.go:141] libmachine: (no-preload-674057) Calling .Start
	I0924 01:04:36.747705   61070 main.go:141] libmachine: (no-preload-674057) Ensuring networks are active...
	I0924 01:04:36.748694   61070 main.go:141] libmachine: (no-preload-674057) Ensuring network default is active
	I0924 01:04:36.749079   61070 main.go:141] libmachine: (no-preload-674057) Ensuring network mk-no-preload-674057 is active
	I0924 01:04:36.749656   61070 main.go:141] libmachine: (no-preload-674057) Getting domain xml...
	I0924 01:04:36.750535   61070 main.go:141] libmachine: (no-preload-674057) Creating domain...
	I0924 01:04:38.122450   61070 main.go:141] libmachine: (no-preload-674057) Waiting to get IP...
	I0924 01:04:38.123578   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:38.124107   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:38.124173   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:38.124079   63121 retry.go:31] will retry after 227.552582ms: waiting for machine to come up
	I0924 01:04:38.353724   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:38.354145   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:38.354169   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:38.354102   63121 retry.go:31] will retry after 322.483933ms: waiting for machine to come up
	I0924 01:04:38.678600   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:38.679091   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:38.679120   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:38.679041   63121 retry.go:31] will retry after 301.71366ms: waiting for machine to come up
	I0924 01:04:34.875511   61699 addons.go:510] duration metric: took 1.43884954s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0924 01:04:35.671396   61699 node_ready.go:53] node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:38.169131   61699 node_ready.go:53] node "default-k8s-diff-port-465341" has status "Ready":"False"
	I0924 01:04:35.907681   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:38.408396   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:38.201359   61989 main.go:141] libmachine: (old-k8s-version-171598) Calling .GetIP
	I0924 01:04:38.204699   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:38.205122   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:3c:a7", ip: ""} in network mk-old-k8s-version-171598: {Iface:virbr3 ExpiryTime:2024-09-24 02:04:27 +0000 UTC Type:0 Mac:52:54:00:20:3c:a7 Iaid: IPaddr:192.168.83.3 Prefix:24 Hostname:old-k8s-version-171598 Clientid:01:52:54:00:20:3c:a7}
	I0924 01:04:38.205152   61989 main.go:141] libmachine: (old-k8s-version-171598) DBG | domain old-k8s-version-171598 has defined IP address 192.168.83.3 and MAC address 52:54:00:20:3c:a7 in network mk-old-k8s-version-171598
	I0924 01:04:38.205408   61989 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0924 01:04:38.209456   61989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:04:38.222128   61989 kubeadm.go:883] updating cluster {Name:old-k8s-version-171598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-171598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 01:04:38.222254   61989 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0924 01:04:38.222300   61989 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 01:04:38.276802   61989 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0924 01:04:38.276864   61989 ssh_runner.go:195] Run: which lz4
	I0924 01:04:38.280989   61989 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 01:04:38.285108   61989 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 01:04:38.285138   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0924 01:04:39.903777   61989 crio.go:462] duration metric: took 1.62282331s to copy over tarball
	I0924 01:04:39.903900   61989 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0924 01:04:38.982586   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:38.983239   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:38.983283   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:38.983219   63121 retry.go:31] will retry after 402.217062ms: waiting for machine to come up
	I0924 01:04:39.386903   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:39.387550   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:39.387578   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:39.387483   63121 retry.go:31] will retry after 734.565994ms: waiting for machine to come up
	I0924 01:04:40.123444   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:40.123910   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:40.123940   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:40.123870   63121 retry.go:31] will retry after 704.281941ms: waiting for machine to come up
	I0924 01:04:40.829666   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:40.830217   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:40.830275   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:40.830209   63121 retry.go:31] will retry after 1.068502434s: waiting for machine to come up
	I0924 01:04:41.900192   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:41.900739   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:41.900765   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:41.900691   63121 retry.go:31] will retry after 1.087234201s: waiting for machine to come up
	I0924 01:04:42.989622   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:42.990089   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:42.990117   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:42.990036   63121 retry.go:31] will retry after 1.269273138s: waiting for machine to come up
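The repeated "will retry after ...: waiting for machine to come up" lines are a poll-with-backoff loop: the driver keeps asking for the domain's DHCP lease and sleeps a growing, jittered interval between attempts. A self-contained sketch of that pattern, assuming a generic lookup callback and made-up backoff parameters (this is not minikube's retry.go):

    // Sketch: poll a lookup function until it yields an IP, with jittered,
    // doubling backoff. The callback, IP, and parameters are illustrative.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        backoff := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookup(); err == nil && ip != "" {
                return ip, nil
            }
            wait := backoff + time.Duration(rand.Int63n(int64(backoff))) // add jitter
            fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
            time.Sleep(wait)
            if backoff < 4*time.Second {
                backoff *= 2
            }
        }
        return "", errors.New("timed out waiting for machine IP")
    }

    func main() {
        start := time.Now()
        ip, err := waitForIP(func() (string, error) {
            if time.Since(start) < 2*time.Second {
                return "", errors.New("no DHCP lease yet") // stand-in for the real lease lookup
            }
            return "192.168.61.100", nil // made-up IP for the example
        }, 30*time.Second)
        fmt.Println(ip, err)
    }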
	I0924 01:04:39.168613   61699 node_ready.go:49] node "default-k8s-diff-port-465341" has status "Ready":"True"
	I0924 01:04:39.168638   61699 node_ready.go:38] duration metric: took 5.504799687s for node "default-k8s-diff-port-465341" to be "Ready" ...
	I0924 01:04:39.168650   61699 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:04:39.175830   61699 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xxdh2" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:39.182016   61699 pod_ready.go:93] pod "coredns-7c65d6cfc9-xxdh2" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:39.182040   61699 pod_ready.go:82] duration metric: took 6.182193ms for pod "coredns-7c65d6cfc9-xxdh2" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:39.182052   61699 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:39.188162   61699 pod_ready.go:93] pod "etcd-default-k8s-diff-port-465341" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:39.188191   61699 pod_ready.go:82] duration metric: took 6.130794ms for pod "etcd-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:39.188201   61699 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:39.196197   61699 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-465341" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:39.196225   61699 pod_ready.go:82] duration metric: took 8.016123ms for pod "kube-apiserver-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:39.196238   61699 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:40.703747   61699 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-465341" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:40.703776   61699 pod_ready.go:82] duration metric: took 1.507528182s for pod "kube-controller-manager-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:40.703791   61699 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nf8mp" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:40.771262   61699 pod_ready.go:93] pod "kube-proxy-nf8mp" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:40.771293   61699 pod_ready.go:82] duration metric: took 67.494606ms for pod "kube-proxy-nf8mp" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:40.771307   61699 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:42.778933   61699 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-465341" in "kube-system" namespace has status "Ready":"False"
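The pod_ready.go entries above poll each system-critical pod until its Ready condition reports True or the per-pod timeout expires. A minimal client-go sketch of that wait, assuming a kubeconfig at the default location and a hand-rolled 2-second poll interval; the namespace, pod name, and 6-minute timeout mirror the log, the rest is illustrative rather than minikube's pod_ready.go:

    // Sketch: wait for a pod's Ready condition using client-go.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                return nil
            }
            time.Sleep(2 * time.Second) // assumed poll interval
        }
        return fmt.Errorf("pod %q in %q never became Ready within %v", name, ns, timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        if err := waitPodReady(context.Background(), cs, "kube-system",
            "kube-scheduler-default-k8s-diff-port-465341", 6*time.Minute); err != nil {
            panic(err)
        }
    }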
	I0924 01:04:40.908876   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:43.409650   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:42.944929   61989 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.040984911s)
	I0924 01:04:42.944969   61989 crio.go:469] duration metric: took 3.041152253s to extract the tarball
	I0924 01:04:42.944981   61989 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0924 01:04:42.988315   61989 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 01:04:43.036011   61989 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0924 01:04:43.036045   61989 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0924 01:04:43.036151   61989 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:04:43.036194   61989 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0924 01:04:43.036211   61989 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0924 01:04:43.036281   61989 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 01:04:43.036301   61989 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0924 01:04:43.036344   61989 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0924 01:04:43.036310   61989 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0924 01:04:43.036577   61989 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0924 01:04:43.038440   61989 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0924 01:04:43.038458   61989 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0924 01:04:43.038482   61989 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0924 01:04:43.038502   61989 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0924 01:04:43.038554   61989 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0924 01:04:43.038588   61989 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 01:04:43.038600   61989 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0924 01:04:43.038816   61989 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:04:43.306768   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0924 01:04:43.309660   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0924 01:04:43.312684   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 01:04:43.314551   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0924 01:04:43.317719   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0924 01:04:43.326063   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0924 01:04:43.378736   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0924 01:04:43.405508   61989 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0924 01:04:43.405585   61989 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0924 01:04:43.405648   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:43.452908   61989 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0924 01:04:43.452954   61989 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0924 01:04:43.453006   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:43.471293   61989 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0924 01:04:43.471341   61989 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0924 01:04:43.471347   61989 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0924 01:04:43.471370   61989 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0924 01:04:43.471297   61989 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0924 01:04:43.471406   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:43.471421   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:43.471423   61989 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 01:04:43.471462   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:43.494687   61989 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0924 01:04:43.494735   61989 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0924 01:04:43.494782   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:43.508206   61989 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0924 01:04:43.508253   61989 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0924 01:04:43.508278   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0924 01:04:43.508298   61989 ssh_runner.go:195] Run: which crictl
	I0924 01:04:43.508363   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0924 01:04:43.508419   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0924 01:04:43.508451   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0924 01:04:43.508487   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 01:04:43.508547   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0924 01:04:43.645995   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0924 01:04:43.646039   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0924 01:04:43.646098   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0924 01:04:43.646152   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0924 01:04:43.646261   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0924 01:04:43.646337   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0924 01:04:43.646413   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 01:04:43.817326   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0924 01:04:43.817416   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0924 01:04:43.817381   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0924 01:04:43.817508   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0924 01:04:43.817449   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0924 01:04:43.817597   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0924 01:04:43.817686   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0924 01:04:43.972782   61989 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0924 01:04:43.972792   61989 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0924 01:04:43.972869   61989 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0924 01:04:43.972838   61989 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0924 01:04:43.972928   61989 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0924 01:04:43.972944   61989 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0924 01:04:43.973027   61989 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0924 01:04:44.008191   61989 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0924 01:04:44.220628   61989 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:04:44.364297   61989 cache_images.go:92] duration metric: took 1.328227964s to LoadCachedImages
	W0924 01:04:44.364505   61989 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
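	For context, the image-cache phase above probes the container runtime with "sudo podman image inspect --format {{.Id}}" and, when the expected hash is missing, removes and reloads the image from the local cache. A minimal Go sketch of that probe (illustrative only, not minikube's own code; it assumes passwordless sudo and podman on PATH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imageID returns the runtime's ID for an image, using the same
// `podman image inspect --format {{.Id}}` probe the log shows, so a
// caller could decide whether a cached tarball still needs transferring.
func imageID(image string) (string, error) {
	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if err != nil {
		return "", fmt.Errorf("image %s not present in runtime: %w", image, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	id, err := imageID("registry.k8s.io/coredns:1.7.0")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("found image id:", id)
}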
	I0924 01:04:44.364539   61989 kubeadm.go:934] updating node { 192.168.83.3 8443 v1.20.0 crio true true} ...
	I0924 01:04:44.364681   61989 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-171598 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-171598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 01:04:44.364824   61989 ssh_runner.go:195] Run: crio config
	I0924 01:04:44.423360   61989 cni.go:84] Creating CNI manager for ""
	I0924 01:04:44.423382   61989 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:04:44.423393   61989 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 01:04:44.423412   61989 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.3 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-171598 NodeName:old-k8s-version-171598 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0924 01:04:44.423593   61989 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-171598"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 01:04:44.423671   61989 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0924 01:04:44.434069   61989 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 01:04:44.434143   61989 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 01:04:44.443807   61989 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (428 bytes)
	I0924 01:04:44.463473   61989 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 01:04:44.480449   61989 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2117 bytes)
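	The kubeadm.yaml copied to the guest just above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration, as rendered earlier in this log). A small Go sketch, assuming a local copy of that file and the gopkg.in/yaml.v3 package, that walks the documents and prints each kind; this is illustrative only, not part of the test:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the rendered config
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		// Each document in the stream declares its own kind:
		// InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration.
		fmt.Println("kind:", doc["kind"])
	}
}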
	I0924 01:04:44.498520   61989 ssh_runner.go:195] Run: grep 192.168.83.3	control-plane.minikube.internal$ /etc/hosts
	I0924 01:04:44.503034   61989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.3	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:04:44.516699   61989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:04:44.643090   61989 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 01:04:44.660194   61989 certs.go:68] Setting up /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598 for IP: 192.168.83.3
	I0924 01:04:44.660216   61989 certs.go:194] generating shared ca certs ...
	I0924 01:04:44.660234   61989 certs.go:226] acquiring lock for ca certs: {Name:mk91de42c259e96c17f08dd58c70c00821bd0735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:04:44.660454   61989 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key
	I0924 01:04:44.660542   61989 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key
	I0924 01:04:44.660559   61989 certs.go:256] generating profile certs ...
	I0924 01:04:44.660682   61989 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/client.key
	I0924 01:04:44.660755   61989 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/apiserver.key.577554d3
	I0924 01:04:44.660816   61989 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/proxy-client.key
	I0924 01:04:44.660976   61989 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem (1338 bytes)
	W0924 01:04:44.661014   61989 certs.go:480] ignoring /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793_empty.pem, impossibly tiny 0 bytes
	I0924 01:04:44.661026   61989 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem (1675 bytes)
	I0924 01:04:44.661071   61989 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem (1082 bytes)
	I0924 01:04:44.661104   61989 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem (1123 bytes)
	I0924 01:04:44.661133   61989 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem (1679 bytes)
	I0924 01:04:44.661211   61989 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem (1708 bytes)
	I0924 01:04:44.662130   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 01:04:44.710279   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 01:04:44.736824   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 01:04:44.773120   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0924 01:04:44.801137   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0924 01:04:44.844946   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 01:04:44.880871   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 01:04:44.908630   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0924 01:04:44.947148   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 01:04:44.971925   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem --> /usr/share/ca-certificates/14793.pem (1338 bytes)
	I0924 01:04:45.000519   61989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /usr/share/ca-certificates/147932.pem (1708 bytes)
	I0924 01:04:45.034167   61989 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 01:04:45.054932   61989 ssh_runner.go:195] Run: openssl version
	I0924 01:04:45.062733   61989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14793.pem && ln -fs /usr/share/ca-certificates/14793.pem /etc/ssl/certs/14793.pem"
	I0924 01:04:45.076993   61989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14793.pem
	I0924 01:04:45.082104   61989 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 23:55 /usr/share/ca-certificates/14793.pem
	I0924 01:04:45.082175   61989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14793.pem
	I0924 01:04:45.088219   61989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14793.pem /etc/ssl/certs/51391683.0"
	I0924 01:04:45.099211   61989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147932.pem && ln -fs /usr/share/ca-certificates/147932.pem /etc/ssl/certs/147932.pem"
	I0924 01:04:45.111178   61989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147932.pem
	I0924 01:04:45.116551   61989 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 23:55 /usr/share/ca-certificates/147932.pem
	I0924 01:04:45.116624   61989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147932.pem
	I0924 01:04:45.122353   61989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147932.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 01:04:45.133490   61989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 01:04:45.144123   61989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:45.150437   61989 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:45.150498   61989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:04:45.157127   61989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 01:04:45.168217   61989 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 01:04:45.172865   61989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 01:04:45.179177   61989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 01:04:45.184987   61989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 01:04:45.190927   61989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 01:04:45.197134   61989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 01:04:45.203170   61989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
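	The "openssl x509 -checkend 86400" calls above ask whether each control-plane certificate expires within the next 24 hours. An equivalent check in Go with crypto/x509 (a sketch; the path shown is taken from the log but would normally be read on the guest over SSH, not locally):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// certExpiresWithin reports whether the PEM certificate at path expires
// within the given window (the openssl -checkend equivalent).
func certExpiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	expiring, err := certExpiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", expiring)
}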
	I0924 01:04:45.209550   61989 kubeadm.go:392] StartCluster: {Name:old-k8s-version-171598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-171598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 01:04:45.209721   61989 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 01:04:45.209778   61989 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 01:04:45.247564   61989 cri.go:89] found id: ""
	I0924 01:04:45.247635   61989 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 01:04:45.258171   61989 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 01:04:45.258195   61989 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 01:04:45.258269   61989 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 01:04:45.268247   61989 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 01:04:45.269656   61989 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-171598" does not appear in /home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 01:04:45.270486   61989 kubeconfig.go:62] /home/jenkins/minikube-integration/19696-7623/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-171598" cluster setting kubeconfig missing "old-k8s-version-171598" context setting]
	I0924 01:04:45.271918   61989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/kubeconfig: {Name:mk3b29105ccc2e600682c419fcfd45ce5d22b5fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:04:45.277260   61989 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 01:04:45.287239   61989 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.83.3
	I0924 01:04:45.287271   61989 kubeadm.go:1160] stopping kube-system containers ...
	I0924 01:04:45.287281   61989 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0924 01:04:45.287325   61989 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 01:04:45.327991   61989 cri.go:89] found id: ""
	I0924 01:04:45.328071   61989 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 01:04:45.344693   61989 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 01:04:45.354414   61989 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 01:04:45.354439   61989 kubeadm.go:157] found existing configuration files:
	
	I0924 01:04:45.354499   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 01:04:45.363765   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 01:04:45.363838   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 01:04:45.373569   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 01:04:45.382401   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 01:04:45.382464   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 01:04:45.392710   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 01:04:45.402855   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 01:04:45.402919   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 01:04:45.413651   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 01:04:45.423818   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 01:04:45.423873   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 01:04:45.434138   61989 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 01:04:45.444119   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:45.582409   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:44.261681   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:44.262330   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:44.262360   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:44.262274   63121 retry.go:31] will retry after 1.755704993s: waiting for machine to come up
	I0924 01:04:46.019761   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:46.020213   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:46.020242   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:46.020155   63121 retry.go:31] will retry after 2.038509067s: waiting for machine to come up
	I0924 01:04:48.060649   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:48.061170   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:48.061201   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:48.061122   63121 retry.go:31] will retry after 2.834284151s: waiting for machine to come up
	I0924 01:04:45.021172   61699 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-465341" in "kube-system" namespace has status "Ready":"True"
	I0924 01:04:45.021200   61699 pod_ready.go:82] duration metric: took 4.249884358s for pod "kube-scheduler-default-k8s-diff-port-465341" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:45.021213   61699 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace to be "Ready" ...
	I0924 01:04:47.028860   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:45.908530   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:48.407714   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:46.245754   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:46.511218   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:46.608877   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:04:46.722521   61989 api_server.go:52] waiting for apiserver process to appear ...
	I0924 01:04:46.722607   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:47.222945   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:47.723437   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:48.223704   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:48.723517   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:49.223744   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:49.722691   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:50.222927   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:50.723331   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:50.897541   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:50.898047   61070 main.go:141] libmachine: (no-preload-674057) DBG | unable to find current IP address of domain no-preload-674057 in network mk-no-preload-674057
	I0924 01:04:50.898093   61070 main.go:141] libmachine: (no-preload-674057) DBG | I0924 01:04:50.898018   63121 retry.go:31] will retry after 4.166792416s: waiting for machine to come up
	I0924 01:04:49.530215   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:52.027812   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:50.907425   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:52.907568   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:54.908623   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:51.223525   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:51.722715   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:52.223281   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:52.723378   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:53.222798   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:53.722883   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:54.223279   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:54.723155   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:55.222994   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:55.723628   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
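	The repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" runs above are a simple poll loop: after the kubeadm init phases, minikube waits for the apiserver process to appear, retrying roughly every half second. A hedged Go sketch of such a wait loop (pattern and pgrep flags simplified; this is not the api_server.go implementation):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until a process matching pattern appears
// or the context deadline expires.
func waitForProcess(ctx context.Context, pattern string, interval time.Duration) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		if err := exec.CommandContext(ctx, "pgrep", "-f", pattern).Run(); err == nil {
			return nil // pgrep exits 0 when at least one process matches
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("timed out waiting for %q: %w", pattern, ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	if err := waitForProcess(ctx, "kube-apiserver", 500*time.Millisecond); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kube-apiserver process is up")
}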
	I0924 01:04:55.068642   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.069305   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has current primary IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.069330   61070 main.go:141] libmachine: (no-preload-674057) Found IP for machine: 192.168.50.161
	I0924 01:04:55.069339   61070 main.go:141] libmachine: (no-preload-674057) Reserving static IP address...
	I0924 01:04:55.070035   61070 main.go:141] libmachine: (no-preload-674057) Reserved static IP address: 192.168.50.161
	I0924 01:04:55.070065   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "no-preload-674057", mac: "52:54:00:01:7a:1a", ip: "192.168.50.161"} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.070073   61070 main.go:141] libmachine: (no-preload-674057) Waiting for SSH to be available...
	I0924 01:04:55.070090   61070 main.go:141] libmachine: (no-preload-674057) DBG | skip adding static IP to network mk-no-preload-674057 - found existing host DHCP lease matching {name: "no-preload-674057", mac: "52:54:00:01:7a:1a", ip: "192.168.50.161"}
	I0924 01:04:55.070095   61070 main.go:141] libmachine: (no-preload-674057) DBG | Getting to WaitForSSH function...
	I0924 01:04:55.072715   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.073106   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.073140   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.073351   61070 main.go:141] libmachine: (no-preload-674057) DBG | Using SSH client type: external
	I0924 01:04:55.073379   61070 main.go:141] libmachine: (no-preload-674057) DBG | Using SSH private key: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/no-preload-674057/id_rsa (-rw-------)
	I0924 01:04:55.073405   61070 main.go:141] libmachine: (no-preload-674057) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.161 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19696-7623/.minikube/machines/no-preload-674057/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0924 01:04:55.073444   61070 main.go:141] libmachine: (no-preload-674057) DBG | About to run SSH command:
	I0924 01:04:55.073462   61070 main.go:141] libmachine: (no-preload-674057) DBG | exit 0
	I0924 01:04:55.200585   61070 main.go:141] libmachine: (no-preload-674057) DBG | SSH cmd err, output: <nil>: 
	I0924 01:04:55.200980   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetConfigRaw
	I0924 01:04:55.201650   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetIP
	I0924 01:04:55.204919   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.205340   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.205360   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.205638   61070 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/config.json ...
	I0924 01:04:55.205881   61070 machine.go:93] provisionDockerMachine start ...
	I0924 01:04:55.205903   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:04:55.206124   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:55.208572   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.209012   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.209037   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.209218   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:04:55.209499   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:55.209693   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:55.209832   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:04:55.210010   61070 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:55.210249   61070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0924 01:04:55.210263   61070 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 01:04:55.317027   61070 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 01:04:55.317067   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetMachineName
	I0924 01:04:55.317403   61070 buildroot.go:166] provisioning hostname "no-preload-674057"
	I0924 01:04:55.317441   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetMachineName
	I0924 01:04:55.317700   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:55.320886   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.321301   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.321330   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.321443   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:04:55.321643   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:55.321853   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:55.322010   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:04:55.322169   61070 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:55.322343   61070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0924 01:04:55.322360   61070 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-674057 && echo "no-preload-674057" | sudo tee /etc/hostname
	I0924 01:04:55.439098   61070 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-674057
	
	I0924 01:04:55.439134   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:55.441909   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.442212   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.442256   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.442430   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:04:55.442667   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:55.442890   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:55.443078   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:04:55.443301   61070 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:55.443460   61070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0924 01:04:55.443474   61070 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-674057' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-674057/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-674057' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 01:04:55.558172   61070 main.go:141] libmachine: SSH cmd err, output: <nil>: 
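	The embedded shell command above ensures the guest's /etc/hosts maps 127.0.1.1 to the freshly set hostname, either by rewriting an existing 127.0.1.1 line or by appending one. A rough Go equivalent, operating on a local test file rather than the guest's /etc/hosts (illustrative only):

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// ensureLoopbackHostname mimics the shell snippet above: if no line already
// ends with the hostname, rewrite an existing 127.0.1.1 line or append one.
func ensureLoopbackHostname(path, name string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	text := string(data)
	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(name) + `$`).MatchString(text) {
		return nil // an entry for this hostname already exists
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(text) {
		text = loopback.ReplaceAllString(text, "127.0.1.1 "+name)
	} else {
		text = strings.TrimRight(text, "\n") + "\n127.0.1.1 " + name + "\n"
	}
	return os.WriteFile(path, []byte(text), 0644)
}

func main() {
	if err := ensureLoopbackHostname("hosts.test", "no-preload-674057"); err != nil {
		fmt.Println(err)
	}
}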
	I0924 01:04:55.558204   61070 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19696-7623/.minikube CaCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19696-7623/.minikube}
	I0924 01:04:55.558225   61070 buildroot.go:174] setting up certificates
	I0924 01:04:55.558236   61070 provision.go:84] configureAuth start
	I0924 01:04:55.558248   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetMachineName
	I0924 01:04:55.558574   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetIP
	I0924 01:04:55.561503   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.561891   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.561917   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.562089   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:55.564426   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.564800   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.564825   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.564958   61070 provision.go:143] copyHostCerts
	I0924 01:04:55.565009   61070 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem, removing ...
	I0924 01:04:55.565018   61070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem
	I0924 01:04:55.565074   61070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/ca.pem (1082 bytes)
	I0924 01:04:55.565167   61070 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem, removing ...
	I0924 01:04:55.565175   61070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem
	I0924 01:04:55.565194   61070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/cert.pem (1123 bytes)
	I0924 01:04:55.565253   61070 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem, removing ...
	I0924 01:04:55.565263   61070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem
	I0924 01:04:55.565285   61070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19696-7623/.minikube/key.pem (1679 bytes)
	I0924 01:04:55.565372   61070 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem org=jenkins.no-preload-674057 san=[127.0.0.1 192.168.50.161 localhost minikube no-preload-674057]
	I0924 01:04:55.649690   61070 provision.go:177] copyRemoteCerts
	I0924 01:04:55.649750   61070 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 01:04:55.649774   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:55.652790   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.653249   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.653278   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.653567   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:04:55.653772   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:55.653936   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:04:55.654059   61070 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/no-preload-674057/id_rsa Username:docker}
	I0924 01:04:55.738522   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0924 01:04:55.764045   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0924 01:04:55.788225   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0924 01:04:55.811207   61070 provision.go:87] duration metric: took 252.958643ms to configureAuth
	I0924 01:04:55.811233   61070 buildroot.go:189] setting minikube options for container-runtime
	I0924 01:04:55.811415   61070 config.go:182] Loaded profile config "no-preload-674057": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 01:04:55.811503   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:55.814921   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.815366   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:55.815400   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:55.815597   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:04:55.815826   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:55.816039   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:55.816212   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:04:55.816496   61070 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:55.816740   61070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0924 01:04:55.816756   61070 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0924 01:04:56.045600   61070 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0924 01:04:56.045632   61070 machine.go:96] duration metric: took 839.736907ms to provisionDockerMachine
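	The last provisioning step above writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarts CRI-O so the service CIDR is treated as an insecure registry range. A compact Go sketch of the same idea (illustrative; the real step runs over SSH as root on the guest):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// writeCrioDropIn writes a sysconfig drop-in passing --insecure-registry for
// the service CIDR, then restarts CRI-O so it picks the flag up.
func writeCrioDropIn(path, serviceCIDR string) error {
	content := fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", serviceCIDR)
	if err := os.WriteFile(path, []byte(content), 0644); err != nil {
		return err
	}
	return exec.Command("sudo", "systemctl", "restart", "crio").Run()
}

func main() {
	if err := writeCrioDropIn("/etc/sysconfig/crio.minikube", "10.96.0.0/12"); err != nil {
		fmt.Println(err)
	}
}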
	I0924 01:04:56.045646   61070 start.go:293] postStartSetup for "no-preload-674057" (driver="kvm2")
	I0924 01:04:56.045660   61070 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 01:04:56.045679   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:04:56.045997   61070 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 01:04:56.046027   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:56.049081   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.049522   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:56.049559   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.049743   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:04:56.049960   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:56.050105   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:04:56.050245   61070 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/no-preload-674057/id_rsa Username:docker}
	I0924 01:04:56.136652   61070 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 01:04:56.140894   61070 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 01:04:56.140920   61070 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/addons for local assets ...
	I0924 01:04:56.140987   61070 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7623/.minikube/files for local assets ...
	I0924 01:04:56.141071   61070 filesync.go:149] local asset: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem -> 147932.pem in /etc/ssl/certs
	I0924 01:04:56.141161   61070 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 01:04:56.151170   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /etc/ssl/certs/147932.pem (1708 bytes)
	I0924 01:04:56.179268   61070 start.go:296] duration metric: took 133.605527ms for postStartSetup
	I0924 01:04:56.179318   61070 fix.go:56] duration metric: took 19.461975001s for fixHost
	I0924 01:04:56.179344   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:56.182567   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.182902   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:56.182927   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.183091   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:04:56.183320   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:56.183562   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:56.183720   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:04:56.183865   61070 main.go:141] libmachine: Using SSH client type: native
	I0924 01:04:56.184036   61070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.161 22 <nil> <nil>}
	I0924 01:04:56.184045   61070 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 01:04:56.289079   61070 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727139896.261476318
	
	I0924 01:04:56.289113   61070 fix.go:216] guest clock: 1727139896.261476318
	I0924 01:04:56.289121   61070 fix.go:229] Guest: 2024-09-24 01:04:56.261476318 +0000 UTC Remote: 2024-09-24 01:04:56.17932382 +0000 UTC m=+357.500342999 (delta=82.152498ms)
	I0924 01:04:56.289141   61070 fix.go:200] guest clock delta is within tolerance: 82.152498ms
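The fix.go lines above read the guest's clock over SSH, compute the delta against the host clock, and accept it because it falls inside a tolerance. A minimal Go sketch of that comparison, assuming a one-second tolerance for illustration (the actual tolerance constant is not shown in the log):

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest clock is close enough to the
// host clock that no resync is needed, and returns the absolute delta.
func withinTolerance(host, guest time.Time, tol time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tol
}

func main() {
	host := time.Now()
	guest := host.Add(82 * time.Millisecond) // roughly the drift seen in the log above
	if delta, ok := withinTolerance(host, guest, time.Second); ok {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock delta %v too large, would resync\n", delta)
	}
}
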
	I0924 01:04:56.289156   61070 start.go:83] releasing machines lock for "no-preload-674057", held for 19.57184993s
	I0924 01:04:56.289175   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:04:56.289441   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetIP
	I0924 01:04:56.292799   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.293122   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:56.293148   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.293327   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:04:56.293832   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:04:56.293990   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:04:56.294073   61070 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 01:04:56.294108   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:56.294271   61070 ssh_runner.go:195] Run: cat /version.json
	I0924 01:04:56.294299   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:04:56.296962   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.297113   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.297300   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:56.297325   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.297473   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:56.297504   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:56.297526   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:04:56.297665   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:04:56.297737   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:56.297858   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:04:56.297926   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:04:56.297968   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:04:56.298044   61070 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/no-preload-674057/id_rsa Username:docker}
	I0924 01:04:56.298139   61070 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/no-preload-674057/id_rsa Username:docker}
	I0924 01:04:56.373014   61070 ssh_runner.go:195] Run: systemctl --version
	I0924 01:04:56.412487   61070 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0924 01:04:56.558755   61070 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 01:04:56.565187   61070 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 01:04:56.565245   61070 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 01:04:56.582073   61070 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
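The find/mv command above renames any bridge or podman CNI configs so they stop competing with the CNI config minikube manages. A hedged local equivalent in Go; the directory and the ".mk_disabled" suffix come from the log, and running this against a real /etc/cni/net.d would need root:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableConflictingCNIConfigs renames bridge/podman CNI configs in dir by
// appending ".mk_disabled", mirroring the find/mv pipeline in the log.
func disableConflictingCNIConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableConflictingCNIConfigs("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("disabled:", disabled)
}
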
	I0924 01:04:56.582102   61070 start.go:495] detecting cgroup driver to use...
	I0924 01:04:56.582167   61070 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 01:04:56.597553   61070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 01:04:56.612515   61070 docker.go:217] disabling cri-docker service (if available) ...
	I0924 01:04:56.612564   61070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 01:04:56.627596   61070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 01:04:56.641619   61070 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 01:04:56.762636   61070 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 01:04:56.917742   61070 docker.go:233] disabling docker service ...
	I0924 01:04:56.917821   61070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 01:04:56.934585   61070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 01:04:56.949194   61070 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 01:04:57.085465   61070 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 01:04:57.230529   61070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
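Because this profile uses CRI-O, the cri-docker and docker units are stopped, their sockets disabled, and the services masked before the runtime is reconfigured. A sketch of that systemctl sequence for one service/socket pair, using the same order of operations seen in the log (individual failures are tolerated, as they are in the log when a unit is absent):

package main

import (
	"fmt"
	"os/exec"
)

// disableRuntime stops and masks a competing container runtime, following the
// stop-socket / stop-service / disable-socket / mask-service order above.
func disableRuntime(service, socket string) {
	steps := [][]string{
		{"systemctl", "stop", "-f", socket},
		{"systemctl", "stop", "-f", service},
		{"systemctl", "disable", socket},
		{"systemctl", "mask", service},
	}
	for _, args := range steps {
		if err := exec.Command("sudo", args...).Run(); err != nil {
			// A step may fail harmlessly if the unit does not exist.
			fmt.Printf("%v: %v\n", args, err)
		}
	}
}

func main() {
	disableRuntime("cri-docker.service", "cri-docker.socket")
	disableRuntime("docker.service", "docker.socket")
}
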
	I0924 01:04:57.245369   61070 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 01:04:57.265137   61070 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0924 01:04:57.265196   61070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:57.276878   61070 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0924 01:04:57.276936   61070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:57.288934   61070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:57.300690   61070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:57.312392   61070 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 01:04:57.324491   61070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:57.335619   61070 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:57.352868   61070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0924 01:04:57.363280   61070 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 01:04:57.372811   61070 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 01:04:57.372866   61070 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 01:04:57.385797   61070 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
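The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the expected pause image and the cgroupfs cgroup manager; when the bridge-nf-call-iptables sysctl is missing, br_netfilter is loaded and IP forwarding is enabled as a fallback. A hedged sketch of the two config substitutions done in Go (the regular expressions only approximate the sed expressions in the log):

package main

import (
	"fmt"
	"os"
	"regexp"
)

var (
	pauseImageRe = regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroupMgrRe  = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
)

// rewriteCrioConf pins the pause image and cgroup manager in a CRI-O drop-in,
// approximating the sed -i invocations from the log.
func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := pauseImageRe.ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
	out = cgroupMgrRe.ReplaceAll(out, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.10", "cgroupfs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
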
	I0924 01:04:57.395936   61070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:04:57.532086   61070 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0924 01:04:57.628275   61070 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0924 01:04:57.628370   61070 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0924 01:04:57.633679   61070 start.go:563] Will wait 60s for crictl version
	I0924 01:04:57.633761   61070 ssh_runner.go:195] Run: which crictl
	I0924 01:04:57.637574   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 01:04:57.679667   61070 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0924 01:04:57.679756   61070 ssh_runner.go:195] Run: crio --version
	I0924 01:04:57.707710   61070 ssh_runner.go:195] Run: crio --version
	I0924 01:04:57.738651   61070 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0924 01:04:57.740120   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetIP
	I0924 01:04:57.743379   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:57.743783   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:04:57.743814   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:04:57.744048   61070 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0924 01:04:57.748516   61070 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
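The bash one-liner above drops any stale host.minikube.internal entry from /etc/hosts and appends the current gateway IP. The same idea expressed in Go, operating on the same file and hostname; in minikube this happens over SSH inside the guest, whereas this sketch edits a local file:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry removes any line ending in "\t<hostname>" and appends
// "ip\thostname", mirroring the grep -v / echo / cp pipeline in the log.
func ensureHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	var kept []string
	for _, line := range lines {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+hostname)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.50.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
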
	I0924 01:04:57.762723   61070 kubeadm.go:883] updating cluster {Name:no-preload-674057 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-674057 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.161 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 01:04:57.762864   61070 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0924 01:04:57.762906   61070 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 01:04:57.798232   61070 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0924 01:04:57.798260   61070 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0924 01:04:57.798334   61070 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:04:57.798357   61070 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0924 01:04:57.798377   61070 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0924 01:04:57.798340   61070 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0924 01:04:57.798397   61070 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 01:04:57.798381   61070 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0924 01:04:57.798491   61070 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0924 01:04:57.798491   61070 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0924 01:04:57.799811   61070 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:04:57.799819   61070 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0924 01:04:57.799826   61070 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0924 01:04:57.799811   61070 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 01:04:57.799840   61070 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0924 01:04:57.799893   61070 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0924 01:04:57.799902   61070 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0924 01:04:57.799903   61070 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
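Because this is the no-preload profile, the daemon lookups above all fail and minikube falls back to the image tarballs cached under .minikube/cache/images. A simplified sketch of the per-image decision that plays out in the following lines (inspect the image in the runtime, drop a stale tag, then podman-load the cached tarball); in the real flow the inspect result is also compared against an expected image ID, and every command runs through ssh_runner rather than locally:

package main

import (
	"fmt"
	"os/exec"
)

// ensureImage loads a cached tarball into CRI-O via podman when the image is
// not already present in the runtime, following the sequence in the log.
func ensureImage(image, tarball string) error {
	// "podman image inspect" succeeds only if the image already exists.
	if err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Run(); err == nil {
		return nil // already present, nothing to transfer
	}
	// Remove any stale tag first, then load the cached tarball.
	_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", image).Run()
	if err := exec.Command("sudo", "podman", "load", "-i", tarball).Run(); err != nil {
		return fmt.Errorf("load %s: %w", image, err)
	}
	return nil
}

func main() {
	err := ensureImage("registry.k8s.io/kube-scheduler:v1.31.1",
		"/var/lib/minikube/images/kube-scheduler_v1.31.1")
	fmt.Println(err)
}
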
	I0924 01:04:58.027261   61070 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0924 01:04:58.028437   61070 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0924 01:04:58.051940   61070 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0924 01:04:58.082860   61070 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0924 01:04:58.088073   61070 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0924 01:04:58.088121   61070 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0924 01:04:58.088190   61070 ssh_runner.go:195] Run: which crictl
	I0924 01:04:58.095081   61070 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 01:04:58.098388   61070 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0924 01:04:58.152389   61070 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0924 01:04:58.190893   61070 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0924 01:04:58.190920   61070 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0924 01:04:58.190934   61070 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0924 01:04:58.190944   61070 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0924 01:04:58.190984   61070 ssh_runner.go:195] Run: which crictl
	I0924 01:04:58.191029   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0924 01:04:58.190988   61070 ssh_runner.go:195] Run: which crictl
	I0924 01:04:58.191080   61070 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0924 01:04:58.191109   61070 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 01:04:58.191134   61070 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0924 01:04:58.191144   61070 ssh_runner.go:195] Run: which crictl
	I0924 01:04:58.191157   61070 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0924 01:04:58.191185   61070 ssh_runner.go:195] Run: which crictl
	I0924 01:04:58.219642   61070 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0924 01:04:58.219689   61070 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0924 01:04:58.219703   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 01:04:58.219729   61070 ssh_runner.go:195] Run: which crictl
	I0924 01:04:58.219741   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0924 01:04:58.219745   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0924 01:04:58.250341   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0924 01:04:58.250394   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0924 01:04:58.320188   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0924 01:04:58.320222   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 01:04:58.320308   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0924 01:04:58.320394   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0924 01:04:58.383126   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0924 01:04:58.383327   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0924 01:04:58.453833   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 01:04:58.453918   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0924 01:04:58.453878   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0924 01:04:58.453923   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0924 01:04:58.499994   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0924 01:04:58.500027   61070 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0924 01:04:58.500119   61070 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0924 01:04:58.583372   61070 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0924 01:04:58.583491   61070 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0924 01:04:58.586213   61070 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0924 01:04:58.586281   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0924 01:04:58.586325   61070 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0924 01:04:58.586328   61070 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0924 01:04:58.586405   61070 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0924 01:04:58.616022   61070 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0924 01:04:58.616061   61070 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0924 01:04:58.616082   61070 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0924 01:04:58.616118   61070 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0924 01:04:58.616131   61070 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0924 01:04:58.616180   61070 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0924 01:04:58.616128   61070 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0924 01:04:58.647507   61070 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0924 01:04:58.647576   61070 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0924 01:04:58.647620   61070 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0924 01:04:58.647659   61070 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
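The repeated stat -c "%s %y" calls followed by "copy: skipping ... (exists)" show how the tarball transfer itself is short-circuited: the remote file's size and modification time are compared with the local cache entry and the scp is skipped on a match. A hedged sketch of that comparison for two local paths (minikube performs the stat over SSH and parses the "%s %y" output):

package main

import (
	"fmt"
	"os"
)

// needsCopy reports whether dst must be (re)copied from src, using the
// size+mtime heuristic suggested by the stat / "copy: skipping" lines above.
func needsCopy(src, dst string) (bool, error) {
	s, err := os.Stat(src)
	if err != nil {
		return false, err
	}
	d, err := os.Stat(dst)
	if err != nil {
		return true, nil // destination missing: copy it
	}
	return s.Size() != d.Size() || !s.ModTime().Equal(d.ModTime()), nil
}

func main() {
	copyNeeded, err := needsCopy(
		"/home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1",
		"/var/lib/minikube/images/kube-scheduler_v1.31.1")
	fmt.Println(copyNeeded, err)
}
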
	I0924 01:04:54.527399   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:57.028355   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:57.407381   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:59.908596   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:04:56.222908   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:56.722701   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:57.222762   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:57.722814   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:58.222671   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:58.722746   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:59.222961   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:04:59.723335   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:00.223393   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:00.722739   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
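The block of "sudo pgrep -xnf kube-apiserver.*minikube.*" lines from process 61989 is a fixed-interval poll, roughly every 500ms judging by the timestamps, waiting for a kube-apiserver process to appear inside the guest. A small sketch of that wait loop; the interval and timeout here are read off the timestamps and the surrounding "Will wait 60s" messages, not taken from minikube's source:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep until a matching kube-apiserver process
// shows up or the deadline passes, mirroring the repeated log lines above.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
		if err := cmd.Run(); err == nil {
			return nil // pgrep exits 0 once a matching process exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %v", timeout)
}

func main() {
	fmt.Println(waitForAPIServerProcess(60 * time.Second))
}
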
	I0924 01:04:59.003431   61070 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:05:00.815541   61070 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.199297236s)
	I0924 01:05:00.815566   61070 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (2.167859705s)
	I0924 01:05:00.815579   61070 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0924 01:05:00.815599   61070 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0924 01:05:00.815619   61070 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0924 01:05:00.815625   61070 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.812143064s)
	I0924 01:05:00.815674   61070 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0924 01:05:00.815687   61070 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0924 01:05:00.815710   61070 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:05:00.815750   61070 ssh_runner.go:195] Run: which crictl
	I0924 01:05:02.782328   61070 ssh_runner.go:235] Completed: which crictl: (1.966554191s)
	I0924 01:05:02.782392   61070 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.966688239s)
	I0924 01:05:02.782421   61070 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0924 01:05:02.782445   61070 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0924 01:05:02.782497   61070 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0924 01:05:02.782404   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:04:59.529167   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:01.531324   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:04.028305   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:02.407051   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:04.475255   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:01.222765   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:01.722729   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:02.223407   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:02.722799   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:03.223381   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:03.723427   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:04.223157   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:04.723069   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:05.223400   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:05.723739   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:04.773493   61070 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.990910382s)
	I0924 01:05:04.773540   61070 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.99101415s)
	I0924 01:05:04.773560   61070 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0924 01:05:04.773577   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:05:04.773584   61070 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0924 01:05:04.773615   61070 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0924 01:05:08.061466   61070 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.287832238s)
	I0924 01:05:08.061499   61070 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0924 01:05:08.061510   61070 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.287911454s)
	I0924 01:05:08.061595   61070 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:05:08.061520   61070 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0924 01:05:08.061690   61070 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0924 01:05:06.029255   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:08.527617   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:06.907268   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:08.907464   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:06.223395   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:06.723345   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:07.222965   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:07.722795   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:08.222933   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:08.723687   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:09.223526   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:09.723684   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:10.223275   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:10.723534   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:10.041517   61070 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.979809714s)
	I0924 01:05:10.041549   61070 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0924 01:05:10.041577   61070 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.979956931s)
	I0924 01:05:10.041625   61070 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0924 01:05:10.041582   61070 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0924 01:05:10.041714   61070 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0924 01:05:10.041727   61070 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0924 01:05:12.005649   61070 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.963906504s)
	I0924 01:05:12.005689   61070 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0924 01:05:12.005696   61070 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.963951454s)
	I0924 01:05:12.005720   61070 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0924 01:05:12.005727   61070 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0924 01:05:12.005768   61070 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0924 01:05:12.960728   61070 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19696-7623/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0924 01:05:12.960771   61070 cache_images.go:123] Successfully loaded all cached images
	I0924 01:05:12.960778   61070 cache_images.go:92] duration metric: took 15.162496206s to LoadCachedImages
	I0924 01:05:12.960791   61070 kubeadm.go:934] updating node { 192.168.50.161 8443 v1.31.1 crio true true} ...
	I0924 01:05:12.960931   61070 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-674057 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.161
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-674057 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 01:05:12.961013   61070 ssh_runner.go:195] Run: crio config
	I0924 01:05:13.006511   61070 cni.go:84] Creating CNI manager for ""
	I0924 01:05:13.006535   61070 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:05:13.006551   61070 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 01:05:13.006579   61070 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.161 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-674057 NodeName:no-preload-674057 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.161"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.161 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 01:05:13.006729   61070 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.161
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-674057"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.161
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.161"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 01:05:13.006799   61070 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 01:05:13.017598   61070 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 01:05:13.017672   61070 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 01:05:13.027414   61070 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0924 01:05:13.044688   61070 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 01:05:13.061646   61070 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0924 01:05:13.079552   61070 ssh_runner.go:195] Run: grep 192.168.50.161	control-plane.minikube.internal$ /etc/hosts
	I0924 01:05:13.083172   61070 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.161	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:05:13.095232   61070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:05:13.207184   61070 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 01:05:13.222851   61070 certs.go:68] Setting up /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057 for IP: 192.168.50.161
	I0924 01:05:13.222880   61070 certs.go:194] generating shared ca certs ...
	I0924 01:05:13.222901   61070 certs.go:226] acquiring lock for ca certs: {Name:mk91de42c259e96c17f08dd58c70c00821bd0735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:05:13.223084   61070 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key
	I0924 01:05:13.223184   61070 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key
	I0924 01:05:13.223195   61070 certs.go:256] generating profile certs ...
	I0924 01:05:13.223314   61070 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/client.key
	I0924 01:05:13.223394   61070 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/apiserver.key.8fa8fb95
	I0924 01:05:13.223445   61070 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/proxy-client.key
	I0924 01:05:13.223614   61070 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem (1338 bytes)
	W0924 01:05:13.223654   61070 certs.go:480] ignoring /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793_empty.pem, impossibly tiny 0 bytes
	I0924 01:05:13.223710   61070 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca-key.pem (1675 bytes)
	I0924 01:05:13.223756   61070 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/ca.pem (1082 bytes)
	I0924 01:05:13.223785   61070 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/cert.pem (1123 bytes)
	I0924 01:05:13.223818   61070 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/certs/key.pem (1679 bytes)
	I0924 01:05:13.223862   61070 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem (1708 bytes)
	I0924 01:05:13.224549   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 01:05:13.273224   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 01:05:13.311069   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 01:05:13.342314   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0924 01:05:13.369345   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0924 01:05:13.395466   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 01:05:13.424307   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 01:05:13.448531   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0924 01:05:13.472491   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/ssl/certs/147932.pem --> /usr/share/ca-certificates/147932.pem (1708 bytes)
	I0924 01:05:13.496060   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 01:05:13.521182   61070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7623/.minikube/certs/14793.pem --> /usr/share/ca-certificates/14793.pem (1338 bytes)
	I0924 01:05:13.548194   61070 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 01:05:13.566423   61070 ssh_runner.go:195] Run: openssl version
	I0924 01:05:13.572605   61070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 01:05:13.583991   61070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:05:13.588705   61070 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:05:13.588771   61070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:05:13.594828   61070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 01:05:13.606168   61070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14793.pem && ln -fs /usr/share/ca-certificates/14793.pem /etc/ssl/certs/14793.pem"
	I0924 01:05:13.617723   61070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14793.pem
	I0924 01:05:13.622697   61070 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 23:55 /usr/share/ca-certificates/14793.pem
	I0924 01:05:13.622762   61070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14793.pem
	I0924 01:05:13.628486   61070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14793.pem /etc/ssl/certs/51391683.0"
	I0924 01:05:13.639176   61070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147932.pem && ln -fs /usr/share/ca-certificates/147932.pem /etc/ssl/certs/147932.pem"
	I0924 01:05:13.650161   61070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147932.pem
	I0924 01:05:13.654546   61070 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 23:55 /usr/share/ca-certificates/147932.pem
	I0924 01:05:13.654625   61070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147932.pem
	I0924 01:05:13.660382   61070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147932.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 01:05:13.671487   61070 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 01:05:13.676226   61070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 01:05:13.682591   61070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 01:05:13.688492   61070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 01:05:13.694726   61070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 01:05:13.700432   61070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 01:05:13.706080   61070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
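The series of "openssl x509 -noout -in ... -checkend 86400" runs verifies that each control-plane certificate remains valid for at least the next 24 hours before it is reused. An equivalent check in Go for a single PEM file; the 24h window matches the 86400 seconds in the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the first certificate in the PEM file is still
// valid for at least the given duration, like "openssl x509 -checkend".
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}
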
	I0924 01:05:13.712226   61070 kubeadm.go:392] StartCluster: {Name:no-preload-674057 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-674057 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.161 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 01:05:13.712323   61070 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0924 01:05:13.712421   61070 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 01:05:11.028779   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:13.527996   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:10.908227   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:13.408515   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:11.223272   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:11.723442   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:12.223301   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:12.723151   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:13.223174   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:13.722780   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:14.222777   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:14.722987   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:15.223654   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:15.723449   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:13.757518   61070 cri.go:89] found id: ""
	I0924 01:05:13.757597   61070 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 01:05:13.768318   61070 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 01:05:13.768367   61070 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 01:05:13.768416   61070 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 01:05:13.778918   61070 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 01:05:13.780385   61070 kubeconfig.go:125] found "no-preload-674057" server: "https://192.168.50.161:8443"
	I0924 01:05:13.783392   61070 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 01:05:13.794016   61070 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.161
	I0924 01:05:13.794050   61070 kubeadm.go:1160] stopping kube-system containers ...
	I0924 01:05:13.794085   61070 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0924 01:05:13.794150   61070 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 01:05:13.833511   61070 cri.go:89] found id: ""
	I0924 01:05:13.833596   61070 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 01:05:13.851608   61070 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 01:05:13.861469   61070 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 01:05:13.861510   61070 kubeadm.go:157] found existing configuration files:
	
	I0924 01:05:13.861552   61070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 01:05:13.870700   61070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 01:05:13.870770   61070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 01:05:13.880613   61070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 01:05:13.890336   61070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 01:05:13.890404   61070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 01:05:13.900172   61070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 01:05:13.910408   61070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 01:05:13.910475   61070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 01:05:13.919980   61070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 01:05:13.929398   61070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 01:05:13.929495   61070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
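	The four grep/rm pairs above are the stale-config check: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and deleted when the endpoint is absent (or, as here, when the file does not exist at all), so kubeadm can regenerate it. A minimal Go sketch of that pattern, assuming a plain local shell rather than the SSH runner used in the log (the helper name and hard-coded file list are illustrative, not minikube's code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// cleanStaleConfigs mirrors the grep-then-remove pattern from the log:
	// any kubeconfig that does not reference the expected control-plane
	// endpoint is removed so kubeadm can regenerate it.
	func cleanStaleConfigs(endpoint string, files []string) {
		for _, f := range files {
			// grep exits non-zero when the endpoint is missing or the file
			// does not exist; either way the file is treated as stale.
			if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
				fmt.Printf("%q not found in %s, removing\n", endpoint, f)
				_ = exec.Command("sudo", "rm", "-f", f).Run()
			}
		}
	}

	func main() {
		cleanStaleConfigs("https://control-plane.minikube.internal:8443", []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		})
	}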
	I0924 01:05:13.938894   61070 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 01:05:13.948749   61070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:05:14.056463   61070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:05:15.345268   61070 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.288763261s)
	I0924 01:05:15.345317   61070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:05:15.555986   61070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:05:15.626986   61070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:05:15.697665   61070 api_server.go:52] waiting for apiserver process to appear ...
	I0924 01:05:15.697761   61070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:16.198410   61070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:16.698860   61070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:16.715727   61070 api_server.go:72] duration metric: took 1.018058771s to wait for apiserver process to appear ...
	I0924 01:05:16.715756   61070 api_server.go:88] waiting for apiserver healthz status ...
	I0924 01:05:16.715779   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
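	The long runs of "sudo pgrep -xnf kube-apiserver.*minikube.*" throughout this section are a roughly half-second poll for the kube-apiserver process to appear after the kubelet restart; the wait above for this cluster finished in about a second. A minimal Go sketch of such a poll loop, assuming local exec instead of the SSH runner (function name and timeout are illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServerProcess polls pgrep until a kube-apiserver process
	// matching the minikube config shows up, or the deadline passes.
	func waitForAPIServerProcess(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
			if err == nil && len(out) > 0 {
				fmt.Printf("kube-apiserver pid: %s", out)
				return nil
			}
			time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence seen in the log
		}
		return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
	}

	func main() {
		if err := waitForAPIServerProcess(4 * time.Minute); err != nil {
			fmt.Println(err)
		}
	}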
	I0924 01:05:15.528157   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:17.528680   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:15.906930   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:17.907223   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:16.223623   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:16.723625   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:17.223541   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:17.722702   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:18.222919   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:18.722982   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:19.222978   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:19.723547   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:20.223112   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:20.723562   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:21.716809   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 01:05:21.716852   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:19.528769   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:22.028695   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:20.406693   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:22.407036   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:24.906735   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:21.223058   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:21.722680   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:22.223693   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:22.722716   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:23.223387   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:23.722910   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:24.223608   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:24.723144   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:25.223442   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:25.723025   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:26.717768   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 01:05:26.717811   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:24.527568   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:26.527806   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:29.028455   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:27.406994   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:29.906590   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:26.222782   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:26.723271   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:27.223163   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:27.723283   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:28.222782   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:28.723174   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:29.222803   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:29.723029   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:30.223679   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:30.723058   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:31.718277   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 01:05:31.718317   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:31.028690   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:33.527675   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:31.906723   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:34.406306   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:31.223465   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:31.723438   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:32.223673   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:32.722674   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:33.223289   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:33.723651   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:34.223014   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:34.723518   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:35.222860   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:35.723642   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:36.718676   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 01:05:36.718716   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:37.146737   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": read tcp 192.168.50.1:59880->192.168.50.161:8443: read: connection reset by peer
	I0924 01:05:37.215865   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:37.216506   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": dial tcp 192.168.50.161:8443: connect: connection refused
	I0924 01:05:37.716052   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:37.716731   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": dial tcp 192.168.50.161:8443: connect: connection refused
	I0924 01:05:38.216296   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:36.028537   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:38.032544   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:36.406928   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:38.407201   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:36.222680   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:36.723015   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:37.222736   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:37.723185   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:38.223070   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:38.723237   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:39.223640   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:39.723622   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:40.222705   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:40.722909   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:43.217518   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 01:05:43.217557   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:40.527577   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:43.027715   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:40.906522   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:42.906906   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:44.907623   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:41.223105   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:41.723166   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:42.223286   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:42.723048   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:43.223278   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:43.723301   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:44.222712   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:44.723191   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:45.223720   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:45.723044   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:48.217915   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 01:05:48.217982   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:45.028780   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:47.028883   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:47.406680   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:49.907776   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:46.223270   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:46.722902   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:05:46.722980   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:05:46.781519   61989 cri.go:89] found id: ""
	I0924 01:05:46.781551   61989 logs.go:276] 0 containers: []
	W0924 01:05:46.781565   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:05:46.781574   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:05:46.781630   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:05:46.815990   61989 cri.go:89] found id: ""
	I0924 01:05:46.816021   61989 logs.go:276] 0 containers: []
	W0924 01:05:46.816030   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:05:46.816035   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:05:46.816082   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:05:46.848951   61989 cri.go:89] found id: ""
	I0924 01:05:46.848980   61989 logs.go:276] 0 containers: []
	W0924 01:05:46.848989   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:05:46.848995   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:05:46.849062   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:05:46.880731   61989 cri.go:89] found id: ""
	I0924 01:05:46.880756   61989 logs.go:276] 0 containers: []
	W0924 01:05:46.880764   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:05:46.880770   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:05:46.880832   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:05:46.915975   61989 cri.go:89] found id: ""
	I0924 01:05:46.916004   61989 logs.go:276] 0 containers: []
	W0924 01:05:46.916014   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:05:46.916036   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:05:46.916105   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:05:46.954124   61989 cri.go:89] found id: ""
	I0924 01:05:46.954154   61989 logs.go:276] 0 containers: []
	W0924 01:05:46.954162   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:05:46.954168   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:05:46.954233   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:05:46.990454   61989 cri.go:89] found id: ""
	I0924 01:05:46.990489   61989 logs.go:276] 0 containers: []
	W0924 01:05:46.990498   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:05:46.990504   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:05:46.990573   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:05:47.024099   61989 cri.go:89] found id: ""
	I0924 01:05:47.024137   61989 logs.go:276] 0 containers: []
	W0924 01:05:47.024150   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:05:47.024161   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:05:47.024176   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:05:47.153050   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:05:47.153076   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:05:47.153109   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:05:47.223472   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:05:47.223511   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:05:47.267699   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:05:47.267729   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:05:47.314741   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:05:47.314773   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
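	When the process poll keeps coming back empty, the loop above switches to collecting diagnostics: it asks crictl for each expected control-plane component by name and, finding none, gathers describe-nodes, CRI-O, container-status, kubelet and dmesg output. A condensed Go sketch of the crictl-by-name check, assuming local exec (helper name is illustrative, not minikube's code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainersByName returns the container IDs crictl reports for a
	// given name filter, in any state ("-a"), matching the queries in the log.
	func listContainersByName(name string) []string {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name", name).Output()
		if err != nil || len(strings.TrimSpace(string(out))) == 0 {
			return nil
		}
		return strings.Fields(string(out))
	}

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, c := range components {
			ids := listContainersByName(c)
			if len(ids) == 0 {
				fmt.Printf("no container found matching %q\n", c)
				continue
			}
			fmt.Printf("%s: %v\n", c, ids)
		}
	}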
	I0924 01:05:49.828972   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:49.842301   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:05:49.842378   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:05:49.874632   61989 cri.go:89] found id: ""
	I0924 01:05:49.874659   61989 logs.go:276] 0 containers: []
	W0924 01:05:49.874669   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:05:49.874676   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:05:49.874734   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:05:49.912500   61989 cri.go:89] found id: ""
	I0924 01:05:49.912524   61989 logs.go:276] 0 containers: []
	W0924 01:05:49.912532   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:05:49.912543   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:05:49.912592   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:05:49.947297   61989 cri.go:89] found id: ""
	I0924 01:05:49.947320   61989 logs.go:276] 0 containers: []
	W0924 01:05:49.947328   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:05:49.947334   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:05:49.947395   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:05:49.983863   61989 cri.go:89] found id: ""
	I0924 01:05:49.983892   61989 logs.go:276] 0 containers: []
	W0924 01:05:49.983905   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:05:49.983915   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:05:49.983977   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:05:50.022997   61989 cri.go:89] found id: ""
	I0924 01:05:50.023031   61989 logs.go:276] 0 containers: []
	W0924 01:05:50.023044   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:05:50.023053   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:05:50.023109   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:05:50.057829   61989 cri.go:89] found id: ""
	I0924 01:05:50.057863   61989 logs.go:276] 0 containers: []
	W0924 01:05:50.057875   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:05:50.057882   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:05:50.057929   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:05:50.114599   61989 cri.go:89] found id: ""
	I0924 01:05:50.114620   61989 logs.go:276] 0 containers: []
	W0924 01:05:50.114628   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:05:50.114633   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:05:50.114677   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:05:50.147294   61989 cri.go:89] found id: ""
	I0924 01:05:50.147326   61989 logs.go:276] 0 containers: []
	W0924 01:05:50.147334   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:05:50.147345   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:05:50.147378   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:05:50.198362   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:05:50.198402   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:05:50.212381   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:05:50.212415   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:05:50.286216   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:05:50.286261   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:05:50.286279   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:05:50.366794   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:05:50.366827   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:05:53.218617   61070 api_server.go:269] stopped: https://192.168.50.161:8443/healthz: Get "https://192.168.50.161:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 01:05:53.218653   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:49.527980   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:52.027425   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:54.027780   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:51.908078   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:54.406891   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:52.908167   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:52.922279   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:05:52.922353   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:05:52.956677   61989 cri.go:89] found id: ""
	I0924 01:05:52.956708   61989 logs.go:276] 0 containers: []
	W0924 01:05:52.956720   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:05:52.956727   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:05:52.956778   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:05:52.990933   61989 cri.go:89] found id: ""
	I0924 01:05:52.990956   61989 logs.go:276] 0 containers: []
	W0924 01:05:52.990964   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:05:52.990970   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:05:52.991019   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:05:53.025729   61989 cri.go:89] found id: ""
	I0924 01:05:53.025758   61989 logs.go:276] 0 containers: []
	W0924 01:05:53.025768   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:05:53.025778   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:05:53.025838   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:05:53.060238   61989 cri.go:89] found id: ""
	I0924 01:05:53.060269   61989 logs.go:276] 0 containers: []
	W0924 01:05:53.060279   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:05:53.060287   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:05:53.060366   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:05:53.094166   61989 cri.go:89] found id: ""
	I0924 01:05:53.094200   61989 logs.go:276] 0 containers: []
	W0924 01:05:53.094212   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:05:53.094220   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:05:53.094289   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:05:53.129857   61989 cri.go:89] found id: ""
	I0924 01:05:53.129884   61989 logs.go:276] 0 containers: []
	W0924 01:05:53.129892   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:05:53.129898   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:05:53.129955   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:05:53.165857   61989 cri.go:89] found id: ""
	I0924 01:05:53.165890   61989 logs.go:276] 0 containers: []
	W0924 01:05:53.165898   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:05:53.165909   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:05:53.165970   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:05:53.203884   61989 cri.go:89] found id: ""
	I0924 01:05:53.203909   61989 logs.go:276] 0 containers: []
	W0924 01:05:53.203917   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:05:53.203926   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:05:53.203937   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:05:53.258001   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:05:53.258035   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:05:53.271584   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:05:53.271620   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:05:53.341791   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:05:53.341811   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:05:53.341824   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:05:53.424126   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:05:53.424170   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:05:55.962067   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:55.977964   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:05:55.978042   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:05:56.277329   61070 api_server.go:279] https://192.168.50.161:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0924 01:05:56.277366   61070 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0924 01:05:56.277385   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:56.302576   61070 api_server.go:279] https://192.168.50.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:05:56.302628   61070 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:05:56.715873   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:56.722458   61070 api_server.go:279] https://192.168.50.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:05:56.722487   61070 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:05:57.216714   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:57.224426   61070 api_server.go:279] https://192.168.50.161:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0924 01:05:57.224474   61070 api_server.go:103] status: https://192.168.50.161:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0924 01:05:57.715976   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:05:57.725067   61070 api_server.go:279] https://192.168.50.161:8443/healthz returned 200:
	ok
	I0924 01:05:57.734749   61070 api_server.go:141] control plane version: v1.31.1
	I0924 01:05:57.734782   61070 api_server.go:131] duration metric: took 41.019017744s to wait for apiserver health ...
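	The ~41s wait that just completed is an unauthenticated poll of the apiserver's /healthz endpoint: connection errors and timeouts keep the loop going, a 403 for system:anonymous or a 500 with failed post-start hooks is logged but still counts as unhealthy, and the wait ends on the first 200/ok. A minimal Go sketch of such a probe, assuming the serving certificate is deliberately not verified for this anonymous check (URL and timings are illustrative):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls /healthz until it returns 200, mirroring the
	// behaviour in the log: non-200 responses are reported and retried.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The probe is anonymous, so the apiserver's serving cert is not verified here.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz returned 200: %s\n", body)
					return nil
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.50.161:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}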
	I0924 01:05:57.734793   61070 cni.go:84] Creating CNI manager for ""
	I0924 01:05:57.734801   61070 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:05:57.736798   61070 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 01:05:57.738285   61070 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 01:05:57.750654   61070 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0924 01:05:57.778587   61070 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 01:05:57.804858   61070 system_pods.go:59] 8 kube-system pods found
	I0924 01:05:57.804907   61070 system_pods.go:61] "coredns-7c65d6cfc9-kshwz" [4393c6ec-abd9-42ce-af67-9e8b768bd49b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0924 01:05:57.804917   61070 system_pods.go:61] "etcd-no-preload-674057" [65cf3acb-8ffa-4f83-8ab9-86ddefc5d829] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0924 01:05:57.804932   61070 system_pods.go:61] "kube-apiserver-no-preload-674057" [7d26a065-faa1-4ba2-96b7-6c9b1ccb5386] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0924 01:05:57.804940   61070 system_pods.go:61] "kube-controller-manager-no-preload-674057" [7c5c6602-1749-4f34-bb63-08161baac6db] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0924 01:05:57.804949   61070 system_pods.go:61] "kube-proxy-fgmwc" [a81419dc-54f5-4bdd-ac2d-f3f7c85b8f50] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0924 01:05:57.804955   61070 system_pods.go:61] "kube-scheduler-no-preload-674057" [d02c8d9a-1897-4506-8029-9608f11520de] Running
	I0924 01:05:57.804965   61070 system_pods.go:61] "metrics-server-6867b74b74-7gbnr" [6ffa0eb7-21d8-4741-9eae-ce7bb9604dec] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:05:57.804975   61070 system_pods.go:61] "storage-provisioner" [a7f99914-8945-4614-afef-d553ea932edf] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0924 01:05:57.804984   61070 system_pods.go:74] duration metric: took 26.369156ms to wait for pod list to return data ...
	I0924 01:05:57.804996   61070 node_conditions.go:102] verifying NodePressure condition ...
	I0924 01:05:57.809068   61070 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 01:05:57.809103   61070 node_conditions.go:123] node cpu capacity is 2
	I0924 01:05:57.809119   61070 node_conditions.go:105] duration metric: took 4.115654ms to run NodePressure ...
	I0924 01:05:57.809137   61070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 01:05:58.173276   61070 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0924 01:05:58.178398   61070 kubeadm.go:739] kubelet initialised
	I0924 01:05:58.178422   61070 kubeadm.go:740] duration metric: took 5.118555ms waiting for restarted kubelet to initialise ...
	I0924 01:05:58.178429   61070 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:05:58.183646   61070 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-kshwz" in "kube-system" namespace to be "Ready" ...
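	The pod_ready waits here (and the interleaved metrics-server lines from the other clusters) poll each system-critical pod's Ready condition until it turns True or the per-pod budget runs out. A minimal client-go sketch of that check, assuming the kubeconfig path and pod name taken from the log above (helper names are illustrative, not minikube's):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(config)

		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-kshwz", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("pod never became Ready")
	}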
	I0924 01:05:56.029030   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:58.029256   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:56.407889   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:58.907744   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:05:56.014681   61989 cri.go:89] found id: ""
	I0924 01:05:56.014716   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.014728   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:05:56.014736   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:05:56.014799   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:05:56.062547   61989 cri.go:89] found id: ""
	I0924 01:05:56.062576   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.062587   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:05:56.062606   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:05:56.062665   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:05:56.100938   61989 cri.go:89] found id: ""
	I0924 01:05:56.100960   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.100969   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:05:56.100974   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:05:56.101039   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:05:56.137694   61989 cri.go:89] found id: ""
	I0924 01:05:56.137722   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.137737   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:05:56.137744   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:05:56.137803   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:05:56.174876   61989 cri.go:89] found id: ""
	I0924 01:05:56.174911   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.174923   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:05:56.174931   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:05:56.174990   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:05:56.208870   61989 cri.go:89] found id: ""
	I0924 01:05:56.208895   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.208905   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:05:56.208913   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:05:56.208971   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:05:56.242476   61989 cri.go:89] found id: ""
	I0924 01:05:56.242508   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.242520   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:05:56.242528   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:05:56.242590   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:05:56.276185   61989 cri.go:89] found id: ""
	I0924 01:05:56.276214   61989 logs.go:276] 0 containers: []
	W0924 01:05:56.276255   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:05:56.276267   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:05:56.276284   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:05:56.332755   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:05:56.332792   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:05:56.346279   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:05:56.346312   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:05:56.419725   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:05:56.419751   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:05:56.419766   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:05:56.500173   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:05:56.500208   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:05:59.083761   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:05:59.097184   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:05:59.097247   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:05:59.131734   61989 cri.go:89] found id: ""
	I0924 01:05:59.131764   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.131775   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:05:59.131782   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:05:59.131842   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:05:59.169402   61989 cri.go:89] found id: ""
	I0924 01:05:59.169429   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.169439   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:05:59.169446   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:05:59.169521   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:05:59.208235   61989 cri.go:89] found id: ""
	I0924 01:05:59.208260   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.208290   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:05:59.208298   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:05:59.208372   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:05:59.242314   61989 cri.go:89] found id: ""
	I0924 01:05:59.242345   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.242358   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:05:59.242367   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:05:59.242433   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:05:59.281300   61989 cri.go:89] found id: ""
	I0924 01:05:59.281327   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.281337   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:05:59.281344   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:05:59.281407   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:05:59.315336   61989 cri.go:89] found id: ""
	I0924 01:05:59.315369   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.315377   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:05:59.315386   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:05:59.315445   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:05:59.347678   61989 cri.go:89] found id: ""
	I0924 01:05:59.347708   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.347718   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:05:59.347726   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:05:59.347786   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:05:59.381296   61989 cri.go:89] found id: ""
	I0924 01:05:59.381328   61989 logs.go:276] 0 containers: []
	W0924 01:05:59.381340   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:05:59.381352   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:05:59.381369   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:05:59.462939   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:05:59.462971   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:05:59.462990   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:05:59.544967   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:05:59.545004   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:05:59.585079   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:05:59.585106   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:05:59.637897   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:05:59.637940   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:00.190924   61070 pod_ready.go:103] pod "coredns-7c65d6cfc9-kshwz" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:02.192627   61070 pod_ready.go:93] pod "coredns-7c65d6cfc9-kshwz" in "kube-system" namespace has status "Ready":"True"
	I0924 01:06:02.192648   61070 pod_ready.go:82] duration metric: took 4.008971718s for pod "coredns-7c65d6cfc9-kshwz" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:02.192658   61070 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:02.198586   61070 pod_ready.go:93] pod "etcd-no-preload-674057" in "kube-system" namespace has status "Ready":"True"
	I0924 01:06:02.198614   61070 pod_ready.go:82] duration metric: took 5.949433ms for pod "etcd-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:02.198627   61070 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:03.205306   61070 pod_ready.go:93] pod "kube-apiserver-no-preload-674057" in "kube-system" namespace has status "Ready":"True"
	I0924 01:06:03.205331   61070 pod_ready.go:82] duration metric: took 1.006696778s for pod "kube-apiserver-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:03.205342   61070 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:00.528770   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:02.529473   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:01.406620   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:03.407024   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:02.153289   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:02.170582   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:02.170679   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:02.216700   61989 cri.go:89] found id: ""
	I0924 01:06:02.216722   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.216730   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:02.216736   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:02.216793   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:02.292664   61989 cri.go:89] found id: ""
	I0924 01:06:02.292695   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.292706   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:02.292714   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:02.292780   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:02.349447   61989 cri.go:89] found id: ""
	I0924 01:06:02.349470   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.349481   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:02.349487   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:02.349557   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:02.390491   61989 cri.go:89] found id: ""
	I0924 01:06:02.390514   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.390535   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:02.390543   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:02.390597   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:02.439330   61989 cri.go:89] found id: ""
	I0924 01:06:02.439355   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.439366   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:02.439373   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:02.439432   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:02.476400   61989 cri.go:89] found id: ""
	I0924 01:06:02.476431   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.476439   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:02.476445   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:02.476501   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:02.511946   61989 cri.go:89] found id: ""
	I0924 01:06:02.511975   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.511983   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:02.511989   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:02.512036   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:02.547526   61989 cri.go:89] found id: ""
	I0924 01:06:02.547554   61989 logs.go:276] 0 containers: []
	W0924 01:06:02.547561   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:02.547570   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:02.547580   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:02.619784   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:02.619805   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:02.619816   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:02.698597   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:02.698636   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:02.741381   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:02.741419   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:02.797965   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:02.798023   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:05.312059   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:05.326556   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:05.326614   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:05.360973   61989 cri.go:89] found id: ""
	I0924 01:06:05.360999   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.361011   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:05.361018   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:05.361101   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:05.394720   61989 cri.go:89] found id: ""
	I0924 01:06:05.394750   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.394760   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:05.394767   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:05.394831   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:05.432564   61989 cri.go:89] found id: ""
	I0924 01:06:05.432592   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.432603   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:05.432611   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:05.432673   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:05.465424   61989 cri.go:89] found id: ""
	I0924 01:06:05.465467   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.465478   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:05.465484   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:05.465555   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:05.503656   61989 cri.go:89] found id: ""
	I0924 01:06:05.503684   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.503693   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:05.503699   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:05.503752   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:05.538128   61989 cri.go:89] found id: ""
	I0924 01:06:05.538160   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.538171   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:05.538179   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:05.538248   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:05.571310   61989 cri.go:89] found id: ""
	I0924 01:06:05.571336   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.571346   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:05.571353   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:05.571416   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:05.604038   61989 cri.go:89] found id: ""
	I0924 01:06:05.604062   61989 logs.go:276] 0 containers: []
	W0924 01:06:05.604070   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:05.604079   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:05.604094   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:05.657025   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:05.657068   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:05.671457   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:05.671483   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:05.747671   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:05.747701   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:05.747718   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:05.833248   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:05.833285   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:05.212622   61070 pod_ready.go:103] pod "kube-controller-manager-no-preload-674057" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:07.711612   61070 pod_ready.go:103] pod "kube-controller-manager-no-preload-674057" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:05.028130   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:07.527525   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:05.407057   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:07.407341   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:09.906549   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:08.372029   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:08.386497   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:08.386564   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:08.422998   61989 cri.go:89] found id: ""
	I0924 01:06:08.423029   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.423039   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:08.423047   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:08.423095   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:08.457009   61989 cri.go:89] found id: ""
	I0924 01:06:08.457037   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.457047   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:08.457052   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:08.457104   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:08.489694   61989 cri.go:89] found id: ""
	I0924 01:06:08.489728   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.489740   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:08.489750   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:08.489804   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:08.521819   61989 cri.go:89] found id: ""
	I0924 01:06:08.521845   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.521856   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:08.521864   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:08.521922   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:08.556422   61989 cri.go:89] found id: ""
	I0924 01:06:08.556453   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.556465   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:08.556472   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:08.556567   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:08.593802   61989 cri.go:89] found id: ""
	I0924 01:06:08.593828   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.593836   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:08.593842   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:08.593932   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:08.627569   61989 cri.go:89] found id: ""
	I0924 01:06:08.627592   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.627600   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:08.627605   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:08.627653   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:08.664728   61989 cri.go:89] found id: ""
	I0924 01:06:08.664758   61989 logs.go:276] 0 containers: []
	W0924 01:06:08.664769   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:08.664780   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:08.664794   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:08.703546   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:08.703577   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:08.755612   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:08.755649   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:08.769957   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:08.769989   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:08.842732   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:08.842762   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:08.842789   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:10.211942   61070 pod_ready.go:93] pod "kube-controller-manager-no-preload-674057" in "kube-system" namespace has status "Ready":"True"
	I0924 01:06:10.211973   61070 pod_ready.go:82] duration metric: took 7.006623705s for pod "kube-controller-manager-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:10.211986   61070 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-fgmwc" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:10.217219   61070 pod_ready.go:93] pod "kube-proxy-fgmwc" in "kube-system" namespace has status "Ready":"True"
	I0924 01:06:10.217247   61070 pod_ready.go:82] duration metric: took 5.254551ms for pod "kube-proxy-fgmwc" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:10.217260   61070 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:10.221959   61070 pod_ready.go:93] pod "kube-scheduler-no-preload-674057" in "kube-system" namespace has status "Ready":"True"
	I0924 01:06:10.221983   61070 pod_ready.go:82] duration metric: took 4.71607ms for pod "kube-scheduler-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:10.221996   61070 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace to be "Ready" ...
	I0924 01:06:12.227911   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:09.527831   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:11.527917   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:14.028599   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:11.907394   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:14.407242   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:11.427424   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:11.440709   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:11.440773   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:11.475537   61989 cri.go:89] found id: ""
	I0924 01:06:11.475564   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.475572   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:11.475577   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:11.475633   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:11.512231   61989 cri.go:89] found id: ""
	I0924 01:06:11.512276   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.512285   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:11.512292   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:11.512365   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:11.549809   61989 cri.go:89] found id: ""
	I0924 01:06:11.549840   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.549852   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:11.549858   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:11.549924   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:11.587451   61989 cri.go:89] found id: ""
	I0924 01:06:11.587481   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.587493   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:11.587500   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:11.587558   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:11.625109   61989 cri.go:89] found id: ""
	I0924 01:06:11.625135   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.625146   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:11.625154   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:11.625213   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:11.660577   61989 cri.go:89] found id: ""
	I0924 01:06:11.660604   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.660616   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:11.660624   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:11.660683   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:11.703527   61989 cri.go:89] found id: ""
	I0924 01:06:11.703557   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.703569   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:11.703577   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:11.703646   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:11.740766   61989 cri.go:89] found id: ""
	I0924 01:06:11.740798   61989 logs.go:276] 0 containers: []
	W0924 01:06:11.740810   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:11.740820   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:11.740836   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:11.803402   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:11.803448   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:11.819144   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:11.819178   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:11.896152   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:11.896173   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:11.896187   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:11.986284   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:11.986340   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:14.523669   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:14.537923   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:14.537990   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:14.576092   61989 cri.go:89] found id: ""
	I0924 01:06:14.576128   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.576140   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:14.576148   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:14.576213   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:14.611985   61989 cri.go:89] found id: ""
	I0924 01:06:14.612020   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.612032   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:14.612039   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:14.612098   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:14.647640   61989 cri.go:89] found id: ""
	I0924 01:06:14.647667   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.647675   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:14.647682   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:14.647746   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:14.685089   61989 cri.go:89] found id: ""
	I0924 01:06:14.685128   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.685141   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:14.685150   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:14.685217   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:14.718694   61989 cri.go:89] found id: ""
	I0924 01:06:14.718729   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.718738   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:14.718745   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:14.718810   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:14.754874   61989 cri.go:89] found id: ""
	I0924 01:06:14.754916   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.754928   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:14.754936   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:14.754993   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:14.789580   61989 cri.go:89] found id: ""
	I0924 01:06:14.789608   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.789617   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:14.789625   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:14.789677   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:14.823173   61989 cri.go:89] found id: ""
	I0924 01:06:14.823201   61989 logs.go:276] 0 containers: []
	W0924 01:06:14.823213   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:14.823224   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:14.823238   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:14.878398   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:14.878431   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:14.892466   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:14.892502   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:14.965978   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:14.966010   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:14.966065   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:15.050557   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:15.050600   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:14.231644   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:16.728219   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:16.029325   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:18.527156   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:16.907014   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:19.406893   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:17.596915   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:17.609585   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:17.609643   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:17.648275   61989 cri.go:89] found id: ""
	I0924 01:06:17.648305   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.648313   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:17.648319   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:17.648447   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:17.681447   61989 cri.go:89] found id: ""
	I0924 01:06:17.681473   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.681484   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:17.681491   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:17.681552   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:17.719202   61989 cri.go:89] found id: ""
	I0924 01:06:17.719226   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.719234   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:17.719240   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:17.719296   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:17.752601   61989 cri.go:89] found id: ""
	I0924 01:06:17.752629   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.752641   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:17.752649   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:17.752700   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:17.789905   61989 cri.go:89] found id: ""
	I0924 01:06:17.789934   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.789945   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:17.789952   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:17.790015   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:17.824174   61989 cri.go:89] found id: ""
	I0924 01:06:17.824205   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.824217   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:17.824237   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:17.824296   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:17.860647   61989 cri.go:89] found id: ""
	I0924 01:06:17.860674   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.860684   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:17.860691   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:17.860750   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:17.896392   61989 cri.go:89] found id: ""
	I0924 01:06:17.896414   61989 logs.go:276] 0 containers: []
	W0924 01:06:17.896423   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:17.896437   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:17.896450   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:17.949230   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:17.949272   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:17.963125   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:17.963183   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:18.035092   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:18.035117   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:18.035134   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:18.117973   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:18.118011   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:20.657044   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:20.669862   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:20.669936   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:20.704672   61989 cri.go:89] found id: ""
	I0924 01:06:20.704703   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.704714   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:20.704722   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:20.704785   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:20.745777   61989 cri.go:89] found id: ""
	I0924 01:06:20.745801   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.745811   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:20.745818   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:20.745879   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:20.779673   61989 cri.go:89] found id: ""
	I0924 01:06:20.779704   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.779740   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:20.779749   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:20.779809   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:20.815959   61989 cri.go:89] found id: ""
	I0924 01:06:20.815983   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.815992   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:20.815998   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:20.816055   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:20.849203   61989 cri.go:89] found id: ""
	I0924 01:06:20.849232   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.849243   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:20.849251   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:20.849319   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:20.884303   61989 cri.go:89] found id: ""
	I0924 01:06:20.884353   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.884365   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:20.884373   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:20.884436   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:20.921217   61989 cri.go:89] found id: ""
	I0924 01:06:20.921242   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.921249   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:20.921255   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:20.921302   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:20.957555   61989 cri.go:89] found id: ""
	I0924 01:06:20.957590   61989 logs.go:276] 0 containers: []
	W0924 01:06:20.957601   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:20.957613   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:20.957628   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:20.972591   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:20.972630   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 01:06:18.728553   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:20.730046   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:23.228040   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:20.527573   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:22.527695   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:21.406963   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:23.907730   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	W0924 01:06:21.046506   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:21.046532   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:21.046547   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:21.129415   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:21.129453   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:21.168899   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:21.168924   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:23.720925   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:23.736893   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:23.736965   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:23.771874   61989 cri.go:89] found id: ""
	I0924 01:06:23.771901   61989 logs.go:276] 0 containers: []
	W0924 01:06:23.771909   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:23.771915   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:23.771976   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:23.806892   61989 cri.go:89] found id: ""
	I0924 01:06:23.806924   61989 logs.go:276] 0 containers: []
	W0924 01:06:23.806936   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:23.806943   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:23.806999   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:23.843661   61989 cri.go:89] found id: ""
	I0924 01:06:23.843686   61989 logs.go:276] 0 containers: []
	W0924 01:06:23.843694   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:23.843700   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:23.843753   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:23.878979   61989 cri.go:89] found id: ""
	I0924 01:06:23.879007   61989 logs.go:276] 0 containers: []
	W0924 01:06:23.879019   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:23.879027   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:23.879086   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:23.913893   61989 cri.go:89] found id: ""
	I0924 01:06:23.913916   61989 logs.go:276] 0 containers: []
	W0924 01:06:23.913925   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:23.913937   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:23.913982   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:23.947932   61989 cri.go:89] found id: ""
	I0924 01:06:23.947961   61989 logs.go:276] 0 containers: []
	W0924 01:06:23.947972   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:23.947980   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:23.948045   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:23.981366   61989 cri.go:89] found id: ""
	I0924 01:06:23.981391   61989 logs.go:276] 0 containers: []
	W0924 01:06:23.981402   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:23.981409   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:23.981467   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:24.014428   61989 cri.go:89] found id: ""
	I0924 01:06:24.014455   61989 logs.go:276] 0 containers: []
	W0924 01:06:24.014463   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:24.014471   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:24.014485   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:24.029585   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:24.029621   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:24.095926   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:24.095955   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:24.095975   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:24.174594   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:24.174635   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:24.213286   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:24.213311   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:25.229785   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:27.729021   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:25.027783   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:27.030450   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:26.406776   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:28.907135   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:26.764740   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:26.777184   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:26.777279   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:26.812704   61989 cri.go:89] found id: ""
	I0924 01:06:26.812735   61989 logs.go:276] 0 containers: []
	W0924 01:06:26.812746   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:26.812753   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:26.812811   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:26.849867   61989 cri.go:89] found id: ""
	I0924 01:06:26.849895   61989 logs.go:276] 0 containers: []
	W0924 01:06:26.849904   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:26.849909   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:26.849958   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:26.882856   61989 cri.go:89] found id: ""
	I0924 01:06:26.882878   61989 logs.go:276] 0 containers: []
	W0924 01:06:26.882885   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:26.882891   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:26.882936   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:26.921063   61989 cri.go:89] found id: ""
	I0924 01:06:26.921085   61989 logs.go:276] 0 containers: []
	W0924 01:06:26.921094   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:26.921100   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:26.921156   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:26.961154   61989 cri.go:89] found id: ""
	I0924 01:06:26.961182   61989 logs.go:276] 0 containers: []
	W0924 01:06:26.961194   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:26.961200   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:26.961257   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:26.994560   61989 cri.go:89] found id: ""
	I0924 01:06:26.994593   61989 logs.go:276] 0 containers: []
	W0924 01:06:26.994603   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:26.994612   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:26.994673   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:27.027967   61989 cri.go:89] found id: ""
	I0924 01:06:27.028013   61989 logs.go:276] 0 containers: []
	W0924 01:06:27.028026   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:27.028033   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:27.028096   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:27.063099   61989 cri.go:89] found id: ""
	I0924 01:06:27.063130   61989 logs.go:276] 0 containers: []
	W0924 01:06:27.063142   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:27.063153   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:27.063166   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:27.116237   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:27.116279   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:27.130785   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:27.130815   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:27.201931   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:27.201954   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:27.201970   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:27.282182   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:27.282217   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
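The block above is one pass of minikube's control-plane probe: for each expected component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) it lists matching CRI containers, finds none, and then falls back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status output. A minimal shell sketch of the same per-component check, assuming shell access to the node and crictl installed there (not part of the test harness itself):

    # Sketch: reproduce the per-component container check seen in the log above.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "no container found matching \"$name\""
      else
        echo "$name: $ids"
      fi
    done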
	I0924 01:06:29.825403   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:29.838890   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:29.838989   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:29.873651   61989 cri.go:89] found id: ""
	I0924 01:06:29.873678   61989 logs.go:276] 0 containers: []
	W0924 01:06:29.873690   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:29.873698   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:29.873758   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:29.909894   61989 cri.go:89] found id: ""
	I0924 01:06:29.909916   61989 logs.go:276] 0 containers: []
	W0924 01:06:29.909923   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:29.909929   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:29.909978   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:29.944850   61989 cri.go:89] found id: ""
	I0924 01:06:29.944878   61989 logs.go:276] 0 containers: []
	W0924 01:06:29.944886   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:29.944892   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:29.944945   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:29.981486   61989 cri.go:89] found id: ""
	I0924 01:06:29.981515   61989 logs.go:276] 0 containers: []
	W0924 01:06:29.981524   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:29.981532   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:29.981592   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:30.015138   61989 cri.go:89] found id: ""
	I0924 01:06:30.015165   61989 logs.go:276] 0 containers: []
	W0924 01:06:30.015176   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:30.015184   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:30.015256   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:30.051777   61989 cri.go:89] found id: ""
	I0924 01:06:30.051814   61989 logs.go:276] 0 containers: []
	W0924 01:06:30.051825   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:30.051834   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:30.051898   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:30.085573   61989 cri.go:89] found id: ""
	I0924 01:06:30.085598   61989 logs.go:276] 0 containers: []
	W0924 01:06:30.085607   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:30.085612   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:30.085661   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:30.122518   61989 cri.go:89] found id: ""
	I0924 01:06:30.122551   61989 logs.go:276] 0 containers: []
	W0924 01:06:30.122561   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:30.122570   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:30.122585   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:30.199075   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:30.199118   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:30.238259   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:30.238293   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:30.292145   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:30.292185   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:30.306404   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:30.306431   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:30.373959   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
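Every "describe nodes" attempt fails the same way: nothing is listening on localhost:8443, so the connection is refused before kubectl can reach the API. A quick manual check along the same lines (a sketch only; the pgrep pattern and kubectl path are the ones already used in the log):

    # Sketch: confirm the apiserver is down before trusting the describe-nodes failure.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "kube-apiserver process not running"
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl get nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      || echo "API server on localhost:8443 not reachable"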
	I0924 01:06:29.729379   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:32.228691   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:29.527089   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:31.527523   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:34.027357   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:30.907575   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:33.407615   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:32.875041   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:32.888358   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:32.888435   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:32.924466   61989 cri.go:89] found id: ""
	I0924 01:06:32.924499   61989 logs.go:276] 0 containers: []
	W0924 01:06:32.924519   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:32.924528   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:32.924584   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:32.960188   61989 cri.go:89] found id: ""
	I0924 01:06:32.960216   61989 logs.go:276] 0 containers: []
	W0924 01:06:32.960224   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:32.960231   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:32.960282   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:32.997612   61989 cri.go:89] found id: ""
	I0924 01:06:32.997641   61989 logs.go:276] 0 containers: []
	W0924 01:06:32.997649   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:32.997655   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:32.997704   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:33.034282   61989 cri.go:89] found id: ""
	I0924 01:06:33.034310   61989 logs.go:276] 0 containers: []
	W0924 01:06:33.034317   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:33.034325   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:33.034381   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:33.073832   61989 cri.go:89] found id: ""
	I0924 01:06:33.073861   61989 logs.go:276] 0 containers: []
	W0924 01:06:33.073870   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:33.073875   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:33.073959   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:33.107276   61989 cri.go:89] found id: ""
	I0924 01:06:33.107303   61989 logs.go:276] 0 containers: []
	W0924 01:06:33.107314   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:33.107323   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:33.107373   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:33.141062   61989 cri.go:89] found id: ""
	I0924 01:06:33.141091   61989 logs.go:276] 0 containers: []
	W0924 01:06:33.141104   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:33.141112   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:33.141174   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:33.177874   61989 cri.go:89] found id: ""
	I0924 01:06:33.177899   61989 logs.go:276] 0 containers: []
	W0924 01:06:33.177908   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:33.177916   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:33.177927   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:33.228324   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:33.228373   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:33.241324   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:33.241350   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:33.313115   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:33.313139   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:33.313151   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:33.392458   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:33.392512   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:35.932822   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:35.945918   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:35.945987   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:34.727948   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:36.728560   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:36.028536   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:38.527308   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:35.906501   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:37.907165   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:35.984400   61989 cri.go:89] found id: ""
	I0924 01:06:35.984438   61989 logs.go:276] 0 containers: []
	W0924 01:06:35.984448   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:35.984456   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:35.984528   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:36.022208   61989 cri.go:89] found id: ""
	I0924 01:06:36.022235   61989 logs.go:276] 0 containers: []
	W0924 01:06:36.022244   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:36.022252   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:36.022336   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:36.059153   61989 cri.go:89] found id: ""
	I0924 01:06:36.059176   61989 logs.go:276] 0 containers: []
	W0924 01:06:36.059184   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:36.059190   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:36.059247   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:36.094375   61989 cri.go:89] found id: ""
	I0924 01:06:36.094413   61989 logs.go:276] 0 containers: []
	W0924 01:06:36.094425   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:36.094434   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:36.094490   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:36.128662   61989 cri.go:89] found id: ""
	I0924 01:06:36.128691   61989 logs.go:276] 0 containers: []
	W0924 01:06:36.128702   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:36.128710   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:36.128762   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:36.160898   61989 cri.go:89] found id: ""
	I0924 01:06:36.160925   61989 logs.go:276] 0 containers: []
	W0924 01:06:36.160937   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:36.160945   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:36.161010   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:36.194421   61989 cri.go:89] found id: ""
	I0924 01:06:36.194448   61989 logs.go:276] 0 containers: []
	W0924 01:06:36.194460   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:36.194468   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:36.194537   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:36.230448   61989 cri.go:89] found id: ""
	I0924 01:06:36.230477   61989 logs.go:276] 0 containers: []
	W0924 01:06:36.230487   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:36.230498   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:36.230511   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:36.303029   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:36.303053   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:36.303067   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:36.406305   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:36.406338   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:36.444044   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:36.444084   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:36.494829   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:36.494873   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:39.009579   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:39.023867   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:39.023943   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:39.057426   61989 cri.go:89] found id: ""
	I0924 01:06:39.057458   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.057469   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:39.057477   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:39.057539   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:39.091421   61989 cri.go:89] found id: ""
	I0924 01:06:39.091444   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.091453   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:39.091459   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:39.091518   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:39.125407   61989 cri.go:89] found id: ""
	I0924 01:06:39.125437   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.125448   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:39.125455   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:39.125525   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:39.157146   61989 cri.go:89] found id: ""
	I0924 01:06:39.157170   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.157181   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:39.157189   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:39.157248   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:39.189474   61989 cri.go:89] found id: ""
	I0924 01:06:39.189501   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.189511   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:39.189518   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:39.189577   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:39.228034   61989 cri.go:89] found id: ""
	I0924 01:06:39.228063   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.228084   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:39.228099   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:39.228158   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:39.268289   61989 cri.go:89] found id: ""
	I0924 01:06:39.268317   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.268345   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:39.268354   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:39.268431   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:39.304964   61989 cri.go:89] found id: ""
	I0924 01:06:39.304988   61989 logs.go:276] 0 containers: []
	W0924 01:06:39.304996   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:39.305005   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:39.305017   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:39.356193   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:39.356234   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:39.370782   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:39.370807   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:39.442395   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:39.442418   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:39.442429   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:39.518426   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:39.518466   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
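The log-gathering half of each cycle is a fixed set of journalctl/dmesg/crictl invocations; a sketch collecting the same diagnostics by hand, with the commands taken from the runs above:

    # Sketch: gather the same diagnostics minikube collects in each cycle.
    sudo journalctl -u kubelet -n 400        # kubelet logs
    sudo journalctl -u crio -n 400           # CRI-O logs
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400   # kernel warnings and errors
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a            # container status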
	I0924 01:06:38.729606   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:41.228528   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:40.528236   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:43.028285   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:40.407021   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:42.906884   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:44.907822   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
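Interleaved with the 61989 probe are three other test profiles (PIDs 61070, 61699, 61323) polling metrics-server pods that never report Ready. A sketch for inspecting one of those pods by name (pod names are taken from the log; the kube context for each profile is not shown here, so substitute the right one):

    # Sketch: inspect a metrics-server pod that stays NotReady (context placeholder is hypothetical).
    kubectl --context <profile> -n kube-system get pod metrics-server-6867b74b74-7gbnr
    kubectl --context <profile> -n kube-system describe pod metrics-server-6867b74b74-7gbnr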
	I0924 01:06:42.059895   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:42.092776   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:42.092837   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:42.128508   61989 cri.go:89] found id: ""
	I0924 01:06:42.128534   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.128555   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:42.128565   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:42.128623   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:42.160961   61989 cri.go:89] found id: ""
	I0924 01:06:42.160989   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.161000   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:42.161008   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:42.161072   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:42.194212   61989 cri.go:89] found id: ""
	I0924 01:06:42.194260   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.194272   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:42.194280   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:42.194342   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:42.229284   61989 cri.go:89] found id: ""
	I0924 01:06:42.229312   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.229323   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:42.229331   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:42.229378   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:42.261952   61989 cri.go:89] found id: ""
	I0924 01:06:42.261986   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.261997   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:42.262010   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:42.262059   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:42.297096   61989 cri.go:89] found id: ""
	I0924 01:06:42.297125   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.297133   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:42.297139   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:42.297185   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:42.333066   61989 cri.go:89] found id: ""
	I0924 01:06:42.333095   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.333106   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:42.333114   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:42.333176   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:42.366798   61989 cri.go:89] found id: ""
	I0924 01:06:42.366829   61989 logs.go:276] 0 containers: []
	W0924 01:06:42.366840   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:42.366852   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:42.366865   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:42.419424   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:42.419466   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:42.433814   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:42.433846   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:42.503817   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:42.503845   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:42.503860   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:42.583249   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:42.583289   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:45.123746   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:45.136292   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:45.136377   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:45.174390   61989 cri.go:89] found id: ""
	I0924 01:06:45.174420   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.174441   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:45.174449   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:45.174539   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:45.212394   61989 cri.go:89] found id: ""
	I0924 01:06:45.212422   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.212433   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:45.212441   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:45.212503   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:45.245831   61989 cri.go:89] found id: ""
	I0924 01:06:45.245853   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.245861   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:45.245867   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:45.245922   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:45.277587   61989 cri.go:89] found id: ""
	I0924 01:06:45.277615   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.277626   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:45.277634   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:45.277692   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:45.309715   61989 cri.go:89] found id: ""
	I0924 01:06:45.309749   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.309760   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:45.309768   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:45.309827   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:45.342799   61989 cri.go:89] found id: ""
	I0924 01:06:45.342831   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.342844   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:45.342853   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:45.342921   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:45.375377   61989 cri.go:89] found id: ""
	I0924 01:06:45.375404   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.375415   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:45.375423   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:45.375484   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:45.415395   61989 cri.go:89] found id: ""
	I0924 01:06:45.415422   61989 logs.go:276] 0 containers: []
	W0924 01:06:45.415432   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:45.415444   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:45.415459   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:45.464381   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:45.464416   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:45.478142   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:45.478168   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:45.551211   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:45.551234   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:45.551244   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:45.635255   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:45.635297   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:43.728645   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:46.227611   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:48.228320   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:45.028650   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:47.528968   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:47.406822   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:49.407790   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:48.173687   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:48.186635   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:48.186710   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:48.219544   61989 cri.go:89] found id: ""
	I0924 01:06:48.219566   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.219574   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:48.219583   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:48.219654   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:48.253594   61989 cri.go:89] found id: ""
	I0924 01:06:48.253618   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.253627   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:48.253634   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:48.253693   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:48.287991   61989 cri.go:89] found id: ""
	I0924 01:06:48.288019   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.288031   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:48.288041   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:48.288100   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:48.320738   61989 cri.go:89] found id: ""
	I0924 01:06:48.320767   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.320779   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:48.320787   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:48.320847   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:48.352197   61989 cri.go:89] found id: ""
	I0924 01:06:48.352225   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.352233   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:48.352243   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:48.352317   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:48.386157   61989 cri.go:89] found id: ""
	I0924 01:06:48.386187   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.386195   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:48.386202   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:48.386250   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:48.422372   61989 cri.go:89] found id: ""
	I0924 01:06:48.422398   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.422407   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:48.422413   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:48.422463   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:48.464007   61989 cri.go:89] found id: ""
	I0924 01:06:48.464032   61989 logs.go:276] 0 containers: []
	W0924 01:06:48.464043   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:48.464054   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:48.464072   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:48.520533   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:48.520570   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:48.594453   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:48.594489   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:48.607309   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:48.607336   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:48.674078   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:48.674102   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:48.674117   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:50.740093   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:53.228567   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:50.028640   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:52.527656   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:51.906378   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:53.906887   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:51.256855   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:51.270305   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:51.270378   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:51.303450   61989 cri.go:89] found id: ""
	I0924 01:06:51.303487   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.303499   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:51.303508   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:51.303564   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:51.336959   61989 cri.go:89] found id: ""
	I0924 01:06:51.336987   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.337003   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:51.337010   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:51.337072   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:51.369210   61989 cri.go:89] found id: ""
	I0924 01:06:51.369239   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.369249   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:51.369260   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:51.369339   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:51.403595   61989 cri.go:89] found id: ""
	I0924 01:06:51.403645   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.403658   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:51.403666   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:51.403723   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:51.445459   61989 cri.go:89] found id: ""
	I0924 01:06:51.445493   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.445503   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:51.445510   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:51.445574   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:51.477615   61989 cri.go:89] found id: ""
	I0924 01:06:51.477642   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.477653   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:51.477660   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:51.477722   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:51.509737   61989 cri.go:89] found id: ""
	I0924 01:06:51.509766   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.509784   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:51.509792   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:51.509856   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:51.546451   61989 cri.go:89] found id: ""
	I0924 01:06:51.546479   61989 logs.go:276] 0 containers: []
	W0924 01:06:51.546489   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:51.546501   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:51.546515   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:51.600277   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:51.600315   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:51.613403   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:51.613434   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:51.691645   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:51.691669   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:51.691688   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:51.772276   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:51.772312   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:54.313491   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:54.328265   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:54.328374   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:54.368091   61989 cri.go:89] found id: ""
	I0924 01:06:54.368117   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.368126   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:54.368131   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:54.368183   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:54.408272   61989 cri.go:89] found id: ""
	I0924 01:06:54.408300   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.408310   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:54.408318   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:54.408409   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:54.460467   61989 cri.go:89] found id: ""
	I0924 01:06:54.460489   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.460499   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:54.460506   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:54.460564   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:54.493310   61989 cri.go:89] found id: ""
	I0924 01:06:54.493334   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.493343   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:54.493349   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:54.493401   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:54.526772   61989 cri.go:89] found id: ""
	I0924 01:06:54.526799   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.526809   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:54.526817   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:54.526880   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:54.562235   61989 cri.go:89] found id: ""
	I0924 01:06:54.562264   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.562274   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:54.562283   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:54.562345   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:54.597755   61989 cri.go:89] found id: ""
	I0924 01:06:54.597784   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.597794   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:54.597803   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:54.597851   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:54.632225   61989 cri.go:89] found id: ""
	I0924 01:06:54.632282   61989 logs.go:276] 0 containers: []
	W0924 01:06:54.632295   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:54.632305   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:54.632321   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:54.683849   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:54.683887   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:54.697395   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:54.697425   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:54.767577   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:54.767598   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:54.767609   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:54.842619   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:54.842655   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:06:55.728756   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:58.228520   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:54.528783   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:57.028039   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:59.028234   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:55.907673   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:57.907858   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:06:57.381394   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:06:57.394078   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:06:57.394147   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:06:57.431241   61989 cri.go:89] found id: ""
	I0924 01:06:57.431266   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.431278   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:06:57.431284   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:06:57.431352   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:06:57.468954   61989 cri.go:89] found id: ""
	I0924 01:06:57.468983   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.468994   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:06:57.469001   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:06:57.469060   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:06:57.503518   61989 cri.go:89] found id: ""
	I0924 01:06:57.503550   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.503562   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:06:57.503570   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:06:57.503618   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:06:57.540432   61989 cri.go:89] found id: ""
	I0924 01:06:57.540464   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.540475   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:06:57.540483   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:06:57.540548   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:06:57.574142   61989 cri.go:89] found id: ""
	I0924 01:06:57.574175   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.574187   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:06:57.574195   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:06:57.574264   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:06:57.608505   61989 cri.go:89] found id: ""
	I0924 01:06:57.608528   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.608537   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:06:57.608543   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:06:57.608589   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:06:57.644273   61989 cri.go:89] found id: ""
	I0924 01:06:57.644305   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.644317   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:06:57.644344   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:06:57.644409   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:06:57.682023   61989 cri.go:89] found id: ""
	I0924 01:06:57.682050   61989 logs.go:276] 0 containers: []
	W0924 01:06:57.682060   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:06:57.682072   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:06:57.682086   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:06:57.732537   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:06:57.732570   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:06:57.746632   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:06:57.746663   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:06:57.813904   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:06:57.813927   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:06:57.813947   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:06:57.891947   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:06:57.891992   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:00.432035   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:00.444886   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:00.444966   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:00.482653   61989 cri.go:89] found id: ""
	I0924 01:07:00.482683   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.482694   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:00.482702   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:00.482754   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:00.516404   61989 cri.go:89] found id: ""
	I0924 01:07:00.516441   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.516452   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:00.516463   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:00.516527   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:00.552938   61989 cri.go:89] found id: ""
	I0924 01:07:00.552963   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.552971   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:00.552977   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:00.553043   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:00.589143   61989 cri.go:89] found id: ""
	I0924 01:07:00.589170   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.589178   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:00.589184   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:00.589235   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:00.625023   61989 cri.go:89] found id: ""
	I0924 01:07:00.625047   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.625059   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:00.625066   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:00.625127   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:00.662904   61989 cri.go:89] found id: ""
	I0924 01:07:00.662936   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.662948   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:00.662959   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:00.663022   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:00.702892   61989 cri.go:89] found id: ""
	I0924 01:07:00.702921   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.702932   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:00.702938   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:00.702988   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:00.737010   61989 cri.go:89] found id: ""
	I0924 01:07:00.737039   61989 logs.go:276] 0 containers: []
	W0924 01:07:00.737050   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:00.737061   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:00.737075   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:00.788093   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:00.788132   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:00.801354   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:00.801382   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:00.866830   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:00.866862   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:00.866878   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:00.950034   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:00.950076   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:00.728279   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:03.227980   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:01.527849   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:04.027729   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:00.406445   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:02.407048   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:04.907569   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:03.492773   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:03.506158   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:03.506224   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:03.542369   61989 cri.go:89] found id: ""
	I0924 01:07:03.542397   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.542408   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:03.542416   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:03.542473   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:03.575019   61989 cri.go:89] found id: ""
	I0924 01:07:03.575046   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.575055   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:03.575060   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:03.575103   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:03.608576   61989 cri.go:89] found id: ""
	I0924 01:07:03.608603   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.608612   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:03.608619   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:03.608684   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:03.642359   61989 cri.go:89] found id: ""
	I0924 01:07:03.642389   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.642400   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:03.642407   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:03.642463   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:03.678192   61989 cri.go:89] found id: ""
	I0924 01:07:03.678216   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.678223   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:03.678229   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:03.678285   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:03.711773   61989 cri.go:89] found id: ""
	I0924 01:07:03.711795   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.711803   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:03.711809   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:03.711856   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:03.747792   61989 cri.go:89] found id: ""
	I0924 01:07:03.747819   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.747830   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:03.747838   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:03.747901   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:03.783284   61989 cri.go:89] found id: ""
	I0924 01:07:03.783312   61989 logs.go:276] 0 containers: []
	W0924 01:07:03.783320   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:03.783331   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:03.783349   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:03.838704   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:03.838745   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:03.852650   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:03.852675   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:03.922474   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:03.922499   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:03.922511   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:03.997349   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:03.997388   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:05.228357   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:07.228789   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:06.028604   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:08.527156   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:06.908041   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:09.406803   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:06.537182   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:06.549745   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:06.549833   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:06.587879   61989 cri.go:89] found id: ""
	I0924 01:07:06.587910   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.587922   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:06.587930   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:06.587984   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:06.623419   61989 cri.go:89] found id: ""
	I0924 01:07:06.623447   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.623456   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:06.623462   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:06.623542   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:06.659228   61989 cri.go:89] found id: ""
	I0924 01:07:06.659260   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.659272   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:06.659280   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:06.659341   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:06.693300   61989 cri.go:89] found id: ""
	I0924 01:07:06.693330   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.693341   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:06.693349   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:06.693399   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:06.726237   61989 cri.go:89] found id: ""
	I0924 01:07:06.726267   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.726278   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:06.726286   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:06.726342   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:06.760627   61989 cri.go:89] found id: ""
	I0924 01:07:06.760659   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.760670   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:06.760677   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:06.760745   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:06.796029   61989 cri.go:89] found id: ""
	I0924 01:07:06.796062   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.796073   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:06.796081   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:06.796136   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:06.830197   61989 cri.go:89] found id: ""
	I0924 01:07:06.830230   61989 logs.go:276] 0 containers: []
	W0924 01:07:06.830241   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:06.830251   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:06.830265   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:06.869055   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:06.869087   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:06.923840   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:06.923888   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:06.937510   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:06.937549   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:07.011461   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:07.011482   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:07.011496   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:09.591186   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:09.603900   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:09.603970   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:09.639003   61989 cri.go:89] found id: ""
	I0924 01:07:09.639035   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.639046   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:09.639055   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:09.639111   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:09.676494   61989 cri.go:89] found id: ""
	I0924 01:07:09.676528   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.676539   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:09.676547   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:09.676616   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:09.713080   61989 cri.go:89] found id: ""
	I0924 01:07:09.713103   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.713111   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:09.713117   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:09.713174   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:09.748425   61989 cri.go:89] found id: ""
	I0924 01:07:09.748449   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.748458   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:09.748465   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:09.748521   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:09.782526   61989 cri.go:89] found id: ""
	I0924 01:07:09.782559   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.782576   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:09.782584   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:09.782647   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:09.819137   61989 cri.go:89] found id: ""
	I0924 01:07:09.819159   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.819167   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:09.819173   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:09.819256   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:09.852953   61989 cri.go:89] found id: ""
	I0924 01:07:09.852976   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.852984   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:09.852989   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:09.853083   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:09.887254   61989 cri.go:89] found id: ""
	I0924 01:07:09.887282   61989 logs.go:276] 0 containers: []
	W0924 01:07:09.887293   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:09.887304   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:09.887318   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:09.940029   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:09.940069   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:09.954298   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:09.954331   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:10.028926   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:10.028947   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:10.028957   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:10.116722   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:10.116761   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:09.728996   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:12.228342   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:10.527637   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:12.528324   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:11.410452   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:13.906451   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:12.654245   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:12.668635   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:12.668695   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:12.711575   61989 cri.go:89] found id: ""
	I0924 01:07:12.711601   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.711626   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:12.711632   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:12.711682   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:12.746104   61989 cri.go:89] found id: ""
	I0924 01:07:12.746131   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.746141   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:12.746149   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:12.746210   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:12.780229   61989 cri.go:89] found id: ""
	I0924 01:07:12.780260   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.780295   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:12.780303   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:12.780384   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:12.812968   61989 cri.go:89] found id: ""
	I0924 01:07:12.812998   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.813010   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:12.813024   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:12.813090   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:12.844212   61989 cri.go:89] found id: ""
	I0924 01:07:12.844241   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.844253   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:12.844260   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:12.844343   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:12.878662   61989 cri.go:89] found id: ""
	I0924 01:07:12.878690   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.878700   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:12.878707   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:12.878765   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:12.912782   61989 cri.go:89] found id: ""
	I0924 01:07:12.912805   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.912815   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:12.912822   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:12.912883   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:12.945694   61989 cri.go:89] found id: ""
	I0924 01:07:12.945726   61989 logs.go:276] 0 containers: []
	W0924 01:07:12.945736   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:12.945747   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:12.945761   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:12.994841   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:12.994877   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:13.009582   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:13.009624   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:13.081972   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:13.081999   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:13.082017   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:13.162383   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:13.162420   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:15.704586   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:15.717608   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:15.717677   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:15.751794   61989 cri.go:89] found id: ""
	I0924 01:07:15.751829   61989 logs.go:276] 0 containers: []
	W0924 01:07:15.751840   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:15.751848   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:15.751916   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:15.791691   61989 cri.go:89] found id: ""
	I0924 01:07:15.791723   61989 logs.go:276] 0 containers: []
	W0924 01:07:15.791734   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:15.791742   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:15.791805   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:15.827934   61989 cri.go:89] found id: ""
	I0924 01:07:15.827957   61989 logs.go:276] 0 containers: []
	W0924 01:07:15.827965   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:15.827971   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:15.828017   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:15.862489   61989 cri.go:89] found id: ""
	I0924 01:07:15.862518   61989 logs.go:276] 0 containers: []
	W0924 01:07:15.862527   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:15.862532   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:15.862577   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:15.896754   61989 cri.go:89] found id: ""
	I0924 01:07:15.896786   61989 logs.go:276] 0 containers: []
	W0924 01:07:15.896798   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:15.896804   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:15.896857   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:15.934353   61989 cri.go:89] found id: ""
	I0924 01:07:15.934378   61989 logs.go:276] 0 containers: []
	W0924 01:07:15.934386   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:15.934392   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:15.934436   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:15.969204   61989 cri.go:89] found id: ""
	I0924 01:07:15.969237   61989 logs.go:276] 0 containers: []
	W0924 01:07:15.969246   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:15.969251   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:15.969309   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:14.228949   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:16.728382   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:15.027681   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:17.027847   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:15.907872   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:18.407563   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:16.008733   61989 cri.go:89] found id: ""
	I0924 01:07:16.008767   61989 logs.go:276] 0 containers: []
	W0924 01:07:16.008780   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:16.008792   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:16.008807   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:16.046993   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:16.047024   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:16.098768   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:16.098801   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:16.114429   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:16.114472   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:16.187450   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:16.187469   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:16.187489   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:18.767042   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:18.779825   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:18.779899   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:18.815410   61989 cri.go:89] found id: ""
	I0924 01:07:18.815436   61989 logs.go:276] 0 containers: []
	W0924 01:07:18.815447   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:18.815454   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:18.815523   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:18.849837   61989 cri.go:89] found id: ""
	I0924 01:07:18.849862   61989 logs.go:276] 0 containers: []
	W0924 01:07:18.849872   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:18.849880   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:18.849952   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:18.885183   61989 cri.go:89] found id: ""
	I0924 01:07:18.885215   61989 logs.go:276] 0 containers: []
	W0924 01:07:18.885227   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:18.885235   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:18.885314   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:18.922263   61989 cri.go:89] found id: ""
	I0924 01:07:18.922293   61989 logs.go:276] 0 containers: []
	W0924 01:07:18.922305   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:18.922312   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:18.922378   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:18.957235   61989 cri.go:89] found id: ""
	I0924 01:07:18.957263   61989 logs.go:276] 0 containers: []
	W0924 01:07:18.957272   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:18.957278   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:18.957331   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:18.989846   61989 cri.go:89] found id: ""
	I0924 01:07:18.989870   61989 logs.go:276] 0 containers: []
	W0924 01:07:18.989878   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:18.989884   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:18.989931   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:19.027264   61989 cri.go:89] found id: ""
	I0924 01:07:19.027298   61989 logs.go:276] 0 containers: []
	W0924 01:07:19.027308   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:19.027315   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:19.027373   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:19.065902   61989 cri.go:89] found id: ""
	I0924 01:07:19.065925   61989 logs.go:276] 0 containers: []
	W0924 01:07:19.065934   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:19.065944   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:19.065959   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:19.115515   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:19.115550   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:19.129761   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:19.129787   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:19.200299   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:19.200319   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:19.200351   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:19.282308   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:19.282360   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:18.732314   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:21.227773   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:23.228957   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:19.528117   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:22.028965   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:20.906860   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:23.407404   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:21.819442   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:21.834106   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:21.834165   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:21.866953   61989 cri.go:89] found id: ""
	I0924 01:07:21.866988   61989 logs.go:276] 0 containers: []
	W0924 01:07:21.866999   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:21.867008   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:21.867085   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:21.902561   61989 cri.go:89] found id: ""
	I0924 01:07:21.902637   61989 logs.go:276] 0 containers: []
	W0924 01:07:21.902654   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:21.902663   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:21.902729   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:21.936883   61989 cri.go:89] found id: ""
	I0924 01:07:21.936926   61989 logs.go:276] 0 containers: []
	W0924 01:07:21.936937   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:21.936943   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:21.936995   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:21.975375   61989 cri.go:89] found id: ""
	I0924 01:07:21.975402   61989 logs.go:276] 0 containers: []
	W0924 01:07:21.975411   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:21.975417   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:21.975465   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:22.012782   61989 cri.go:89] found id: ""
	I0924 01:07:22.012811   61989 logs.go:276] 0 containers: []
	W0924 01:07:22.012822   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:22.012830   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:22.012890   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:22.049344   61989 cri.go:89] found id: ""
	I0924 01:07:22.049370   61989 logs.go:276] 0 containers: []
	W0924 01:07:22.049379   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:22.049385   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:22.049442   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:22.088187   61989 cri.go:89] found id: ""
	I0924 01:07:22.088219   61989 logs.go:276] 0 containers: []
	W0924 01:07:22.088230   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:22.088239   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:22.088324   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:22.123357   61989 cri.go:89] found id: ""
	I0924 01:07:22.123386   61989 logs.go:276] 0 containers: []
	W0924 01:07:22.123397   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:22.123408   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:22.123423   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:22.176794   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:22.176828   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:22.192550   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:22.192591   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:22.263854   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:22.263881   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:22.263898   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:22.341735   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:22.341778   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:24.879834   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:24.892429   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:24.892504   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:24.926600   61989 cri.go:89] found id: ""
	I0924 01:07:24.926629   61989 logs.go:276] 0 containers: []
	W0924 01:07:24.926636   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:24.926642   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:24.926689   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:24.960370   61989 cri.go:89] found id: ""
	I0924 01:07:24.960399   61989 logs.go:276] 0 containers: []
	W0924 01:07:24.960408   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:24.960415   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:24.960471   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:24.993503   61989 cri.go:89] found id: ""
	I0924 01:07:24.993532   61989 logs.go:276] 0 containers: []
	W0924 01:07:24.993542   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:24.993549   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:24.993611   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:25.028027   61989 cri.go:89] found id: ""
	I0924 01:07:25.028055   61989 logs.go:276] 0 containers: []
	W0924 01:07:25.028065   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:25.028073   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:25.028129   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:25.062947   61989 cri.go:89] found id: ""
	I0924 01:07:25.062981   61989 logs.go:276] 0 containers: []
	W0924 01:07:25.062999   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:25.063009   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:25.063077   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:25.098895   61989 cri.go:89] found id: ""
	I0924 01:07:25.098927   61989 logs.go:276] 0 containers: []
	W0924 01:07:25.098939   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:25.098946   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:25.098996   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:25.132786   61989 cri.go:89] found id: ""
	I0924 01:07:25.132814   61989 logs.go:276] 0 containers: []
	W0924 01:07:25.132824   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:25.132830   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:25.132882   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:25.167603   61989 cri.go:89] found id: ""
	I0924 01:07:25.167634   61989 logs.go:276] 0 containers: []
	W0924 01:07:25.167644   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:25.167656   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:25.167671   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:25.220265   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:25.220303   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:25.234840   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:25.234884   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:25.307459   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:25.307485   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:25.307499   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:25.386496   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:25.386537   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:25.229188   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:27.728978   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:24.531829   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:27.027182   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:29.029000   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:25.907018   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:28.406555   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:27.926064   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:27.939398   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:27.939480   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:27.976184   61989 cri.go:89] found id: ""
	I0924 01:07:27.976215   61989 logs.go:276] 0 containers: []
	W0924 01:07:27.976256   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:27.976265   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:27.976348   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:28.009389   61989 cri.go:89] found id: ""
	I0924 01:07:28.009419   61989 logs.go:276] 0 containers: []
	W0924 01:07:28.009431   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:28.009438   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:28.009501   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:28.045562   61989 cri.go:89] found id: ""
	I0924 01:07:28.045594   61989 logs.go:276] 0 containers: []
	W0924 01:07:28.045605   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:28.045613   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:28.045677   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:28.085318   61989 cri.go:89] found id: ""
	I0924 01:07:28.085345   61989 logs.go:276] 0 containers: []
	W0924 01:07:28.085357   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:28.085364   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:28.085419   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:28.119582   61989 cri.go:89] found id: ""
	I0924 01:07:28.119607   61989 logs.go:276] 0 containers: []
	W0924 01:07:28.119617   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:28.119626   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:28.119690   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:28.151445   61989 cri.go:89] found id: ""
	I0924 01:07:28.151493   61989 logs.go:276] 0 containers: []
	W0924 01:07:28.151505   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:28.151513   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:28.151578   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:28.185966   61989 cri.go:89] found id: ""
	I0924 01:07:28.185997   61989 logs.go:276] 0 containers: []
	W0924 01:07:28.186009   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:28.186016   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:28.186078   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:28.219012   61989 cri.go:89] found id: ""
	I0924 01:07:28.219037   61989 logs.go:276] 0 containers: []
	W0924 01:07:28.219044   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:28.219052   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:28.219089   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:28.272186   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:28.272222   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:28.286346   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:28.286383   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:28.370949   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:28.370975   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:28.370985   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:28.453740   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:28.453775   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:30.229141   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:32.728919   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:31.527080   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:34.028315   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:30.407040   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:32.407075   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:34.407711   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:30.993536   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:31.006297   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:31.006369   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:31.042081   61989 cri.go:89] found id: ""
	I0924 01:07:31.042114   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.042123   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:31.042129   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:31.042185   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:31.077119   61989 cri.go:89] found id: ""
	I0924 01:07:31.077144   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.077153   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:31.077159   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:31.077208   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:31.110148   61989 cri.go:89] found id: ""
	I0924 01:07:31.110179   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.110187   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:31.110193   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:31.110246   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:31.143551   61989 cri.go:89] found id: ""
	I0924 01:07:31.143578   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.143585   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:31.143591   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:31.143638   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:31.177212   61989 cri.go:89] found id: ""
	I0924 01:07:31.177262   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.177272   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:31.177279   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:31.177329   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:31.209290   61989 cri.go:89] found id: ""
	I0924 01:07:31.209321   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.209332   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:31.209340   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:31.209398   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:31.247299   61989 cri.go:89] found id: ""
	I0924 01:07:31.247334   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.247346   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:31.247355   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:31.247419   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:31.285010   61989 cri.go:89] found id: ""
	I0924 01:07:31.285047   61989 logs.go:276] 0 containers: []
	W0924 01:07:31.285060   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:31.285072   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:31.285087   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:31.323819   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:31.323855   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:31.378348   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:31.378388   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:31.393944   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:31.393983   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:31.464940   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:31.464966   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:31.464978   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:34.042144   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:34.055183   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:34.055268   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:34.103044   61989 cri.go:89] found id: ""
	I0924 01:07:34.103075   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.103086   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:34.103094   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:34.103162   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:34.141379   61989 cri.go:89] found id: ""
	I0924 01:07:34.141412   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.141424   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:34.141432   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:34.141493   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:34.179545   61989 cri.go:89] found id: ""
	I0924 01:07:34.179574   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.179582   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:34.179588   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:34.179655   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:34.217683   61989 cri.go:89] found id: ""
	I0924 01:07:34.217719   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.217739   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:34.217748   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:34.217806   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:34.257597   61989 cri.go:89] found id: ""
	I0924 01:07:34.257630   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.257642   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:34.257651   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:34.257723   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:34.295410   61989 cri.go:89] found id: ""
	I0924 01:07:34.295440   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.295452   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:34.295460   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:34.295523   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:34.331309   61989 cri.go:89] found id: ""
	I0924 01:07:34.331340   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.331350   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:34.331358   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:34.331460   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:34.367549   61989 cri.go:89] found id: ""
	I0924 01:07:34.367580   61989 logs.go:276] 0 containers: []
	W0924 01:07:34.367590   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:34.367601   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:34.367615   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:34.421785   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:34.421823   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:34.435162   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:34.435198   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:34.504051   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:34.504073   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:34.504090   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:34.582343   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:34.582384   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:35.229391   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:37.229522   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:36.527047   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:38.527472   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:36.906974   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:38.907529   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:37.124727   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:37.139374   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:37.139431   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:37.176474   61989 cri.go:89] found id: ""
	I0924 01:07:37.176500   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.176510   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:37.176515   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:37.176560   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:37.209944   61989 cri.go:89] found id: ""
	I0924 01:07:37.209971   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.209983   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:37.209990   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:37.210055   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:37.242894   61989 cri.go:89] found id: ""
	I0924 01:07:37.242923   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.242933   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:37.242941   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:37.242996   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:37.276517   61989 cri.go:89] found id: ""
	I0924 01:07:37.276547   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.276558   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:37.276566   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:37.276626   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:37.310169   61989 cri.go:89] found id: ""
	I0924 01:07:37.310196   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.310207   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:37.310214   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:37.310282   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:37.342992   61989 cri.go:89] found id: ""
	I0924 01:07:37.343019   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.343027   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:37.343035   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:37.343088   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:37.375024   61989 cri.go:89] found id: ""
	I0924 01:07:37.375051   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.375062   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:37.375069   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:37.375137   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:37.409736   61989 cri.go:89] found id: ""
	I0924 01:07:37.409761   61989 logs.go:276] 0 containers: []
	W0924 01:07:37.409768   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:37.409776   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:37.409787   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:37.474744   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:37.474767   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:37.474783   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:37.551479   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:37.551515   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:37.590597   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:37.590632   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:37.642781   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:37.642820   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:40.156480   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:40.171002   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:40.171079   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:40.207383   61989 cri.go:89] found id: ""
	I0924 01:07:40.207410   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.207418   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:40.207424   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:40.207474   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:40.245535   61989 cri.go:89] found id: ""
	I0924 01:07:40.245560   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.245568   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:40.245574   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:40.245620   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:40.283858   61989 cri.go:89] found id: ""
	I0924 01:07:40.283888   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.283900   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:40.283909   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:40.283982   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:40.320527   61989 cri.go:89] found id: ""
	I0924 01:07:40.320555   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.320566   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:40.320575   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:40.320633   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:40.354364   61989 cri.go:89] found id: ""
	I0924 01:07:40.354390   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.354397   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:40.354403   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:40.354473   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:40.388407   61989 cri.go:89] found id: ""
	I0924 01:07:40.388431   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.388439   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:40.388444   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:40.388512   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:40.423809   61989 cri.go:89] found id: ""
	I0924 01:07:40.423838   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.423847   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:40.423853   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:40.423908   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:40.459160   61989 cri.go:89] found id: ""
	I0924 01:07:40.459188   61989 logs.go:276] 0 containers: []
	W0924 01:07:40.459199   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:40.459210   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:40.459223   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:40.530418   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:40.530456   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:40.551644   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:40.551683   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:40.634564   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:40.634587   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:40.634599   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:40.717897   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:40.717934   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:39.728642   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:41.728725   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:40.528294   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:43.028364   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:41.406835   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:43.907015   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:43.257992   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:43.272134   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:43.272204   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:43.306747   61989 cri.go:89] found id: ""
	I0924 01:07:43.306775   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.306797   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:43.306806   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:43.306923   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:43.342922   61989 cri.go:89] found id: ""
	I0924 01:07:43.342954   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.342963   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:43.342974   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:43.343028   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:43.378666   61989 cri.go:89] found id: ""
	I0924 01:07:43.378694   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.378703   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:43.378709   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:43.378760   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:43.414348   61989 cri.go:89] found id: ""
	I0924 01:07:43.414376   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.414387   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:43.414395   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:43.414457   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:43.447687   61989 cri.go:89] found id: ""
	I0924 01:07:43.447718   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.447728   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:43.447735   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:43.447804   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:43.482166   61989 cri.go:89] found id: ""
	I0924 01:07:43.482195   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.482205   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:43.482211   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:43.482275   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:43.518112   61989 cri.go:89] found id: ""
	I0924 01:07:43.518146   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.518159   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:43.518167   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:43.518231   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:43.553853   61989 cri.go:89] found id: ""
	I0924 01:07:43.553875   61989 logs.go:276] 0 containers: []
	W0924 01:07:43.553883   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:43.553891   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:43.553902   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:43.603410   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:43.603445   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:43.616413   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:43.616438   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:43.685077   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:43.685101   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:43.685113   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:43.760758   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:43.760803   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:43.729237   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:46.228084   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:48.228503   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:45.527095   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:47.529540   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:46.407150   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:48.407253   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:46.300532   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:46.315982   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:46.316050   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:46.356523   61989 cri.go:89] found id: ""
	I0924 01:07:46.356554   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.356565   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:46.356573   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:46.356633   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:46.405399   61989 cri.go:89] found id: ""
	I0924 01:07:46.405429   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.405439   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:46.405447   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:46.405512   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:46.454819   61989 cri.go:89] found id: ""
	I0924 01:07:46.454844   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.454853   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:46.454858   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:46.454918   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:46.499094   61989 cri.go:89] found id: ""
	I0924 01:07:46.499123   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.499134   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:46.499142   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:46.499196   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:46.532976   61989 cri.go:89] found id: ""
	I0924 01:07:46.533006   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.533017   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:46.533025   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:46.533083   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:46.565488   61989 cri.go:89] found id: ""
	I0924 01:07:46.565523   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.565534   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:46.565546   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:46.565610   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:46.598457   61989 cri.go:89] found id: ""
	I0924 01:07:46.598486   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.598496   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:46.598503   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:46.598551   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:46.631892   61989 cri.go:89] found id: ""
	I0924 01:07:46.631920   61989 logs.go:276] 0 containers: []
	W0924 01:07:46.631931   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:46.631941   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:46.631956   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:46.709966   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:46.710013   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:46.749154   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:46.749184   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:46.798192   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:46.798228   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:46.811902   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:46.811951   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:46.885878   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:49.386775   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:49.399324   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:49.399383   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:49.437061   61989 cri.go:89] found id: ""
	I0924 01:07:49.437092   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.437104   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:49.437111   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:49.437160   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:49.470882   61989 cri.go:89] found id: ""
	I0924 01:07:49.470908   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.470919   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:49.470927   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:49.470989   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:49.506894   61989 cri.go:89] found id: ""
	I0924 01:07:49.506926   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.506938   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:49.506947   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:49.507018   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:49.540768   61989 cri.go:89] found id: ""
	I0924 01:07:49.540800   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.540813   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:49.540822   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:49.540888   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:49.576486   61989 cri.go:89] found id: ""
	I0924 01:07:49.576515   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.576523   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:49.576530   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:49.576579   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:49.612456   61989 cri.go:89] found id: ""
	I0924 01:07:49.612479   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.612487   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:49.612495   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:49.612542   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:49.646085   61989 cri.go:89] found id: ""
	I0924 01:07:49.646118   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.646127   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:49.646132   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:49.646178   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:49.682538   61989 cri.go:89] found id: ""
	I0924 01:07:49.682565   61989 logs.go:276] 0 containers: []
	W0924 01:07:49.682574   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:49.682583   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:49.682594   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:49.721791   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:49.721817   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:49.774842   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:49.774889   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:49.789082   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:49.789129   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:49.866437   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:49.866464   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:49.866478   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:50.727581   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:52.729391   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:50.027396   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:52.028176   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:50.407654   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:52.908118   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:52.445166   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:52.459060   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:52.459126   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:52.496521   61989 cri.go:89] found id: ""
	I0924 01:07:52.496550   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.496562   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:52.496571   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:52.496652   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:52.533575   61989 cri.go:89] found id: ""
	I0924 01:07:52.533600   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.533608   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:52.533615   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:52.533693   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:52.571666   61989 cri.go:89] found id: ""
	I0924 01:07:52.571693   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.571703   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:52.571710   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:52.571758   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:52.603929   61989 cri.go:89] found id: ""
	I0924 01:07:52.603957   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.603968   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:52.603976   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:52.604034   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:52.635581   61989 cri.go:89] found id: ""
	I0924 01:07:52.635607   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.635614   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:52.635620   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:52.635669   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:52.673865   61989 cri.go:89] found id: ""
	I0924 01:07:52.673889   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.673897   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:52.673903   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:52.673953   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:52.709885   61989 cri.go:89] found id: ""
	I0924 01:07:52.709910   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.709918   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:52.709925   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:52.709986   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:52.746409   61989 cri.go:89] found id: ""
	I0924 01:07:52.746439   61989 logs.go:276] 0 containers: []
	W0924 01:07:52.746450   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:52.746461   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:52.746475   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:52.798020   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:52.798054   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:52.811940   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:52.811967   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:52.888091   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:52.888114   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:52.888129   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:52.968955   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:52.969000   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:55.507204   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:55.520581   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:55.520657   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:55.555772   61989 cri.go:89] found id: ""
	I0924 01:07:55.555809   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.555821   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:55.555828   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:55.555880   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:55.593765   61989 cri.go:89] found id: ""
	I0924 01:07:55.593791   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.593802   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:55.593808   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:55.593866   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:55.630292   61989 cri.go:89] found id: ""
	I0924 01:07:55.630325   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.630337   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:55.630344   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:55.630408   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:55.665703   61989 cri.go:89] found id: ""
	I0924 01:07:55.665730   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.665741   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:55.665748   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:55.665807   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:55.701911   61989 cri.go:89] found id: ""
	I0924 01:07:55.701938   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.701949   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:55.701957   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:55.702020   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:55.734343   61989 cri.go:89] found id: ""
	I0924 01:07:55.734373   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.734385   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:55.734394   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:55.734460   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:55.768606   61989 cri.go:89] found id: ""
	I0924 01:07:55.768633   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.768645   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:55.768653   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:55.768716   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:55.800720   61989 cri.go:89] found id: ""
	I0924 01:07:55.800747   61989 logs.go:276] 0 containers: []
	W0924 01:07:55.800757   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:55.800768   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:55.800782   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:55.851702   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:55.851737   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:55.865657   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:55.865687   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:55.940175   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:55.940197   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:55.940207   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:55.227954   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:57.228969   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:54.528417   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:56.529326   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:59.027653   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:55.407038   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:57.906886   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:07:56.015832   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:56.015870   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:58.557571   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:07:58.572208   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:07:58.572274   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:07:58.605081   61989 cri.go:89] found id: ""
	I0924 01:07:58.605109   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.605121   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:07:58.605128   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:07:58.605185   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:07:58.641518   61989 cri.go:89] found id: ""
	I0924 01:07:58.641548   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.641559   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:07:58.641566   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:07:58.641617   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:07:58.680623   61989 cri.go:89] found id: ""
	I0924 01:07:58.680653   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.680664   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:07:58.680675   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:07:58.680735   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:07:58.713658   61989 cri.go:89] found id: ""
	I0924 01:07:58.713684   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.713693   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:07:58.713700   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:07:58.713754   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:07:58.746264   61989 cri.go:89] found id: ""
	I0924 01:07:58.746298   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.746307   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:07:58.746313   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:07:58.746358   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:07:58.779812   61989 cri.go:89] found id: ""
	I0924 01:07:58.779846   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.779912   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:07:58.779924   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:07:58.779984   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:07:58.813203   61989 cri.go:89] found id: ""
	I0924 01:07:58.813236   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.813245   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:07:58.813252   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:07:58.813303   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:07:58.845872   61989 cri.go:89] found id: ""
	I0924 01:07:58.845898   61989 logs.go:276] 0 containers: []
	W0924 01:07:58.845906   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:07:58.845915   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:07:58.845925   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:07:58.897480   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:07:58.897515   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:07:58.912904   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:07:58.912936   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:07:58.982882   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:07:58.982908   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:07:58.982921   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:07:59.058495   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:07:59.058535   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:07:59.729215   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:02.228358   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:01.028678   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:03.527682   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:00.407897   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:02.907608   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:04.907717   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:01.596672   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:01.609550   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:01.609625   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:01.648819   61989 cri.go:89] found id: ""
	I0924 01:08:01.648847   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.648857   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:01.648864   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:01.649000   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:01.685419   61989 cri.go:89] found id: ""
	I0924 01:08:01.685450   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.685458   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:01.685464   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:01.685533   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:01.720426   61989 cri.go:89] found id: ""
	I0924 01:08:01.720455   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.720464   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:01.720473   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:01.720537   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:01.755292   61989 cri.go:89] found id: ""
	I0924 01:08:01.755316   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.755324   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:01.755331   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:01.755398   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:01.788673   61989 cri.go:89] found id: ""
	I0924 01:08:01.788703   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.788713   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:01.788721   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:01.788789   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:01.824724   61989 cri.go:89] found id: ""
	I0924 01:08:01.824761   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.824773   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:01.824781   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:01.824838   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:01.858492   61989 cri.go:89] found id: ""
	I0924 01:08:01.858531   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.858542   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:01.858556   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:01.858623   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:01.892135   61989 cri.go:89] found id: ""
	I0924 01:08:01.892167   61989 logs.go:276] 0 containers: []
	W0924 01:08:01.892177   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:01.892192   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:01.892205   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:01.905820   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:01.905849   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:01.977998   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:01.978026   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:01.978039   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:02.060441   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:02.060480   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:02.100029   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:02.100057   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:04.653124   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:04.665726   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:04.665784   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:04.700755   61989 cri.go:89] found id: ""
	I0924 01:08:04.700785   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.700796   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:04.700804   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:04.700858   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:04.736955   61989 cri.go:89] found id: ""
	I0924 01:08:04.736983   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.736992   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:04.736998   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:04.737051   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:04.770940   61989 cri.go:89] found id: ""
	I0924 01:08:04.770969   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.770977   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:04.770983   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:04.771051   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:04.805376   61989 cri.go:89] found id: ""
	I0924 01:08:04.805403   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.805411   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:04.805417   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:04.805471   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:04.840995   61989 cri.go:89] found id: ""
	I0924 01:08:04.841016   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.841024   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:04.841030   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:04.841077   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:04.875418   61989 cri.go:89] found id: ""
	I0924 01:08:04.875449   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.875460   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:04.875468   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:04.875546   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:04.910675   61989 cri.go:89] found id: ""
	I0924 01:08:04.910696   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.910704   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:04.910710   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:04.910764   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:04.945531   61989 cri.go:89] found id: ""
	I0924 01:08:04.945562   61989 logs.go:276] 0 containers: []
	W0924 01:08:04.945570   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:04.945578   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:04.945589   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:04.997696   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:04.997734   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:05.011296   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:05.011329   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:05.087878   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:05.087905   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:05.087919   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:05.164073   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:05.164111   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:04.228985   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:06.734525   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:06.031377   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:08.528160   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:06.908017   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:09.407255   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:07.713496   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:07.726590   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:07.726649   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:07.760050   61989 cri.go:89] found id: ""
	I0924 01:08:07.760081   61989 logs.go:276] 0 containers: []
	W0924 01:08:07.760092   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:07.760100   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:07.760152   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:07.797709   61989 cri.go:89] found id: ""
	I0924 01:08:07.797736   61989 logs.go:276] 0 containers: []
	W0924 01:08:07.797744   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:07.797749   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:07.797803   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:07.836351   61989 cri.go:89] found id: ""
	I0924 01:08:07.836380   61989 logs.go:276] 0 containers: []
	W0924 01:08:07.836391   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:07.836399   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:07.836471   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:07.871133   61989 cri.go:89] found id: ""
	I0924 01:08:07.871159   61989 logs.go:276] 0 containers: []
	W0924 01:08:07.871170   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:07.871178   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:07.871229   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:07.906640   61989 cri.go:89] found id: ""
	I0924 01:08:07.906663   61989 logs.go:276] 0 containers: []
	W0924 01:08:07.906673   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:07.906682   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:07.906741   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:07.940919   61989 cri.go:89] found id: ""
	I0924 01:08:07.940945   61989 logs.go:276] 0 containers: []
	W0924 01:08:07.940953   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:07.940959   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:07.941018   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:07.975533   61989 cri.go:89] found id: ""
	I0924 01:08:07.975562   61989 logs.go:276] 0 containers: []
	W0924 01:08:07.975570   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:07.975576   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:07.975627   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:08.009137   61989 cri.go:89] found id: ""
	I0924 01:08:08.009163   61989 logs.go:276] 0 containers: []
	W0924 01:08:08.009173   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:08.009183   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:08.009196   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:08.065199   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:08.065252   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:08.080159   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:08.080188   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:08.154003   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:08.154025   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:08.154039   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:08.235522   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:08.235561   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:10.774666   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:10.787704   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:10.787775   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:10.822721   61989 cri.go:89] found id: ""
	I0924 01:08:10.822759   61989 logs.go:276] 0 containers: []
	W0924 01:08:10.822770   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:10.822781   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:10.822852   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:10.857113   61989 cri.go:89] found id: ""
	I0924 01:08:10.857138   61989 logs.go:276] 0 containers: []
	W0924 01:08:10.857146   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:10.857152   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:10.857201   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:10.890974   61989 cri.go:89] found id: ""
	I0924 01:08:10.891001   61989 logs.go:276] 0 containers: []
	W0924 01:08:10.891012   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:10.891020   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:10.891086   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:10.929771   61989 cri.go:89] found id: ""
	I0924 01:08:10.929793   61989 logs.go:276] 0 containers: []
	W0924 01:08:10.929800   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:10.929806   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:10.929851   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:10.961988   61989 cri.go:89] found id: ""
	I0924 01:08:10.962015   61989 logs.go:276] 0 containers: []
	W0924 01:08:10.962027   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:10.962035   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:10.962100   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:09.228600   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:11.729142   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:10.528626   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:13.027656   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:11.906981   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:13.907232   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:10.993591   61989 cri.go:89] found id: ""
	I0924 01:08:10.993622   61989 logs.go:276] 0 containers: []
	W0924 01:08:10.993633   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:10.993639   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:10.993691   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:11.032468   61989 cri.go:89] found id: ""
	I0924 01:08:11.032496   61989 logs.go:276] 0 containers: []
	W0924 01:08:11.032506   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:11.032514   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:11.032576   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:11.066900   61989 cri.go:89] found id: ""
	I0924 01:08:11.066924   61989 logs.go:276] 0 containers: []
	W0924 01:08:11.066931   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:11.066939   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:11.066950   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:11.136412   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:11.136443   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:11.136459   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:11.218326   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:11.218361   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:11.260695   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:11.260728   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:11.310102   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:11.310133   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:13.825540   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:13.838208   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:13.838283   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:13.874539   61989 cri.go:89] found id: ""
	I0924 01:08:13.874567   61989 logs.go:276] 0 containers: []
	W0924 01:08:13.874576   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:13.874581   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:13.874628   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:13.911818   61989 cri.go:89] found id: ""
	I0924 01:08:13.911839   61989 logs.go:276] 0 containers: []
	W0924 01:08:13.911846   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:13.911852   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:13.911897   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:13.944766   61989 cri.go:89] found id: ""
	I0924 01:08:13.944789   61989 logs.go:276] 0 containers: []
	W0924 01:08:13.944797   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:13.944802   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:13.944847   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:13.980712   61989 cri.go:89] found id: ""
	I0924 01:08:13.980742   61989 logs.go:276] 0 containers: []
	W0924 01:08:13.980752   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:13.980758   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:13.980817   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:14.016103   61989 cri.go:89] found id: ""
	I0924 01:08:14.016130   61989 logs.go:276] 0 containers: []
	W0924 01:08:14.016138   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:14.016143   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:14.016192   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:14.051884   61989 cri.go:89] found id: ""
	I0924 01:08:14.051929   61989 logs.go:276] 0 containers: []
	W0924 01:08:14.051943   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:14.051954   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:14.052046   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:14.088928   61989 cri.go:89] found id: ""
	I0924 01:08:14.088954   61989 logs.go:276] 0 containers: []
	W0924 01:08:14.088964   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:14.088970   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:14.089020   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:14.123057   61989 cri.go:89] found id: ""
	I0924 01:08:14.123083   61989 logs.go:276] 0 containers: []
	W0924 01:08:14.123091   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:14.123099   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:14.123112   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:14.174249   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:14.174287   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:14.188409   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:14.188442   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:14.258906   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:14.258932   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:14.258942   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:14.340891   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:14.340928   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:14.229459   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:16.728316   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:15.028158   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:17.527615   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:15.907490   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:17.907845   61323 pod_ready.go:103] pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:19.901512   61323 pod_ready.go:82] duration metric: took 4m0.001092501s for pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace to be "Ready" ...
	E0924 01:08:19.901552   61323 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-pc28v" in "kube-system" namespace to be "Ready" (will not retry!)
	I0924 01:08:19.901576   61323 pod_ready.go:39] duration metric: took 4m10.04955154s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:08:19.901606   61323 kubeadm.go:597] duration metric: took 4m18.184472182s to restartPrimaryControlPlane
	W0924 01:08:19.901701   61323 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0924 01:08:19.901736   61323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0924 01:08:16.877728   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:16.890548   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:16.890617   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:16.924414   61989 cri.go:89] found id: ""
	I0924 01:08:16.924439   61989 logs.go:276] 0 containers: []
	W0924 01:08:16.924451   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:16.924458   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:16.924510   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:16.960295   61989 cri.go:89] found id: ""
	I0924 01:08:16.960323   61989 logs.go:276] 0 containers: []
	W0924 01:08:16.960344   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:16.960352   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:16.960405   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:16.993171   61989 cri.go:89] found id: ""
	I0924 01:08:16.993204   61989 logs.go:276] 0 containers: []
	W0924 01:08:16.993216   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:16.993224   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:16.993287   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:17.028122   61989 cri.go:89] found id: ""
	I0924 01:08:17.028150   61989 logs.go:276] 0 containers: []
	W0924 01:08:17.028160   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:17.028169   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:17.028261   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:17.068401   61989 cri.go:89] found id: ""
	I0924 01:08:17.068440   61989 logs.go:276] 0 containers: []
	W0924 01:08:17.068451   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:17.068458   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:17.068530   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:17.104250   61989 cri.go:89] found id: ""
	I0924 01:08:17.104275   61989 logs.go:276] 0 containers: []
	W0924 01:08:17.104283   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:17.104299   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:17.104370   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:17.139178   61989 cri.go:89] found id: ""
	I0924 01:08:17.139201   61989 logs.go:276] 0 containers: []
	W0924 01:08:17.139209   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:17.139215   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:17.139288   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:17.172677   61989 cri.go:89] found id: ""
	I0924 01:08:17.172703   61989 logs.go:276] 0 containers: []
	W0924 01:08:17.172712   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:17.172727   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:17.172742   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:17.222039   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:17.222082   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:17.235342   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:17.235371   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:17.300313   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:17.300350   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:17.300366   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:17.382465   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:17.382517   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:19.924928   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:19.941406   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:19.941496   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:19.976196   61989 cri.go:89] found id: ""
	I0924 01:08:19.976224   61989 logs.go:276] 0 containers: []
	W0924 01:08:19.976238   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:19.976247   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:19.976314   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:20.019652   61989 cri.go:89] found id: ""
	I0924 01:08:20.019680   61989 logs.go:276] 0 containers: []
	W0924 01:08:20.019692   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:20.019699   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:20.019757   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:20.055098   61989 cri.go:89] found id: ""
	I0924 01:08:20.055123   61989 logs.go:276] 0 containers: []
	W0924 01:08:20.055130   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:20.055135   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:20.055183   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:20.091428   61989 cri.go:89] found id: ""
	I0924 01:08:20.091458   61989 logs.go:276] 0 containers: []
	W0924 01:08:20.091469   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:20.091476   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:20.091532   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:20.123608   61989 cri.go:89] found id: ""
	I0924 01:08:20.123642   61989 logs.go:276] 0 containers: []
	W0924 01:08:20.123653   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:20.123678   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:20.123745   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:20.165885   61989 cri.go:89] found id: ""
	I0924 01:08:20.165913   61989 logs.go:276] 0 containers: []
	W0924 01:08:20.165926   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:20.165934   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:20.165985   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:20.199300   61989 cri.go:89] found id: ""
	I0924 01:08:20.199329   61989 logs.go:276] 0 containers: []
	W0924 01:08:20.199341   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:20.199348   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:20.199415   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:20.237201   61989 cri.go:89] found id: ""
	I0924 01:08:20.237253   61989 logs.go:276] 0 containers: []
	W0924 01:08:20.237262   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:20.237271   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:20.237284   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:20.285008   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:20.285049   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:20.298974   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:20.299014   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:20.385765   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:20.385793   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:20.385807   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:20.460715   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:20.460752   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:19.227947   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:21.228448   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:23.229022   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:19.527785   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:21.528095   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:23.528420   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:23.000163   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:23.014755   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:23.014828   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:23.048877   61989 cri.go:89] found id: ""
	I0924 01:08:23.048909   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.048921   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:23.048979   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:23.049049   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:23.085614   61989 cri.go:89] found id: ""
	I0924 01:08:23.085643   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.085650   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:23.085658   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:23.085718   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:23.122027   61989 cri.go:89] found id: ""
	I0924 01:08:23.122060   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.122071   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:23.122078   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:23.122136   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:23.156838   61989 cri.go:89] found id: ""
	I0924 01:08:23.156868   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.156879   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:23.156887   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:23.156947   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:23.191528   61989 cri.go:89] found id: ""
	I0924 01:08:23.191569   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.191579   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:23.191586   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:23.191651   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:23.227627   61989 cri.go:89] found id: ""
	I0924 01:08:23.227651   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.227659   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:23.227665   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:23.227709   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:23.261937   61989 cri.go:89] found id: ""
	I0924 01:08:23.261968   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.261980   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:23.261988   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:23.262039   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:23.297947   61989 cri.go:89] found id: ""
	I0924 01:08:23.297973   61989 logs.go:276] 0 containers: []
	W0924 01:08:23.297986   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:23.297997   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:23.298009   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:23.337783   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:23.337811   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:23.390767   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:23.390808   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:23.404787   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:23.404814   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:23.478768   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:23.478788   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:23.478801   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:25.728154   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:28.227795   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:25.529710   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:28.028153   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:26.060593   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:26.085071   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:26.085137   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:26.121785   61989 cri.go:89] found id: ""
	I0924 01:08:26.121814   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.121826   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:26.121834   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:26.121900   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:26.167942   61989 cri.go:89] found id: ""
	I0924 01:08:26.167971   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.167980   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:26.167991   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:26.168054   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:26.206461   61989 cri.go:89] found id: ""
	I0924 01:08:26.206496   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.206506   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:26.206513   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:26.206582   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:26.243094   61989 cri.go:89] found id: ""
	I0924 01:08:26.243125   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.243136   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:26.243144   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:26.243206   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:26.279303   61989 cri.go:89] found id: ""
	I0924 01:08:26.279331   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.279341   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:26.279348   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:26.279407   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:26.311840   61989 cri.go:89] found id: ""
	I0924 01:08:26.311869   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.311880   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:26.311888   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:26.311954   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:26.345994   61989 cri.go:89] found id: ""
	I0924 01:08:26.346019   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.346027   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:26.346033   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:26.346082   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:26.380570   61989 cri.go:89] found id: ""
	I0924 01:08:26.380601   61989 logs.go:276] 0 containers: []
	W0924 01:08:26.380610   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:26.380619   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:26.380630   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:26.429958   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:26.429993   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:26.443278   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:26.443312   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:26.516353   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:26.516375   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:26.516390   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:26.603310   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:26.603345   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:29.142531   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:29.156548   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:29.156634   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:29.191351   61989 cri.go:89] found id: ""
	I0924 01:08:29.191378   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.191389   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:29.191396   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:29.191451   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:29.232112   61989 cri.go:89] found id: ""
	I0924 01:08:29.232141   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.232152   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:29.232159   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:29.232214   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:29.266082   61989 cri.go:89] found id: ""
	I0924 01:08:29.266104   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.266112   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:29.266118   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:29.266178   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:29.299777   61989 cri.go:89] found id: ""
	I0924 01:08:29.299802   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.299812   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:29.299817   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:29.299883   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:29.342709   61989 cri.go:89] found id: ""
	I0924 01:08:29.342740   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.342749   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:29.342756   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:29.342816   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:29.381255   61989 cri.go:89] found id: ""
	I0924 01:08:29.381303   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.381312   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:29.381318   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:29.381375   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:29.414998   61989 cri.go:89] found id: ""
	I0924 01:08:29.415028   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.415036   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:29.415043   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:29.415101   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:29.448553   61989 cri.go:89] found id: ""
	I0924 01:08:29.448580   61989 logs.go:276] 0 containers: []
	W0924 01:08:29.448589   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:29.448598   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:29.448608   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:29.534936   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:29.535001   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:29.573554   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:29.573584   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:29.623590   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:29.623626   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:29.636141   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:29.636167   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:29.700591   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:30.228993   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:32.229458   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:30.528150   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:33.029011   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:32.201184   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:32.215034   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:32.215102   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:32.250990   61989 cri.go:89] found id: ""
	I0924 01:08:32.251016   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.251026   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:32.251033   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:32.251104   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:32.284448   61989 cri.go:89] found id: ""
	I0924 01:08:32.284483   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.284494   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:32.284504   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:32.284570   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:32.317979   61989 cri.go:89] found id: ""
	I0924 01:08:32.318004   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.318015   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:32.318022   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:32.318078   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:32.352057   61989 cri.go:89] found id: ""
	I0924 01:08:32.352082   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.352093   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:32.352101   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:32.352163   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:32.385459   61989 cri.go:89] found id: ""
	I0924 01:08:32.385482   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.385490   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:32.385496   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:32.385544   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:32.421189   61989 cri.go:89] found id: ""
	I0924 01:08:32.421217   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.421227   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:32.421235   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:32.421307   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:32.464375   61989 cri.go:89] found id: ""
	I0924 01:08:32.464399   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.464406   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:32.464412   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:32.464457   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:32.512716   61989 cri.go:89] found id: ""
	I0924 01:08:32.512742   61989 logs.go:276] 0 containers: []
	W0924 01:08:32.512753   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:32.512763   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:32.512788   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:32.598271   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:32.598293   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:32.598305   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:32.674197   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:32.674233   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:32.715065   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:32.715092   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:32.767522   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:32.767565   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:35.281678   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:35.296302   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:35.296390   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:35.336341   61989 cri.go:89] found id: ""
	I0924 01:08:35.336370   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.336381   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:35.336397   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:35.336454   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:35.373090   61989 cri.go:89] found id: ""
	I0924 01:08:35.373118   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.373127   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:35.373135   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:35.373201   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:35.413628   61989 cri.go:89] found id: ""
	I0924 01:08:35.413660   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.413668   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:35.413674   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:35.413720   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:35.446564   61989 cri.go:89] found id: ""
	I0924 01:08:35.446592   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.446603   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:35.446610   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:35.446669   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:35.478389   61989 cri.go:89] found id: ""
	I0924 01:08:35.478424   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.478435   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:35.478444   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:35.478515   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:35.513992   61989 cri.go:89] found id: ""
	I0924 01:08:35.514015   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.514023   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:35.514029   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:35.514085   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:35.556442   61989 cri.go:89] found id: ""
	I0924 01:08:35.556471   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.556481   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:35.556489   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:35.556571   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:35.594205   61989 cri.go:89] found id: ""
	I0924 01:08:35.594228   61989 logs.go:276] 0 containers: []
	W0924 01:08:35.594236   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:35.594244   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:35.594254   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:35.637601   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:35.637640   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:35.691674   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:35.691711   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:35.705223   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:35.705261   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:35.784000   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:35.784021   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:35.784036   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:34.729064   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:37.227314   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:35.528382   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:38.028508   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:38.370232   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:38.383287   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:38.383358   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:38.417528   61989 cri.go:89] found id: ""
	I0924 01:08:38.417556   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.417564   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:38.417571   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:38.417619   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:38.459788   61989 cri.go:89] found id: ""
	I0924 01:08:38.459814   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.459821   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:38.459828   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:38.459883   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:38.494017   61989 cri.go:89] found id: ""
	I0924 01:08:38.494050   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.494059   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:38.494065   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:38.494135   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:38.526894   61989 cri.go:89] found id: ""
	I0924 01:08:38.526924   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.526935   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:38.526942   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:38.527000   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:38.563831   61989 cri.go:89] found id: ""
	I0924 01:08:38.563859   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.563876   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:38.563884   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:38.563950   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:38.596066   61989 cri.go:89] found id: ""
	I0924 01:08:38.596095   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.596106   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:38.596114   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:38.596172   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:38.630123   61989 cri.go:89] found id: ""
	I0924 01:08:38.630147   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.630157   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:38.630165   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:38.630223   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:38.664714   61989 cri.go:89] found id: ""
	I0924 01:08:38.664743   61989 logs.go:276] 0 containers: []
	W0924 01:08:38.664754   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:38.664765   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:38.664782   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:38.718770   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:38.718802   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:38.732878   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:38.732906   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:38.806441   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:38.806469   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:38.806485   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:38.884416   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:38.884456   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:39.228048   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:41.228574   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:40.527354   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:42.528592   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:41.423782   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:41.436827   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:41.436899   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:41.468283   61989 cri.go:89] found id: ""
	I0924 01:08:41.468316   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.468342   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:41.468353   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:41.468412   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:41.504348   61989 cri.go:89] found id: ""
	I0924 01:08:41.504380   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.504402   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:41.504410   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:41.504470   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:41.544785   61989 cri.go:89] found id: ""
	I0924 01:08:41.544809   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.544818   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:41.544825   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:41.544883   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:41.582924   61989 cri.go:89] found id: ""
	I0924 01:08:41.582954   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.582965   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:41.582973   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:41.583037   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:41.618220   61989 cri.go:89] found id: ""
	I0924 01:08:41.618243   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.618253   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:41.618260   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:41.618329   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:41.653369   61989 cri.go:89] found id: ""
	I0924 01:08:41.653392   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.653400   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:41.653416   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:41.653477   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:41.687036   61989 cri.go:89] found id: ""
	I0924 01:08:41.687058   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.687069   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:41.687077   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:41.687133   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:41.720701   61989 cri.go:89] found id: ""
	I0924 01:08:41.720732   61989 logs.go:276] 0 containers: []
	W0924 01:08:41.720744   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:41.720756   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:41.720776   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:41.798436   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:41.798486   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:41.842639   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:41.842674   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:41.893053   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:41.893086   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:41.907757   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:41.907784   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:41.973466   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:44.474071   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:44.487057   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:44.487119   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:44.521772   61989 cri.go:89] found id: ""
	I0924 01:08:44.521813   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.521835   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:08:44.521843   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:44.521905   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:44.554928   61989 cri.go:89] found id: ""
	I0924 01:08:44.554956   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.554966   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:08:44.554977   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:44.555042   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:44.594246   61989 cri.go:89] found id: ""
	I0924 01:08:44.594279   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.594292   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:08:44.594298   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:44.594344   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:44.629779   61989 cri.go:89] found id: ""
	I0924 01:08:44.629807   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.629819   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:08:44.629827   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:44.629884   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:44.671671   61989 cri.go:89] found id: ""
	I0924 01:08:44.671694   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.671701   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:08:44.671707   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:44.671772   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:44.710875   61989 cri.go:89] found id: ""
	I0924 01:08:44.710910   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.710922   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:08:44.710931   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:44.711000   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:44.744345   61989 cri.go:89] found id: ""
	I0924 01:08:44.744381   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.744389   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:44.744395   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:08:44.744442   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:08:44.780771   61989 cri.go:89] found id: ""
	I0924 01:08:44.780797   61989 logs.go:276] 0 containers: []
	W0924 01:08:44.780804   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:08:44.780812   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:44.780824   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:44.834902   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:44.834958   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:44.848503   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:44.848540   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:08:44.923117   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:08:44.923138   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:44.923150   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:45.003806   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:08:45.003840   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:46.184585   61323 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.282824063s)
	I0924 01:08:46.184659   61323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 01:08:46.201715   61323 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 01:08:46.215637   61323 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 01:08:46.228701   61323 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 01:08:46.228726   61323 kubeadm.go:157] found existing configuration files:
	
	I0924 01:08:46.228769   61323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 01:08:46.239005   61323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 01:08:46.239065   61323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 01:08:46.250336   61323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 01:08:46.259889   61323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 01:08:46.259961   61323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 01:08:46.271773   61323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 01:08:46.283106   61323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 01:08:46.283175   61323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 01:08:46.293325   61323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 01:08:46.306026   61323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 01:08:46.306111   61323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 01:08:46.318859   61323 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 01:08:46.373819   61323 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0924 01:08:46.373973   61323 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 01:08:46.487006   61323 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 01:08:46.487146   61323 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 01:08:46.487299   61323 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0924 01:08:46.495557   61323 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 01:08:46.497537   61323 out.go:235]   - Generating certificates and keys ...
	I0924 01:08:46.497645   61323 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 01:08:46.497732   61323 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 01:08:46.497853   61323 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 01:08:46.497946   61323 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 01:08:46.498041   61323 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 01:08:46.498116   61323 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 01:08:46.498197   61323 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 01:08:46.498280   61323 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 01:08:46.498389   61323 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 01:08:46.498490   61323 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 01:08:46.498547   61323 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 01:08:46.498623   61323 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 01:08:46.714556   61323 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 01:08:46.815030   61323 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0924 01:08:47.011082   61323 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 01:08:47.227052   61323 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 01:08:47.488776   61323 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 01:08:47.489403   61323 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 01:08:47.491864   61323 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 01:08:43.728646   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:46.234412   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:45.029064   61699 pod_ready.go:103] pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:45.029109   61699 pod_ready.go:82] duration metric: took 4m0.007887151s for pod "metrics-server-6867b74b74-jtx6r" in "kube-system" namespace to be "Ready" ...
	E0924 01:08:45.029124   61699 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0924 01:08:45.029133   61699 pod_ready.go:39] duration metric: took 4m5.860472412s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:08:45.029153   61699 api_server.go:52] waiting for apiserver process to appear ...
	I0924 01:08:45.029189   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:45.029267   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:45.084875   61699 cri.go:89] found id: "306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7"
	I0924 01:08:45.084899   61699 cri.go:89] found id: ""
	I0924 01:08:45.084907   61699 logs.go:276] 1 containers: [306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7]
	I0924 01:08:45.084955   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:45.089534   61699 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:45.089601   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:45.133457   61699 cri.go:89] found id: "2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2"
	I0924 01:08:45.133479   61699 cri.go:89] found id: ""
	I0924 01:08:45.133486   61699 logs.go:276] 1 containers: [2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2]
	I0924 01:08:45.133544   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:45.137523   61699 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:45.137586   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:45.173989   61699 cri.go:89] found id: "ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f"
	I0924 01:08:45.174014   61699 cri.go:89] found id: ""
	I0924 01:08:45.174023   61699 logs.go:276] 1 containers: [ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f]
	I0924 01:08:45.174083   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:45.178084   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:45.178168   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:45.215763   61699 cri.go:89] found id: "58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f"
	I0924 01:08:45.215790   61699 cri.go:89] found id: ""
	I0924 01:08:45.215799   61699 logs.go:276] 1 containers: [58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f]
	I0924 01:08:45.215851   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:45.220052   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:45.220137   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:45.258186   61699 cri.go:89] found id: "f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc"
	I0924 01:08:45.258206   61699 cri.go:89] found id: ""
	I0924 01:08:45.258213   61699 logs.go:276] 1 containers: [f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc]
	I0924 01:08:45.258272   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:45.262402   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:45.262481   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:45.299355   61699 cri.go:89] found id: "55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba"
	I0924 01:08:45.299385   61699 cri.go:89] found id: ""
	I0924 01:08:45.299397   61699 logs.go:276] 1 containers: [55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba]
	I0924 01:08:45.299452   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:45.303404   61699 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:45.303505   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:45.341412   61699 cri.go:89] found id: ""
	I0924 01:08:45.341438   61699 logs.go:276] 0 containers: []
	W0924 01:08:45.341446   61699 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:45.341452   61699 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0924 01:08:45.341508   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0924 01:08:45.377419   61699 cri.go:89] found id: "7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47"
	I0924 01:08:45.377450   61699 cri.go:89] found id: "e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559"
	I0924 01:08:45.377457   61699 cri.go:89] found id: ""
	I0924 01:08:45.377471   61699 logs.go:276] 2 containers: [7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47 e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559]
	I0924 01:08:45.377539   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:45.381497   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:45.385182   61699 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:45.385201   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:45.455618   61699 logs.go:123] Gathering logs for coredns [ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f] ...
	I0924 01:08:45.455661   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f"
	I0924 01:08:45.495007   61699 logs.go:123] Gathering logs for kube-proxy [f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc] ...
	I0924 01:08:45.495037   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc"
	I0924 01:08:45.530196   61699 logs.go:123] Gathering logs for kube-controller-manager [55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba] ...
	I0924 01:08:45.530230   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba"
	I0924 01:08:45.581671   61699 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:45.581709   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:46.122674   61699 logs.go:123] Gathering logs for container status ...
	I0924 01:08:46.122717   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:46.169928   61699 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:46.169965   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:46.184617   61699 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:46.184645   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 01:08:46.330482   61699 logs.go:123] Gathering logs for kube-apiserver [306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7] ...
	I0924 01:08:46.330512   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7"
	I0924 01:08:46.382927   61699 logs.go:123] Gathering logs for etcd [2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2] ...
	I0924 01:08:46.382965   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2"
	I0924 01:08:46.441408   61699 logs.go:123] Gathering logs for kube-scheduler [58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f] ...
	I0924 01:08:46.441442   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f"
	I0924 01:08:46.484956   61699 logs.go:123] Gathering logs for storage-provisioner [7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47] ...
	I0924 01:08:46.484985   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47"
	I0924 01:08:46.522559   61699 logs.go:123] Gathering logs for storage-provisioner [e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559] ...
	I0924 01:08:46.522595   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559"
	I0924 01:08:49.064954   61699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:49.086621   61699 api_server.go:72] duration metric: took 4m15.650065328s to wait for apiserver process to appear ...
	I0924 01:08:49.086648   61699 api_server.go:88] waiting for apiserver healthz status ...
	I0924 01:08:49.086695   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:49.086760   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:47.541843   61989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:08:47.555428   61989 kubeadm.go:597] duration metric: took 4m2.297219084s to restartPrimaryControlPlane
	W0924 01:08:47.555528   61989 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0924 01:08:47.555560   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0924 01:08:49.123410   61989 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.567825503s)
	I0924 01:08:49.123501   61989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 01:08:49.142686   61989 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 01:08:49.154484   61989 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 01:08:49.166734   61989 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 01:08:49.166759   61989 kubeadm.go:157] found existing configuration files:
	
	I0924 01:08:49.166813   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 01:08:49.178374   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 01:08:49.178517   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 01:08:49.188871   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 01:08:49.200190   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 01:08:49.200258   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 01:08:49.212895   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 01:08:49.225205   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 01:08:49.225276   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 01:08:49.237828   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 01:08:49.249686   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 01:08:49.249751   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 01:08:49.262505   61989 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 01:08:49.338624   61989 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0924 01:08:49.338712   61989 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 01:08:49.509271   61989 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 01:08:49.509489   61989 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 01:08:49.509636   61989 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0924 01:08:49.724434   61989 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 01:08:47.494323   61323 out.go:235]   - Booting up control plane ...
	I0924 01:08:47.494449   61323 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 01:08:47.494527   61323 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 01:08:47.494904   61323 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 01:08:47.511889   61323 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 01:08:47.518272   61323 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 01:08:47.518343   61323 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 01:08:47.654121   61323 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0924 01:08:47.654273   61323 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0924 01:08:48.156008   61323 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.075879ms
	I0924 01:08:48.156089   61323 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0924 01:08:49.726458   61989 out.go:235]   - Generating certificates and keys ...
	I0924 01:08:49.726563   61989 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 01:08:49.726639   61989 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 01:08:49.726737   61989 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 01:08:49.726812   61989 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 01:08:49.727078   61989 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 01:08:49.727375   61989 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 01:08:49.728123   61989 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 01:08:49.729254   61989 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 01:08:49.730178   61989 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 01:08:49.732548   61989 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 01:08:49.732604   61989 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 01:08:49.732676   61989 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 01:08:49.938623   61989 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 01:08:50.774207   61989 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 01:08:51.022535   61989 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 01:08:51.148690   61989 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 01:08:51.168786   61989 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 01:08:51.170070   61989 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 01:08:51.170150   61989 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 01:08:51.342671   61989 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 01:08:48.729168   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:50.729197   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:52.729615   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:53.660805   61323 kubeadm.go:310] [api-check] The API server is healthy after 5.502700892s
	I0924 01:08:53.678006   61323 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0924 01:08:53.693676   61323 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0924 01:08:53.736910   61323 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0924 01:08:53.737186   61323 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-650507 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0924 01:08:53.750738   61323 kubeadm.go:310] [bootstrap-token] Using token: 62empn.zvptxpa69xtioeo1
	I0924 01:08:49.139835   61699 cri.go:89] found id: "306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7"
	I0924 01:08:49.139859   61699 cri.go:89] found id: ""
	I0924 01:08:49.139869   61699 logs.go:276] 1 containers: [306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7]
	I0924 01:08:49.139920   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:49.144770   61699 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:49.144896   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:49.193710   61699 cri.go:89] found id: "2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2"
	I0924 01:08:49.193733   61699 cri.go:89] found id: ""
	I0924 01:08:49.193743   61699 logs.go:276] 1 containers: [2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2]
	I0924 01:08:49.193798   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:49.198090   61699 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:49.198178   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:49.240236   61699 cri.go:89] found id: "ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f"
	I0924 01:08:49.240309   61699 cri.go:89] found id: ""
	I0924 01:08:49.240344   61699 logs.go:276] 1 containers: [ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f]
	I0924 01:08:49.240401   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:49.244573   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:49.244646   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:49.290954   61699 cri.go:89] found id: "58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f"
	I0924 01:08:49.290998   61699 cri.go:89] found id: ""
	I0924 01:08:49.291008   61699 logs.go:276] 1 containers: [58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f]
	I0924 01:08:49.291083   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:49.295602   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:49.295664   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:49.340871   61699 cri.go:89] found id: "f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc"
	I0924 01:08:49.340894   61699 cri.go:89] found id: ""
	I0924 01:08:49.340904   61699 logs.go:276] 1 containers: [f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc]
	I0924 01:08:49.340964   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:49.345362   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:49.345433   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:49.387382   61699 cri.go:89] found id: "55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba"
	I0924 01:08:49.387408   61699 cri.go:89] found id: ""
	I0924 01:08:49.387418   61699 logs.go:276] 1 containers: [55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba]
	I0924 01:08:49.387472   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:49.393388   61699 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:49.393468   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:49.436082   61699 cri.go:89] found id: ""
	I0924 01:08:49.436107   61699 logs.go:276] 0 containers: []
	W0924 01:08:49.436119   61699 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:49.436126   61699 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0924 01:08:49.436187   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0924 01:08:49.490172   61699 cri.go:89] found id: "7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47"
	I0924 01:08:49.490197   61699 cri.go:89] found id: "e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559"
	I0924 01:08:49.490203   61699 cri.go:89] found id: ""
	I0924 01:08:49.490213   61699 logs.go:276] 2 containers: [7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47 e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559]
	I0924 01:08:49.490273   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:49.495438   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:49.500506   61699 logs.go:123] Gathering logs for kube-apiserver [306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7] ...
	I0924 01:08:49.500537   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7"
	I0924 01:08:49.561240   61699 logs.go:123] Gathering logs for kube-proxy [f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc] ...
	I0924 01:08:49.561277   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc"
	I0924 01:08:49.611765   61699 logs.go:123] Gathering logs for kube-controller-manager [55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba] ...
	I0924 01:08:49.611807   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba"
	I0924 01:08:49.689366   61699 logs.go:123] Gathering logs for container status ...
	I0924 01:08:49.689413   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:49.747233   61699 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:49.747271   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:49.852723   61699 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:49.852771   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 01:08:50.006274   61699 logs.go:123] Gathering logs for etcd [2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2] ...
	I0924 01:08:50.006322   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2"
	I0924 01:08:50.064786   61699 logs.go:123] Gathering logs for coredns [ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f] ...
	I0924 01:08:50.064828   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f"
	I0924 01:08:50.104831   61699 logs.go:123] Gathering logs for kube-scheduler [58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f] ...
	I0924 01:08:50.104865   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f"
	I0924 01:08:50.144962   61699 logs.go:123] Gathering logs for storage-provisioner [7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47] ...
	I0924 01:08:50.144990   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47"
	I0924 01:08:50.183923   61699 logs.go:123] Gathering logs for storage-provisioner [e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559] ...
	I0924 01:08:50.183956   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559"
	I0924 01:08:50.222382   61699 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:50.222414   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:50.671849   61699 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:50.671890   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
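	[editor's note] The "Gathering logs for ..." lines above all follow the same pattern: discover container IDs with `crictl ps -a --quiet --name=<component>`, then tail each one with `sudo /usr/bin/crictl logs --tail 400 <id>` (plus journalctl for kubelet and CRI-O). The Go sketch below reproduces that pattern locally rather than over SSH as minikube's ssh_runner does; the container IDs are placeholders and `crictl` is assumed to be on PATH.

	// logdump.go: minimal sketch of the crictl log-gathering loop seen above.
	package main

	import (
	    "fmt"
	    "os/exec"
	    "strings"
	)

	// tailContainerLogs runs `sudo crictl logs --tail <n> <id>`, the same command
	// the log lines above show being executed on the node.
	func tailContainerLogs(id string, n int) (string, error) {
	    out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
	    return string(out), err
	}

	func main() {
	    // Hypothetical IDs; in the log above they come from `crictl ps -a --quiet --name=<component>`.
	    ids := map[string]string{
	        "kube-apiserver": "306da3fd311a",
	        "etcd":           "2c9f89868c71",
	    }
	    for name, id := range ids {
	        logs, err := tailContainerLogs(id, 400)
	        if err != nil {
	            fmt.Printf("gathering %s logs failed: %v\n", name, err)
	            continue
	        }
	        fmt.Printf("=== %s ===\n%s\n", name, strings.TrimSpace(logs))
	    }
	}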
	I0924 01:08:53.187450   61699 api_server.go:253] Checking apiserver healthz at https://192.168.61.186:8444/healthz ...
	I0924 01:08:53.193094   61699 api_server.go:279] https://192.168.61.186:8444/healthz returned 200:
	ok
	I0924 01:08:53.194414   61699 api_server.go:141] control plane version: v1.31.1
	I0924 01:08:53.194439   61699 api_server.go:131] duration metric: took 4.107783011s to wait for apiserver health ...
	I0924 01:08:53.194449   61699 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 01:08:53.194479   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:08:53.194546   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:08:53.232560   61699 cri.go:89] found id: "306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7"
	I0924 01:08:53.232584   61699 cri.go:89] found id: ""
	I0924 01:08:53.232594   61699 logs.go:276] 1 containers: [306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7]
	I0924 01:08:53.232649   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:53.236611   61699 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:08:53.236671   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:08:53.278194   61699 cri.go:89] found id: "2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2"
	I0924 01:08:53.278223   61699 cri.go:89] found id: ""
	I0924 01:08:53.278233   61699 logs.go:276] 1 containers: [2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2]
	I0924 01:08:53.278291   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:53.283330   61699 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:08:53.283391   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:08:53.322371   61699 cri.go:89] found id: "ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f"
	I0924 01:08:53.322399   61699 cri.go:89] found id: ""
	I0924 01:08:53.322408   61699 logs.go:276] 1 containers: [ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f]
	I0924 01:08:53.322459   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:53.326794   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:08:53.326869   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:08:53.364035   61699 cri.go:89] found id: "58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f"
	I0924 01:08:53.364064   61699 cri.go:89] found id: ""
	I0924 01:08:53.364075   61699 logs.go:276] 1 containers: [58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f]
	I0924 01:08:53.364140   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:53.368065   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:08:53.368127   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:08:53.405651   61699 cri.go:89] found id: "f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc"
	I0924 01:08:53.405679   61699 cri.go:89] found id: ""
	I0924 01:08:53.405687   61699 logs.go:276] 1 containers: [f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc]
	I0924 01:08:53.405754   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:53.410451   61699 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:08:53.410537   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:08:53.451079   61699 cri.go:89] found id: "55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba"
	I0924 01:08:53.451111   61699 cri.go:89] found id: ""
	I0924 01:08:53.451121   61699 logs.go:276] 1 containers: [55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba]
	I0924 01:08:53.451183   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:53.456272   61699 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:08:53.456367   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:08:53.497323   61699 cri.go:89] found id: ""
	I0924 01:08:53.497360   61699 logs.go:276] 0 containers: []
	W0924 01:08:53.497373   61699 logs.go:278] No container was found matching "kindnet"
	I0924 01:08:53.497387   61699 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0924 01:08:53.497461   61699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0924 01:08:53.536017   61699 cri.go:89] found id: "7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47"
	I0924 01:08:53.536040   61699 cri.go:89] found id: "e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559"
	I0924 01:08:53.536046   61699 cri.go:89] found id: ""
	I0924 01:08:53.536055   61699 logs.go:276] 2 containers: [7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47 e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559]
	I0924 01:08:53.536114   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:53.542413   61699 ssh_runner.go:195] Run: which crictl
	I0924 01:08:53.546559   61699 logs.go:123] Gathering logs for dmesg ...
	I0924 01:08:53.546592   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:08:53.560292   61699 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:08:53.560323   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 01:08:53.684820   61699 logs.go:123] Gathering logs for etcd [2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2] ...
	I0924 01:08:53.684849   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c9f89868c713eaa1654e5870be979b5497645b58484e8aae4e682116ec840c2"
	I0924 01:08:53.734483   61699 logs.go:123] Gathering logs for coredns [ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f] ...
	I0924 01:08:53.734519   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddbd1006bd609e6a0f91f69b8dab8cb13d13321a96faf5ddc12fd0078ba2975f"
	I0924 01:08:53.780676   61699 logs.go:123] Gathering logs for kubelet ...
	I0924 01:08:53.780705   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:08:53.855917   61699 logs.go:123] Gathering logs for kube-scheduler [58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f] ...
	I0924 01:08:53.855960   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 58d05b91989bd9714e3f8243212e7378fe03c284e53299d41ff8cc250515754f"
	I0924 01:08:53.906926   61699 logs.go:123] Gathering logs for kube-proxy [f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc] ...
	I0924 01:08:53.906962   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f31b7aed1cdf7d2387235d153e0d9fd925a6c084a0a1e1dafc7d5872b83f25cc"
	I0924 01:08:53.953992   61699 logs.go:123] Gathering logs for kube-controller-manager [55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba] ...
	I0924 01:08:53.954019   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55e01b5780ebec9ec196f5eecf1a725b63377fc28fd298855a347c2ce07231ba"
	I0924 01:08:54.031302   61699 logs.go:123] Gathering logs for storage-provisioner [7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47] ...
	I0924 01:08:54.031350   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b621e1c0feb54fb2667f228fb9e143219c1301c554ec7d6c38da228783b6c47"
	I0924 01:08:54.073918   61699 logs.go:123] Gathering logs for storage-provisioner [e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559] ...
	I0924 01:08:54.073958   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e76f05331da2ee708cc8cd6a54cbd76c5ae4310fbf75eb6721c71ac564b7a559"
	I0924 01:08:54.108724   61699 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:08:54.108765   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:08:53.752460   61323 out.go:235]   - Configuring RBAC rules ...
	I0924 01:08:53.752626   61323 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0924 01:08:53.758889   61323 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0924 01:08:53.767101   61323 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0924 01:08:53.770943   61323 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0924 01:08:53.775335   61323 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0924 01:08:53.792963   61323 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0924 01:08:54.070193   61323 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0924 01:08:54.526226   61323 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0924 01:08:55.069668   61323 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0924 01:08:55.070678   61323 kubeadm.go:310] 
	I0924 01:08:55.070751   61323 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0924 01:08:55.070761   61323 kubeadm.go:310] 
	I0924 01:08:55.070844   61323 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0924 01:08:55.070860   61323 kubeadm.go:310] 
	I0924 01:08:55.070910   61323 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0924 01:08:55.070998   61323 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0924 01:08:55.071064   61323 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0924 01:08:55.071074   61323 kubeadm.go:310] 
	I0924 01:08:55.071138   61323 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0924 01:08:55.071159   61323 kubeadm.go:310] 
	I0924 01:08:55.071210   61323 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0924 01:08:55.071217   61323 kubeadm.go:310] 
	I0924 01:08:55.071298   61323 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0924 01:08:55.071428   61323 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0924 01:08:55.071525   61323 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0924 01:08:55.071535   61323 kubeadm.go:310] 
	I0924 01:08:55.071640   61323 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0924 01:08:55.071721   61323 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0924 01:08:55.071738   61323 kubeadm.go:310] 
	I0924 01:08:55.071815   61323 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 62empn.zvptxpa69xtioeo1 \
	I0924 01:08:55.071941   61323 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2d4dd9f37d73db6513005d0e16c5a35989d3b102eed8cac0bf67b360d3396aa8 \
	I0924 01:08:55.071971   61323 kubeadm.go:310] 	--control-plane 
	I0924 01:08:55.071984   61323 kubeadm.go:310] 
	I0924 01:08:55.072089   61323 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0924 01:08:55.072098   61323 kubeadm.go:310] 
	I0924 01:08:55.072198   61323 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 62empn.zvptxpa69xtioeo1 \
	I0924 01:08:55.072324   61323 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2d4dd9f37d73db6513005d0e16c5a35989d3b102eed8cac0bf67b360d3396aa8 
	I0924 01:08:55.073807   61323 kubeadm.go:310] W0924 01:08:46.350959    2551 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 01:08:55.074118   61323 kubeadm.go:310] W0924 01:08:46.352529    2551 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 01:08:55.074256   61323 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 01:08:55.074295   61323 cni.go:84] Creating CNI manager for ""
	I0924 01:08:55.074312   61323 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:08:55.076241   61323 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 01:08:55.077630   61323 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 01:08:55.088658   61323 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
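	[editor's note] The line above only records that a 496-byte /etc/cni/net.d/1-k8s.conflist is copied onto the node; the file contents are not included in the log. The sketch below writes a generic bridge + portmap conflist of the kind used for a bridge CNI setup; the subnet, bridge name, and exact fields are illustrative assumptions, not minikube's actual file.

	// cni_conflist.go: illustrative bridge CNI conflist writer (must run as root).
	package main

	import (
	    "log"
	    "os"
	)

	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}`

	func main() {
	    // Target path taken from the log above; the JSON body is a generic example.
	    if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
	        log.Fatal(err)
	    }
	}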
	I0924 01:08:55.106396   61323 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 01:08:55.106491   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:55.106579   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-650507 minikube.k8s.io/updated_at=2024_09_24T01_08_55_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c minikube.k8s.io/name=embed-certs-650507 minikube.k8s.io/primary=true
	I0924 01:08:55.138376   61323 ops.go:34] apiserver oom_adj: -16
	I0924 01:08:51.344458   61989 out.go:235]   - Booting up control plane ...
	I0924 01:08:51.344607   61989 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 01:08:51.353468   61989 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 01:08:51.356949   61989 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 01:08:51.358082   61989 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 01:08:51.364468   61989 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0924 01:08:54.501805   61699 logs.go:123] Gathering logs for container status ...
	I0924 01:08:54.501847   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:08:54.548768   61699 logs.go:123] Gathering logs for kube-apiserver [306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7] ...
	I0924 01:08:54.548800   61699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 306da3fd311af2064a344aac7a033864b4d81b50c1e51a557910de781873e7d7"
	I0924 01:08:57.105661   61699 system_pods.go:59] 8 kube-system pods found
	I0924 01:08:57.105688   61699 system_pods.go:61] "coredns-7c65d6cfc9-xxdh2" [297fe292-94bf-468d-9e34-089c4a87429b] Running
	I0924 01:08:57.105693   61699 system_pods.go:61] "etcd-default-k8s-diff-port-465341" [3bd68a1c-e928-40f0-927f-3cde2198cace] Running
	I0924 01:08:57.105697   61699 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-465341" [0a195b76-82ba-4d99-b5a3-ba918ab0b83d] Running
	I0924 01:08:57.105703   61699 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-465341" [9d445611-60f3-4113-bc92-ea8df37ca2f5] Running
	I0924 01:08:57.105706   61699 system_pods.go:61] "kube-proxy-nf8mp" [cdef3aea-b1a8-438b-994f-c3212def9aea] Running
	I0924 01:08:57.105709   61699 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-465341" [4ff703b1-44cd-421a-891c-9f1e5d799026] Running
	I0924 01:08:57.105715   61699 system_pods.go:61] "metrics-server-6867b74b74-jtx6r" [d83599a7-f77d-4fbb-b76f-67d33c60b4a6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:08:57.105722   61699 system_pods.go:61] "storage-provisioner" [b09ad6ef-7517-4de2-a70c-83876efd804e] Running
	I0924 01:08:57.105729   61699 system_pods.go:74] duration metric: took 3.911274774s to wait for pod list to return data ...
	I0924 01:08:57.105736   61699 default_sa.go:34] waiting for default service account to be created ...
	I0924 01:08:57.108031   61699 default_sa.go:45] found service account: "default"
	I0924 01:08:57.108051   61699 default_sa.go:55] duration metric: took 2.307712ms for default service account to be created ...
	I0924 01:08:57.108059   61699 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 01:08:57.112551   61699 system_pods.go:86] 8 kube-system pods found
	I0924 01:08:57.112578   61699 system_pods.go:89] "coredns-7c65d6cfc9-xxdh2" [297fe292-94bf-468d-9e34-089c4a87429b] Running
	I0924 01:08:57.112584   61699 system_pods.go:89] "etcd-default-k8s-diff-port-465341" [3bd68a1c-e928-40f0-927f-3cde2198cace] Running
	I0924 01:08:57.112589   61699 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-465341" [0a195b76-82ba-4d99-b5a3-ba918ab0b83d] Running
	I0924 01:08:57.112593   61699 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-465341" [9d445611-60f3-4113-bc92-ea8df37ca2f5] Running
	I0924 01:08:57.112597   61699 system_pods.go:89] "kube-proxy-nf8mp" [cdef3aea-b1a8-438b-994f-c3212def9aea] Running
	I0924 01:08:57.112600   61699 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-465341" [4ff703b1-44cd-421a-891c-9f1e5d799026] Running
	I0924 01:08:57.112608   61699 system_pods.go:89] "metrics-server-6867b74b74-jtx6r" [d83599a7-f77d-4fbb-b76f-67d33c60b4a6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:08:57.112613   61699 system_pods.go:89] "storage-provisioner" [b09ad6ef-7517-4de2-a70c-83876efd804e] Running
	I0924 01:08:57.112619   61699 system_pods.go:126] duration metric: took 4.555185ms to wait for k8s-apps to be running ...
	I0924 01:08:57.112625   61699 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 01:08:57.112665   61699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 01:08:57.127805   61699 system_svc.go:56] duration metric: took 15.170368ms WaitForService to wait for kubelet
	I0924 01:08:57.127839   61699 kubeadm.go:582] duration metric: took 4m23.691287248s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 01:08:57.127865   61699 node_conditions.go:102] verifying NodePressure condition ...
	I0924 01:08:57.130964   61699 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 01:08:57.130994   61699 node_conditions.go:123] node cpu capacity is 2
	I0924 01:08:57.131008   61699 node_conditions.go:105] duration metric: took 3.13793ms to run NodePressure ...
	I0924 01:08:57.131021   61699 start.go:241] waiting for startup goroutines ...
	I0924 01:08:57.131029   61699 start.go:246] waiting for cluster config update ...
	I0924 01:08:57.131043   61699 start.go:255] writing updated cluster config ...
	I0924 01:08:57.131388   61699 ssh_runner.go:195] Run: rm -f paused
	I0924 01:08:57.182238   61699 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0924 01:08:57.185023   61699 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-465341" cluster and "default" namespace by default
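	[editor's note] The "minor skew: 0" note above compares the client kubectl version (1.31.1) against the cluster version (1.31.1) on the minor component. A minimal sketch of that comparison, assuming well-formed "major.minor.patch" version strings:

	// skew.go: sketch of the minor-version skew check logged above.
	package main

	import (
	    "fmt"
	    "strconv"
	    "strings"
	)

	// minorSkew returns the absolute difference of the minor version components.
	// Inputs are assumed to have at least "major.minor".
	func minorSkew(kubectlVersion, clusterVersion string) int {
	    minor := func(v string) int {
	        parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	        m, _ := strconv.Atoi(parts[1])
	        return m
	    }
	    skew := minor(kubectlVersion) - minor(clusterVersion)
	    if skew < 0 {
	        skew = -skew
	    }
	    return skew
	}

	func main() {
	    fmt.Println(minorSkew("1.31.1", "1.31.1")) // prints 0, matching the log line above
	}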
	I0924 01:08:55.229370   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:57.729448   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:08:55.285390   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:55.785813   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:56.285570   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:56.785779   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:57.285599   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:57.786401   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:58.285583   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:58.786037   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:59.286404   61323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:08:59.447075   61323 kubeadm.go:1113] duration metric: took 4.340646509s to wait for elevateKubeSystemPrivileges
	I0924 01:08:59.447119   61323 kubeadm.go:394] duration metric: took 4m57.777127509s to StartCluster
	I0924 01:08:59.447141   61323 settings.go:142] acquiring lock: {Name:mk2498060bfcc1414033a486e76a223de88ed325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:08:59.447229   61323 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 01:08:59.449766   61323 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/kubeconfig: {Name:mk3b29105ccc2e600682c419fcfd45ce5d22b5fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:08:59.450091   61323 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 01:08:59.450191   61323 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0924 01:08:59.450308   61323 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-650507"
	I0924 01:08:59.450330   61323 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-650507"
	W0924 01:08:59.450343   61323 addons.go:243] addon storage-provisioner should already be in state true
	I0924 01:08:59.450346   61323 addons.go:69] Setting metrics-server=true in profile "embed-certs-650507"
	I0924 01:08:59.450349   61323 addons.go:69] Setting default-storageclass=true in profile "embed-certs-650507"
	I0924 01:08:59.450366   61323 addons.go:234] Setting addon metrics-server=true in "embed-certs-650507"
	W0924 01:08:59.450374   61323 addons.go:243] addon metrics-server should already be in state true
	I0924 01:08:59.450328   61323 config.go:182] Loaded profile config "embed-certs-650507": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 01:08:59.450381   61323 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-650507"
	I0924 01:08:59.450404   61323 host.go:66] Checking if "embed-certs-650507" exists ...
	I0924 01:08:59.450375   61323 host.go:66] Checking if "embed-certs-650507" exists ...
	I0924 01:08:59.450718   61323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:08:59.450770   61323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:08:59.450805   61323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:08:59.450808   61323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:08:59.450895   61323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:08:59.450842   61323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:08:59.451862   61323 out.go:177] * Verifying Kubernetes components...
	I0924 01:08:59.453214   61323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:08:59.471878   61323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34433
	I0924 01:08:59.472083   61323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46551
	I0924 01:08:59.472239   61323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38089
	I0924 01:08:59.472586   61323 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:08:59.472704   61323 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:08:59.472988   61323 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:08:59.473187   61323 main.go:141] libmachine: Using API Version  1
	I0924 01:08:59.473205   61323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:08:59.473226   61323 main.go:141] libmachine: Using API Version  1
	I0924 01:08:59.473242   61323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:08:59.473418   61323 main.go:141] libmachine: Using API Version  1
	I0924 01:08:59.473433   61323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:08:59.473784   61323 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:08:59.473784   61323 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:08:59.474003   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetState
	I0924 01:08:59.474116   61323 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:08:59.474383   61323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:08:59.474422   61323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:08:59.474591   61323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:08:59.474628   61323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:08:59.478726   61323 addons.go:234] Setting addon default-storageclass=true in "embed-certs-650507"
	W0924 01:08:59.478753   61323 addons.go:243] addon default-storageclass should already be in state true
	I0924 01:08:59.478784   61323 host.go:66] Checking if "embed-certs-650507" exists ...
	I0924 01:08:59.479137   61323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:08:59.479186   61323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:08:59.495021   61323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43089
	I0924 01:08:59.495527   61323 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:08:59.496068   61323 main.go:141] libmachine: Using API Version  1
	I0924 01:08:59.496090   61323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:08:59.496519   61323 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:08:59.496734   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetState
	I0924 01:08:59.498472   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:08:59.498528   61323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39135
	I0924 01:08:59.498971   61323 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:08:59.499485   61323 main.go:141] libmachine: Using API Version  1
	I0924 01:08:59.499498   61323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:08:59.499794   61323 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:08:59.499918   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetState
	I0924 01:08:59.500899   61323 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0924 01:08:59.501731   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:08:59.502154   61323 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0924 01:08:59.502172   61323 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0924 01:08:59.502186   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:08:59.503238   61323 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:08:59.504765   61323 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 01:08:59.504783   61323 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 01:08:59.504801   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:08:59.505483   61323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34577
	I0924 01:08:59.505882   61323 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:08:59.506386   61323 main.go:141] libmachine: Using API Version  1
	I0924 01:08:59.506408   61323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:08:59.506841   61323 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:08:59.507463   61323 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:08:59.507505   61323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:08:59.511098   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:08:59.511611   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:08:59.511645   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:08:59.511944   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:08:59.512127   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:08:59.512296   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:08:59.512493   61323 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa Username:docker}
	I0924 01:08:59.514595   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:08:59.515144   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:08:59.515173   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:08:59.515481   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:08:59.515749   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:08:59.515963   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:08:59.516100   61323 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa Username:docker}
	I0924 01:08:59.529920   61323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36769
	I0924 01:08:59.530565   61323 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:08:59.531102   61323 main.go:141] libmachine: Using API Version  1
	I0924 01:08:59.531125   61323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:08:59.531629   61323 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:08:59.531918   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetState
	I0924 01:08:59.533741   61323 main.go:141] libmachine: (embed-certs-650507) Calling .DriverName
	I0924 01:08:59.533992   61323 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 01:08:59.534007   61323 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 01:08:59.534026   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHHostname
	I0924 01:08:59.537032   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:08:59.537488   61323 main.go:141] libmachine: (embed-certs-650507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:07:2d", ip: ""} in network mk-embed-certs-650507: {Iface:virbr1 ExpiryTime:2024-09-24 02:03:46 +0000 UTC Type:0 Mac:52:54:00:46:07:2d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:embed-certs-650507 Clientid:01:52:54:00:46:07:2d}
	I0924 01:08:59.537515   61323 main.go:141] libmachine: (embed-certs-650507) DBG | domain embed-certs-650507 has defined IP address 192.168.39.104 and MAC address 52:54:00:46:07:2d in network mk-embed-certs-650507
	I0924 01:08:59.537713   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHPort
	I0924 01:08:59.537919   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHKeyPath
	I0924 01:08:59.538074   61323 main.go:141] libmachine: (embed-certs-650507) Calling .GetSSHUsername
	I0924 01:08:59.538198   61323 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/embed-certs-650507/id_rsa Username:docker}
	I0924 01:08:59.680683   61323 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 01:08:59.711414   61323 node_ready.go:35] waiting up to 6m0s for node "embed-certs-650507" to be "Ready" ...
	I0924 01:08:59.721234   61323 node_ready.go:49] node "embed-certs-650507" has status "Ready":"True"
	I0924 01:08:59.721264   61323 node_ready.go:38] duration metric: took 9.820004ms for node "embed-certs-650507" to be "Ready" ...
	I0924 01:08:59.721275   61323 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:08:59.736353   61323 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7295k" in "kube-system" namespace to be "Ready" ...
	I0924 01:08:59.831004   61323 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0924 01:08:59.831041   61323 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0924 01:08:59.871681   61323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 01:08:59.873844   61323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 01:08:59.902662   61323 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0924 01:08:59.902691   61323 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0924 01:08:59.956425   61323 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 01:08:59.956454   61323 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0924 01:08:59.997902   61323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 01:09:01.146340   61323 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.27245536s)
	I0924 01:09:01.146470   61323 main.go:141] libmachine: Making call to close driver server
	I0924 01:09:01.146505   61323 main.go:141] libmachine: (embed-certs-650507) Calling .Close
	I0924 01:09:01.146403   61323 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.274685832s)
	I0924 01:09:01.146582   61323 main.go:141] libmachine: Making call to close driver server
	I0924 01:09:01.146602   61323 main.go:141] libmachine: (embed-certs-650507) Calling .Close
	I0924 01:09:01.146819   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Closing plugin on server side
	I0924 01:09:01.146848   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Closing plugin on server side
	I0924 01:09:01.146868   61323 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:09:01.146875   61323 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:09:01.146882   61323 main.go:141] libmachine: Making call to close driver server
	I0924 01:09:01.146892   61323 main.go:141] libmachine: (embed-certs-650507) Calling .Close
	I0924 01:09:01.146967   61323 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:09:01.146990   61323 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:09:01.147007   61323 main.go:141] libmachine: Making call to close driver server
	I0924 01:09:01.147023   61323 main.go:141] libmachine: (embed-certs-650507) Calling .Close
	I0924 01:09:01.147084   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Closing plugin on server side
	I0924 01:09:01.147117   61323 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:09:01.147133   61323 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:09:01.147370   61323 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:09:01.147392   61323 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:09:01.147378   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Closing plugin on server side
	I0924 01:09:01.180574   61323 main.go:141] libmachine: Making call to close driver server
	I0924 01:09:01.180604   61323 main.go:141] libmachine: (embed-certs-650507) Calling .Close
	I0924 01:09:01.180929   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Closing plugin on server side
	I0924 01:09:01.180977   61323 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:09:01.180986   61323 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:09:01.207538   61323 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.209569759s)
	I0924 01:09:01.207600   61323 main.go:141] libmachine: Making call to close driver server
	I0924 01:09:01.207616   61323 main.go:141] libmachine: (embed-certs-650507) Calling .Close
	I0924 01:09:01.207959   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Closing plugin on server side
	I0924 01:09:01.208002   61323 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:09:01.208011   61323 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:09:01.208019   61323 main.go:141] libmachine: Making call to close driver server
	I0924 01:09:01.208028   61323 main.go:141] libmachine: (embed-certs-650507) Calling .Close
	I0924 01:09:01.208377   61323 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:09:01.208392   61323 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:09:01.208402   61323 addons.go:475] Verifying addon metrics-server=true in "embed-certs-650507"
	I0924 01:09:01.208411   61323 main.go:141] libmachine: (embed-certs-650507) DBG | Closing plugin on server side
	I0924 01:09:01.210500   61323 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0924 01:08:59.731184   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:02.229737   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:01.211900   61323 addons.go:510] duration metric: took 1.761718139s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0924 01:09:01.751463   61323 pod_ready.go:103] pod "coredns-7c65d6cfc9-7295k" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:04.242260   61323 pod_ready.go:103] pod "coredns-7c65d6cfc9-7295k" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:04.728708   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:06.728878   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:06.243002   61323 pod_ready.go:93] pod "coredns-7c65d6cfc9-7295k" in "kube-system" namespace has status "Ready":"True"
	I0924 01:09:06.243030   61323 pod_ready.go:82] duration metric: took 6.506649267s for pod "coredns-7c65d6cfc9-7295k" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:06.243039   61323 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-r6tcj" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:08.249949   61323 pod_ready.go:103] pod "coredns-7c65d6cfc9-r6tcj" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:09.750009   61323 pod_ready.go:93] pod "coredns-7c65d6cfc9-r6tcj" in "kube-system" namespace has status "Ready":"True"
	I0924 01:09:09.750037   61323 pod_ready.go:82] duration metric: took 3.506990291s for pod "coredns-7c65d6cfc9-r6tcj" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:09.750049   61323 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:09.756600   61323 pod_ready.go:93] pod "etcd-embed-certs-650507" in "kube-system" namespace has status "Ready":"True"
	I0924 01:09:09.756626   61323 pod_ready.go:82] duration metric: took 6.570047ms for pod "etcd-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:09.756635   61323 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:09.762209   61323 pod_ready.go:93] pod "kube-apiserver-embed-certs-650507" in "kube-system" namespace has status "Ready":"True"
	I0924 01:09:09.762235   61323 pod_ready.go:82] duration metric: took 5.593257ms for pod "kube-apiserver-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:09.762248   61323 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:09.772052   61323 pod_ready.go:93] pod "kube-controller-manager-embed-certs-650507" in "kube-system" namespace has status "Ready":"True"
	I0924 01:09:09.772075   61323 pod_ready.go:82] duration metric: took 9.818627ms for pod "kube-controller-manager-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:09.772088   61323 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mwtkg" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:09.777733   61323 pod_ready.go:93] pod "kube-proxy-mwtkg" in "kube-system" namespace has status "Ready":"True"
	I0924 01:09:09.777765   61323 pod_ready.go:82] duration metric: took 5.669609ms for pod "kube-proxy-mwtkg" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:09.777778   61323 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:10.146804   61323 pod_ready.go:93] pod "kube-scheduler-embed-certs-650507" in "kube-system" namespace has status "Ready":"True"
	I0924 01:09:10.146833   61323 pod_ready.go:82] duration metric: took 369.046066ms for pod "kube-scheduler-embed-certs-650507" in "kube-system" namespace to be "Ready" ...
	I0924 01:09:10.146844   61323 pod_ready.go:39] duration metric: took 10.425557831s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
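	[editor's note] The pod_ready.go lines above poll each system-critical pod until its "Ready" condition is True. A minimal client-go sketch of that check follows; the kubeconfig path appears in the log but lives on the VM, and the pod name is just one of the pods listed above, so treat both as placeholders.

	// podready.go: sketch of a single "Ready" condition check like the ones logged above.
	package main

	import (
	    "context"
	    "fmt"

	    corev1 "k8s.io/api/core/v1"
	    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    "k8s.io/client-go/kubernetes"
	    "k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's PodReady condition is True.
	func isPodReady(pod *corev1.Pod) bool {
	    for _, c := range pod.Status.Conditions {
	        if c.Type == corev1.PodReady {
	            return c.Status == corev1.ConditionTrue
	        }
	    }
	    return false
	}

	func main() {
	    cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // path as seen in the log; placeholder off-node
	    if err != nil {
	        panic(err)
	    }
	    clientset, err := kubernetes.NewForConfig(cfg)
	    if err != nil {
	        panic(err)
	    }
	    pod, err := clientset.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-embed-certs-650507", metav1.GetOptions{})
	    if err != nil {
	        panic(err)
	    }
	    fmt.Printf("pod %s Ready=%v\n", pod.Name, isPodReady(pod))
	}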
	I0924 01:09:10.146861   61323 api_server.go:52] waiting for apiserver process to appear ...
	I0924 01:09:10.146918   61323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:09:10.162335   61323 api_server.go:72] duration metric: took 10.712204486s to wait for apiserver process to appear ...
	I0924 01:09:10.162360   61323 api_server.go:88] waiting for apiserver healthz status ...
	I0924 01:09:10.162381   61323 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I0924 01:09:10.166693   61323 api_server.go:279] https://192.168.39.104:8443/healthz returned 200:
	ok
	I0924 01:09:10.167700   61323 api_server.go:141] control plane version: v1.31.1
	I0924 01:09:10.167723   61323 api_server.go:131] duration metric: took 5.355716ms to wait for apiserver health ...
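The healthz probe above can be reproduced by hand against the same endpoint. A minimal sketch only; the 192.168.39.104:8443 address is specific to this embed-certs-650507 run, and the CA path is the usual minikube location on the guest, assumed here:

# from the host, skipping TLS verification for brevity
curl -k https://192.168.39.104:8443/healthz
# or, inside the guest, trusting the cluster CA (path assumed):
curl --cacert /var/lib/minikube/certs/ca.crt https://192.168.39.104:8443/healthz
# a healthy apiserver answers HTTP 200 with the body "ok", as logged above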
	I0924 01:09:10.167734   61323 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 01:09:10.351584   61323 system_pods.go:59] 9 kube-system pods found
	I0924 01:09:10.351621   61323 system_pods.go:61] "coredns-7c65d6cfc9-7295k" [3261d435-8cb5-4712-8459-26ba766e88e0] Running
	I0924 01:09:10.351629   61323 system_pods.go:61] "coredns-7c65d6cfc9-r6tcj" [df80e9b5-4b43-4b8f-992e-8813ceca39fe] Running
	I0924 01:09:10.351634   61323 system_pods.go:61] "etcd-embed-certs-650507" [1d21c395-ebec-4895-a1b6-11e35c799698] Running
	I0924 01:09:10.351640   61323 system_pods.go:61] "kube-apiserver-embed-certs-650507" [f7f13b75-3ed1-4e04-857f-27e71258ffd7] Running
	I0924 01:09:10.351645   61323 system_pods.go:61] "kube-controller-manager-embed-certs-650507" [4e68c892-06b6-49f1-adab-25c569f95a9a] Running
	I0924 01:09:10.351650   61323 system_pods.go:61] "kube-proxy-mwtkg" [6a893121-8161-4fbc-bb59-1e08483e82b8] Running
	I0924 01:09:10.351655   61323 system_pods.go:61] "kube-scheduler-embed-certs-650507" [bacd126d-7f4f-460b-85c5-17721247d5a5] Running
	I0924 01:09:10.351669   61323 system_pods.go:61] "metrics-server-6867b74b74-lbm9h" [fa504c09-2e16-4a5f-b4b3-a47f0733333d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:09:10.351678   61323 system_pods.go:61] "storage-provisioner" [364a4d4a-7316-48d0-a3e1-1dedff564dfb] Running
	I0924 01:09:10.351692   61323 system_pods.go:74] duration metric: took 183.950994ms to wait for pod list to return data ...
	I0924 01:09:10.351704   61323 default_sa.go:34] waiting for default service account to be created ...
	I0924 01:09:10.547564   61323 default_sa.go:45] found service account: "default"
	I0924 01:09:10.547595   61323 default_sa.go:55] duration metric: took 195.882659ms for default service account to be created ...
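An equivalent manual check for the default service account (a sketch; the kubectl context name is assumed to match the profile name):

kubectl --context embed-certs-650507 -n default get serviceaccount default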
	I0924 01:09:10.547605   61323 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 01:09:10.750290   61323 system_pods.go:86] 9 kube-system pods found
	I0924 01:09:10.750327   61323 system_pods.go:89] "coredns-7c65d6cfc9-7295k" [3261d435-8cb5-4712-8459-26ba766e88e0] Running
	I0924 01:09:10.750336   61323 system_pods.go:89] "coredns-7c65d6cfc9-r6tcj" [df80e9b5-4b43-4b8f-992e-8813ceca39fe] Running
	I0924 01:09:10.750344   61323 system_pods.go:89] "etcd-embed-certs-650507" [1d21c395-ebec-4895-a1b6-11e35c799698] Running
	I0924 01:09:10.750352   61323 system_pods.go:89] "kube-apiserver-embed-certs-650507" [f7f13b75-3ed1-4e04-857f-27e71258ffd7] Running
	I0924 01:09:10.750359   61323 system_pods.go:89] "kube-controller-manager-embed-certs-650507" [4e68c892-06b6-49f1-adab-25c569f95a9a] Running
	I0924 01:09:10.750366   61323 system_pods.go:89] "kube-proxy-mwtkg" [6a893121-8161-4fbc-bb59-1e08483e82b8] Running
	I0924 01:09:10.750372   61323 system_pods.go:89] "kube-scheduler-embed-certs-650507" [bacd126d-7f4f-460b-85c5-17721247d5a5] Running
	I0924 01:09:10.750382   61323 system_pods.go:89] "metrics-server-6867b74b74-lbm9h" [fa504c09-2e16-4a5f-b4b3-a47f0733333d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:09:10.750391   61323 system_pods.go:89] "storage-provisioner" [364a4d4a-7316-48d0-a3e1-1dedff564dfb] Running
	I0924 01:09:10.750407   61323 system_pods.go:126] duration metric: took 202.795975ms to wait for k8s-apps to be running ...
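The same kube-system inventory can be inspected directly; again a sketch, assuming the context name matches the profile:

kubectl --context embed-certs-650507 -n kube-system get pods -o wide
# metrics-server stays Pending / not ready at this point, as logged above;
# the remaining pods should all report Running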
	I0924 01:09:10.750416   61323 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 01:09:10.750476   61323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 01:09:10.765539   61323 system_svc.go:56] duration metric: took 15.112281ms WaitForService to wait for kubelet
	I0924 01:09:10.765569   61323 kubeadm.go:582] duration metric: took 11.31544538s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 01:09:10.765588   61323 node_conditions.go:102] verifying NodePressure condition ...
	I0924 01:09:10.947628   61323 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 01:09:10.947654   61323 node_conditions.go:123] node cpu capacity is 2
	I0924 01:09:10.947664   61323 node_conditions.go:105] duration metric: took 182.072269ms to run NodePressure ...
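The NodePressure step reads the node's reported capacity; the same figures are visible with a jsonpath query (sketch, node name taken from this run):

kubectl get node embed-certs-650507 \
  -o jsonpath='cpu={.status.capacity.cpu} ephemeral-storage={.status.capacity.ephemeral-storage}{"\n"}'
# expected here: cpu=2 ephemeral-storage=17734596Ki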
	I0924 01:09:10.947674   61323 start.go:241] waiting for startup goroutines ...
	I0924 01:09:10.947681   61323 start.go:246] waiting for cluster config update ...
	I0924 01:09:10.947691   61323 start.go:255] writing updated cluster config ...
	I0924 01:09:10.947955   61323 ssh_runner.go:195] Run: rm -f paused
	I0924 01:09:10.999208   61323 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0924 01:09:11.001392   61323 out.go:177] * Done! kubectl is now configured to use "embed-certs-650507" cluster and "default" namespace by default
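Once minikube reports Done, the cluster is reachable through the kubeconfig it just wrote; a quick sanity check (the context name is assumed to equal the profile name, minikube's default):

kubectl config use-context embed-certs-650507
kubectl get nodes
kubectl cluster-info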
	I0924 01:09:08.729391   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:11.229036   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:13.727852   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:16.229362   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:18.727643   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:20.729183   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:22.731323   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:25.228514   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:27.728747   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:29.729150   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:32.228197   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:31.365725   61989 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0924 01:09:31.366444   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:09:31.366704   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:09:34.729441   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:37.228766   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:36.367209   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:09:36.367654   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:09:39.728035   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:41.729148   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:43.729240   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:46.228006   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:48.228134   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:46.367945   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:09:46.368128   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:09:50.228455   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:52.228646   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:54.229158   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:56.727712   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:09:58.728522   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:10:00.728964   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:10:02.729909   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:10:05.227781   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:10:07.228729   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:10:06.368912   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:10:06.369182   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:10:09.728977   61070 pod_ready.go:103] pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:10:10.222284   61070 pod_ready.go:82] duration metric: took 4m0.000274516s for pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace to be "Ready" ...
	E0924 01:10:10.222354   61070 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-7gbnr" in "kube-system" namespace to be "Ready" (will not retry!)
	I0924 01:10:10.222381   61070 pod_ready.go:39] duration metric: took 4m12.043944079s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:10:10.222412   61070 kubeadm.go:597] duration metric: took 4m56.454037737s to restartPrimaryControlPlane
	W0924 01:10:10.222488   61070 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0924 01:10:10.222536   61070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0924 01:10:36.533302   61070 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.310734731s)
	I0924 01:10:36.533377   61070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 01:10:36.556961   61070 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 01:10:36.568298   61070 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 01:10:36.584098   61070 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 01:10:36.584121   61070 kubeadm.go:157] found existing configuration files:
	
	I0924 01:10:36.584178   61070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 01:10:36.594153   61070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 01:10:36.594218   61070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 01:10:36.612646   61070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 01:10:36.626433   61070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 01:10:36.626506   61070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 01:10:36.636161   61070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 01:10:36.654017   61070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 01:10:36.654075   61070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 01:10:36.663760   61070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 01:10:36.673737   61070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 01:10:36.673799   61070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
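The four grep/rm pairs above follow one pattern: keep a kubeconfig only if it already points at the expected control-plane endpoint. A condensed sketch of that cleanup:

for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
    || sudo rm -f "/etc/kubernetes/$f"
done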
	I0924 01:10:36.684005   61070 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 01:10:36.731568   61070 kubeadm.go:310] W0924 01:10:36.713557    3094 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 01:10:36.733592   61070 kubeadm.go:310] W0924 01:10:36.715675    3094 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 01:10:36.850767   61070 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 01:10:45.349295   61070 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0924 01:10:45.349386   61070 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 01:10:45.349486   61070 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 01:10:45.349600   61070 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 01:10:45.349733   61070 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0924 01:10:45.349836   61070 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 01:10:45.351746   61070 out.go:235]   - Generating certificates and keys ...
	I0924 01:10:45.351843   61070 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 01:10:45.351939   61070 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 01:10:45.352055   61070 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 01:10:45.352160   61070 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 01:10:45.352228   61070 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 01:10:45.352297   61070 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 01:10:45.352392   61070 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 01:10:45.352477   61070 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 01:10:45.352551   61070 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 01:10:45.352664   61070 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 01:10:45.352734   61070 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 01:10:45.352904   61070 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 01:10:45.352956   61070 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 01:10:45.353038   61070 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0924 01:10:45.353127   61070 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 01:10:45.353209   61070 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 01:10:45.353300   61070 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 01:10:45.353372   61070 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 01:10:45.353446   61070 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 01:10:45.354948   61070 out.go:235]   - Booting up control plane ...
	I0924 01:10:45.355023   61070 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 01:10:45.355090   61070 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 01:10:45.355144   61070 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 01:10:45.355226   61070 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 01:10:45.355310   61070 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 01:10:45.355356   61070 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 01:10:45.355476   61070 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0924 01:10:45.355585   61070 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0924 01:10:45.355658   61070 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001537437s
	I0924 01:10:45.355728   61070 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0924 01:10:45.355807   61070 kubeadm.go:310] [api-check] The API server is healthy after 5.002387582s
	I0924 01:10:45.355955   61070 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0924 01:10:45.356129   61070 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0924 01:10:45.356230   61070 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0924 01:10:45.356516   61070 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-674057 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0924 01:10:45.356571   61070 kubeadm.go:310] [bootstrap-token] Using token: g2v97n.iz49hjb4wh5cfkiq
	I0924 01:10:45.358203   61070 out.go:235]   - Configuring RBAC rules ...
	I0924 01:10:45.358333   61070 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0924 01:10:45.358439   61070 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0924 01:10:45.358562   61070 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0924 01:10:45.358667   61070 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0924 01:10:45.358773   61070 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0924 01:10:45.358851   61070 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0924 01:10:45.358997   61070 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0924 01:10:45.359059   61070 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0924 01:10:45.359101   61070 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0924 01:10:45.359111   61070 kubeadm.go:310] 
	I0924 01:10:45.359164   61070 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0924 01:10:45.359171   61070 kubeadm.go:310] 
	I0924 01:10:45.359263   61070 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0924 01:10:45.359280   61070 kubeadm.go:310] 
	I0924 01:10:45.359309   61070 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0924 01:10:45.359387   61070 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0924 01:10:45.359458   61070 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0924 01:10:45.359471   61070 kubeadm.go:310] 
	I0924 01:10:45.359559   61070 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0924 01:10:45.359568   61070 kubeadm.go:310] 
	I0924 01:10:45.359613   61070 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0924 01:10:45.359619   61070 kubeadm.go:310] 
	I0924 01:10:45.359665   61070 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0924 01:10:45.359728   61070 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0924 01:10:45.359800   61070 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0924 01:10:45.359813   61070 kubeadm.go:310] 
	I0924 01:10:45.359879   61070 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0924 01:10:45.359978   61070 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0924 01:10:45.359996   61070 kubeadm.go:310] 
	I0924 01:10:45.360101   61070 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token g2v97n.iz49hjb4wh5cfkiq \
	I0924 01:10:45.360218   61070 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2d4dd9f37d73db6513005d0e16c5a35989d3b102eed8cac0bf67b360d3396aa8 \
	I0924 01:10:45.360251   61070 kubeadm.go:310] 	--control-plane 
	I0924 01:10:45.360258   61070 kubeadm.go:310] 
	I0924 01:10:45.360453   61070 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0924 01:10:45.360481   61070 kubeadm.go:310] 
	I0924 01:10:45.360595   61070 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token g2v97n.iz49hjb4wh5cfkiq \
	I0924 01:10:45.360693   61070 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2d4dd9f37d73db6513005d0e16c5a35989d3b102eed8cac0bf67b360d3396aa8 
	I0924 01:10:45.360706   61070 cni.go:84] Creating CNI manager for ""
	I0924 01:10:45.360713   61070 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0924 01:10:45.362153   61070 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 01:10:46.371109   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:10:46.371309   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:10:46.371318   61989 kubeadm.go:310] 
	I0924 01:10:46.371352   61989 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0924 01:10:46.371455   61989 kubeadm.go:310] 		timed out waiting for the condition
	I0924 01:10:46.371490   61989 kubeadm.go:310] 
	I0924 01:10:46.371546   61989 kubeadm.go:310] 	This error is likely caused by:
	I0924 01:10:46.371592   61989 kubeadm.go:310] 		- The kubelet is not running
	I0924 01:10:46.371734   61989 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0924 01:10:46.371751   61989 kubeadm.go:310] 
	I0924 01:10:46.371888   61989 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0924 01:10:46.371936   61989 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0924 01:10:46.371978   61989 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0924 01:10:46.371988   61989 kubeadm.go:310] 
	I0924 01:10:46.372124   61989 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0924 01:10:46.372253   61989 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0924 01:10:46.372262   61989 kubeadm.go:310] 
	I0924 01:10:46.372442   61989 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0924 01:10:46.372578   61989 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0924 01:10:46.372680   61989 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0924 01:10:46.372756   61989 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0924 01:10:46.372765   61989 kubeadm.go:310] 
	I0924 01:10:46.373578   61989 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 01:10:46.373675   61989 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0924 01:10:46.373790   61989 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0924 01:10:46.373938   61989 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0924 01:10:46.373987   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0924 01:10:46.834432   61989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 01:10:46.851214   61989 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 01:10:46.862648   61989 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 01:10:46.862675   61989 kubeadm.go:157] found existing configuration files:
	
	I0924 01:10:46.862733   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 01:10:46.873005   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 01:10:46.873073   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 01:10:46.884007   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 01:10:46.893944   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 01:10:46.894016   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 01:10:46.905036   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 01:10:46.914953   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 01:10:46.915024   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 01:10:46.924881   61989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 01:10:46.935132   61989 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 01:10:46.935192   61989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 01:10:46.945837   61989 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 01:10:47.018713   61989 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0924 01:10:47.018861   61989 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 01:10:47.159920   61989 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 01:10:47.160042   61989 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 01:10:47.160168   61989 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0924 01:10:47.349360   61989 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 01:10:47.351645   61989 out.go:235]   - Generating certificates and keys ...
	I0924 01:10:47.351763   61989 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 01:10:47.351918   61989 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 01:10:47.352040   61989 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 01:10:47.352118   61989 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 01:10:47.352205   61989 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 01:10:47.352298   61989 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 01:10:47.352401   61989 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 01:10:47.352481   61989 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 01:10:47.352574   61989 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 01:10:47.352662   61989 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 01:10:47.352705   61989 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 01:10:47.352767   61989 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 01:10:47.467301   61989 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 01:10:47.622085   61989 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 01:10:47.726807   61989 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 01:10:47.951249   61989 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 01:10:47.973392   61989 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 01:10:47.974396   61989 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 01:10:47.974440   61989 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 01:10:48.127629   61989 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 01:10:45.363348   61070 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 01:10:45.374505   61070 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
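The 496-byte conflist itself is not shown in the log; the sketch below is only an illustrative bridge configuration of the same shape (field values, including the 10.244.0.0/16 subnet, are assumptions, not the exact file minikube writes):

sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF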
	I0924 01:10:45.391838   61070 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 01:10:45.391947   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:45.391999   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-674057 minikube.k8s.io/updated_at=2024_09_24T01_10_45_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c minikube.k8s.io/name=no-preload-674057 minikube.k8s.io/primary=true
	I0924 01:10:45.583482   61070 ops.go:34] apiserver oom_adj: -16
	I0924 01:10:45.583498   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:46.083831   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:46.583990   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:47.084184   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:47.583925   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:48.083775   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:48.583654   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:49.084305   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:49.584636   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:50.084620   61070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 01:10:50.226320   61070 kubeadm.go:1113] duration metric: took 4.834429832s to wait for elevateKubeSystemPrivileges
	I0924 01:10:50.226363   61070 kubeadm.go:394] duration metric: took 5m36.514145334s to StartCluster
	I0924 01:10:50.226386   61070 settings.go:142] acquiring lock: {Name:mk2498060bfcc1414033a486e76a223de88ed325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:10:50.226480   61070 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 01:10:50.229196   61070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/kubeconfig: {Name:mk3b29105ccc2e600682c419fcfd45ce5d22b5fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:10:50.229530   61070 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.161 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0924 01:10:50.229600   61070 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0924 01:10:50.229703   61070 addons.go:69] Setting storage-provisioner=true in profile "no-preload-674057"
	I0924 01:10:50.229725   61070 addons.go:234] Setting addon storage-provisioner=true in "no-preload-674057"
	W0924 01:10:50.229733   61070 addons.go:243] addon storage-provisioner should already be in state true
	I0924 01:10:50.229735   61070 addons.go:69] Setting default-storageclass=true in profile "no-preload-674057"
	I0924 01:10:50.229756   61070 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-674057"
	I0924 01:10:50.229764   61070 host.go:66] Checking if "no-preload-674057" exists ...
	I0924 01:10:50.229789   61070 config.go:182] Loaded profile config "no-preload-674057": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 01:10:50.229781   61070 addons.go:69] Setting metrics-server=true in profile "no-preload-674057"
	I0924 01:10:50.229847   61070 addons.go:234] Setting addon metrics-server=true in "no-preload-674057"
	W0924 01:10:50.229855   61070 addons.go:243] addon metrics-server should already be in state true
	I0924 01:10:50.229871   61070 host.go:66] Checking if "no-preload-674057" exists ...
	I0924 01:10:50.230228   61070 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:10:50.230268   61070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:10:50.230320   61070 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:10:50.230351   61070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:10:50.230355   61070 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:10:50.230389   61070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:10:50.231531   61070 out.go:177] * Verifying Kubernetes components...
	I0924 01:10:50.233506   61070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:10:50.252485   61070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36253
	I0924 01:10:50.252844   61070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34399
	I0924 01:10:50.253068   61070 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:10:50.253217   61070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45253
	I0924 01:10:50.253653   61070 main.go:141] libmachine: Using API Version  1
	I0924 01:10:50.253676   61070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:10:50.253705   61070 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:10:50.254050   61070 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:10:50.254203   61070 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:10:50.254236   61070 main.go:141] libmachine: Using API Version  1
	I0924 01:10:50.254250   61070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:10:50.254591   61070 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:10:50.254814   61070 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:10:50.254829   61070 main.go:141] libmachine: Using API Version  1
	I0924 01:10:50.254851   61070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:10:50.254864   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetState
	I0924 01:10:50.254984   61070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:10:50.255389   61070 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:10:50.255983   61070 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:10:50.256028   61070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:10:50.258757   61070 addons.go:234] Setting addon default-storageclass=true in "no-preload-674057"
	W0924 01:10:50.258781   61070 addons.go:243] addon default-storageclass should already be in state true
	I0924 01:10:50.258861   61070 host.go:66] Checking if "no-preload-674057" exists ...
	I0924 01:10:50.259186   61070 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:10:50.259237   61070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:10:50.276636   61070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44681
	I0924 01:10:50.276806   61070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45851
	I0924 01:10:50.277196   61070 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:10:50.277312   61070 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:10:50.277771   61070 main.go:141] libmachine: Using API Version  1
	I0924 01:10:50.277795   61070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:10:50.278022   61070 main.go:141] libmachine: Using API Version  1
	I0924 01:10:50.278044   61070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:10:50.278213   61070 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:10:50.278380   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetState
	I0924 01:10:50.278485   61070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39655
	I0924 01:10:50.278806   61070 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:10:50.278877   61070 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:10:50.279106   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetState
	I0924 01:10:50.279244   61070 main.go:141] libmachine: Using API Version  1
	I0924 01:10:50.279260   61070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:10:50.279668   61070 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:10:50.280215   61070 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19696-7623/.minikube/bin/docker-machine-driver-kvm2
	I0924 01:10:50.280263   61070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 01:10:50.280315   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:10:50.281796   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:10:50.282123   61070 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:10:50.283561   61070 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0924 01:10:48.129312   61989 out.go:235]   - Booting up control plane ...
	I0924 01:10:48.129446   61989 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 01:10:48.139821   61989 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 01:10:48.143120   61989 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 01:10:48.144038   61989 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 01:10:48.146275   61989 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0924 01:10:50.283656   61070 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 01:10:50.283674   61070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 01:10:50.283688   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:10:50.284778   61070 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0924 01:10:50.284793   61070 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0924 01:10:50.284811   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:10:50.288106   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:10:50.288477   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:10:50.288498   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:10:50.288524   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:10:50.288679   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:10:50.288867   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:10:50.289019   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:10:50.289185   61070 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/no-preload-674057/id_rsa Username:docker}
	I0924 01:10:50.289309   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:10:50.289338   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:10:50.289613   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:10:50.289773   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:10:50.289938   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:10:50.290073   61070 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/no-preload-674057/id_rsa Username:docker}
	I0924 01:10:50.323722   61070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38397
	I0924 01:10:50.324221   61070 main.go:141] libmachine: () Calling .GetVersion
	I0924 01:10:50.324873   61070 main.go:141] libmachine: Using API Version  1
	I0924 01:10:50.324901   61070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 01:10:50.325334   61070 main.go:141] libmachine: () Calling .GetMachineName
	I0924 01:10:50.325572   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetState
	I0924 01:10:50.327779   61070 main.go:141] libmachine: (no-preload-674057) Calling .DriverName
	I0924 01:10:50.328071   61070 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 01:10:50.328092   61070 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 01:10:50.328119   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHHostname
	I0924 01:10:50.331721   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:10:50.331988   61070 main.go:141] libmachine: (no-preload-674057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:7a:1a", ip: ""} in network mk-no-preload-674057: {Iface:virbr4 ExpiryTime:2024-09-24 02:04:48 +0000 UTC Type:0 Mac:52:54:00:01:7a:1a Iaid: IPaddr:192.168.50.161 Prefix:24 Hostname:no-preload-674057 Clientid:01:52:54:00:01:7a:1a}
	I0924 01:10:50.332022   61070 main.go:141] libmachine: (no-preload-674057) DBG | domain no-preload-674057 has defined IP address 192.168.50.161 and MAC address 52:54:00:01:7a:1a in network mk-no-preload-674057
	I0924 01:10:50.332209   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHPort
	I0924 01:10:50.332455   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHKeyPath
	I0924 01:10:50.332658   61070 main.go:141] libmachine: (no-preload-674057) Calling .GetSSHUsername
	I0924 01:10:50.332837   61070 sshutil.go:53] new ssh client: &{IP:192.168.50.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/no-preload-674057/id_rsa Username:docker}
	I0924 01:10:50.471507   61070 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 01:10:50.502289   61070 node_ready.go:35] waiting up to 6m0s for node "no-preload-674057" to be "Ready" ...
	I0924 01:10:50.522752   61070 node_ready.go:49] node "no-preload-674057" has status "Ready":"True"
	I0924 01:10:50.522784   61070 node_ready.go:38] duration metric: took 20.46398ms for node "no-preload-674057" to be "Ready" ...
	I0924 01:10:50.522797   61070 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:10:50.537297   61070 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nqwzr" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:50.576703   61070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 01:10:50.638655   61070 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0924 01:10:50.638679   61070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0924 01:10:50.673535   61070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 01:10:50.691443   61070 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0924 01:10:50.691475   61070 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0924 01:10:50.791572   61070 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 01:10:50.791596   61070 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0924 01:10:50.887143   61070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 01:10:51.535179   61070 main.go:141] libmachine: Making call to close driver server
	I0924 01:10:51.535211   61070 main.go:141] libmachine: (no-preload-674057) Calling .Close
	I0924 01:10:51.535247   61070 main.go:141] libmachine: Making call to close driver server
	I0924 01:10:51.535269   61070 main.go:141] libmachine: (no-preload-674057) Calling .Close
	I0924 01:10:51.535531   61070 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:10:51.535553   61070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:10:51.535563   61070 main.go:141] libmachine: Making call to close driver server
	I0924 01:10:51.535571   61070 main.go:141] libmachine: (no-preload-674057) Calling .Close
	I0924 01:10:51.535572   61070 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:10:51.535584   61070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:10:51.535591   61070 main.go:141] libmachine: Making call to close driver server
	I0924 01:10:51.535598   61070 main.go:141] libmachine: (no-preload-674057) Calling .Close
	I0924 01:10:51.535809   61070 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:10:51.535830   61070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:10:51.536069   61070 main.go:141] libmachine: (no-preload-674057) DBG | Closing plugin on server side
	I0924 01:10:51.536078   61070 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:10:51.536088   61070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:10:51.563511   61070 main.go:141] libmachine: Making call to close driver server
	I0924 01:10:51.563537   61070 main.go:141] libmachine: (no-preload-674057) Calling .Close
	I0924 01:10:51.563856   61070 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:10:51.563880   61070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:10:51.800860   61070 main.go:141] libmachine: Making call to close driver server
	I0924 01:10:51.800889   61070 main.go:141] libmachine: (no-preload-674057) Calling .Close
	I0924 01:10:51.801192   61070 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:10:51.801211   61070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:10:51.801224   61070 main.go:141] libmachine: Making call to close driver server
	I0924 01:10:51.801233   61070 main.go:141] libmachine: (no-preload-674057) Calling .Close
	I0924 01:10:51.801527   61070 main.go:141] libmachine: (no-preload-674057) DBG | Closing plugin on server side
	I0924 01:10:51.801559   61070 main.go:141] libmachine: Successfully made call to close driver server
	I0924 01:10:51.801567   61070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0924 01:10:51.801577   61070 addons.go:475] Verifying addon metrics-server=true in "no-preload-674057"
	I0924 01:10:51.803735   61070 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0924 01:10:51.805581   61070 addons.go:510] duration metric: took 1.575985263s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0924 01:10:52.544028   61070 pod_ready.go:103] pod "coredns-7c65d6cfc9-nqwzr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:10:53.564056   61070 pod_ready.go:93] pod "coredns-7c65d6cfc9-nqwzr" in "kube-system" namespace has status "Ready":"True"
	I0924 01:10:53.564089   61070 pod_ready.go:82] duration metric: took 3.026767371s for pod "coredns-7c65d6cfc9-nqwzr" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:53.564102   61070 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-x7cv6" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:53.573039   61070 pod_ready.go:93] pod "coredns-7c65d6cfc9-x7cv6" in "kube-system" namespace has status "Ready":"True"
	I0924 01:10:53.573076   61070 pod_ready.go:82] duration metric: took 8.965144ms for pod "coredns-7c65d6cfc9-x7cv6" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:53.573090   61070 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.081080   61070 pod_ready.go:93] pod "etcd-no-preload-674057" in "kube-system" namespace has status "Ready":"True"
	I0924 01:10:54.081105   61070 pod_ready.go:82] duration metric: took 508.007072ms for pod "etcd-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.081115   61070 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.087054   61070 pod_ready.go:93] pod "kube-apiserver-no-preload-674057" in "kube-system" namespace has status "Ready":"True"
	I0924 01:10:54.087079   61070 pod_ready.go:82] duration metric: took 5.957569ms for pod "kube-apiserver-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.087091   61070 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.094018   61070 pod_ready.go:93] pod "kube-controller-manager-no-preload-674057" in "kube-system" namespace has status "Ready":"True"
	I0924 01:10:54.094043   61070 pod_ready.go:82] duration metric: took 6.944048ms for pod "kube-controller-manager-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.094053   61070 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-k54d7" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.341307   61070 pod_ready.go:93] pod "kube-proxy-k54d7" in "kube-system" namespace has status "Ready":"True"
	I0924 01:10:54.341326   61070 pod_ready.go:82] duration metric: took 247.267987ms for pod "kube-proxy-k54d7" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.341335   61070 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.741702   61070 pod_ready.go:93] pod "kube-scheduler-no-preload-674057" in "kube-system" namespace has status "Ready":"True"
	I0924 01:10:54.741732   61070 pod_ready.go:82] duration metric: took 400.389532ms for pod "kube-scheduler-no-preload-674057" in "kube-system" namespace to be "Ready" ...
	I0924 01:10:54.741742   61070 pod_ready.go:39] duration metric: took 4.218931841s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:10:54.741759   61070 api_server.go:52] waiting for apiserver process to appear ...
	I0924 01:10:54.741827   61070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:10:54.758692   61070 api_server.go:72] duration metric: took 4.529120201s to wait for apiserver process to appear ...
	I0924 01:10:54.758723   61070 api_server.go:88] waiting for apiserver healthz status ...
	I0924 01:10:54.758744   61070 api_server.go:253] Checking apiserver healthz at https://192.168.50.161:8443/healthz ...
	I0924 01:10:54.764587   61070 api_server.go:279] https://192.168.50.161:8443/healthz returned 200:
	ok
	I0924 01:10:54.765620   61070 api_server.go:141] control plane version: v1.31.1
	I0924 01:10:54.765639   61070 api_server.go:131] duration metric: took 6.909845ms to wait for apiserver health ...
	I0924 01:10:54.765646   61070 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 01:10:54.945080   61070 system_pods.go:59] 9 kube-system pods found
	I0924 01:10:54.945121   61070 system_pods.go:61] "coredns-7c65d6cfc9-nqwzr" [9773e4bf-9848-47d8-b87b-897fbdd22d42] Running
	I0924 01:10:54.945128   61070 system_pods.go:61] "coredns-7c65d6cfc9-x7cv6" [9e96941a-b045-48e2-be06-50cc29f8ec25] Running
	I0924 01:10:54.945134   61070 system_pods.go:61] "etcd-no-preload-674057" [3ed2a57d-06a2-4ee2-9bc0-9042c1a88d09] Running
	I0924 01:10:54.945140   61070 system_pods.go:61] "kube-apiserver-no-preload-674057" [e915c4f9-a44e-4d36-9bf4-033de2a512f2] Running
	I0924 01:10:54.945145   61070 system_pods.go:61] "kube-controller-manager-no-preload-674057" [71492ec7-1fd8-49a3-973d-b62141c7b768] Running
	I0924 01:10:54.945150   61070 system_pods.go:61] "kube-proxy-k54d7" [b67ac411-52b5-4d58-9db3-d2d92b63a21f] Running
	I0924 01:10:54.945161   61070 system_pods.go:61] "kube-scheduler-no-preload-674057" [927b2a09-4fb1-499c-a2e6-6185a88facdd] Running
	I0924 01:10:54.945172   61070 system_pods.go:61] "metrics-server-6867b74b74-w5j2x" [57fd868f-ab5c-495a-869a-45e8f81f4014] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:10:54.945180   61070 system_pods.go:61] "storage-provisioner" [341fd764-a3bd-4d28-bc6a-6ec9fa8a5347] Running
	I0924 01:10:54.945191   61070 system_pods.go:74] duration metric: took 179.539019ms to wait for pod list to return data ...
	I0924 01:10:54.945205   61070 default_sa.go:34] waiting for default service account to be created ...
	I0924 01:10:55.141944   61070 default_sa.go:45] found service account: "default"
	I0924 01:10:55.141973   61070 default_sa.go:55] duration metric: took 196.760922ms for default service account to be created ...
	I0924 01:10:55.141984   61070 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 01:10:55.344235   61070 system_pods.go:86] 9 kube-system pods found
	I0924 01:10:55.344273   61070 system_pods.go:89] "coredns-7c65d6cfc9-nqwzr" [9773e4bf-9848-47d8-b87b-897fbdd22d42] Running
	I0924 01:10:55.344282   61070 system_pods.go:89] "coredns-7c65d6cfc9-x7cv6" [9e96941a-b045-48e2-be06-50cc29f8ec25] Running
	I0924 01:10:55.344288   61070 system_pods.go:89] "etcd-no-preload-674057" [3ed2a57d-06a2-4ee2-9bc0-9042c1a88d09] Running
	I0924 01:10:55.344293   61070 system_pods.go:89] "kube-apiserver-no-preload-674057" [e915c4f9-a44e-4d36-9bf4-033de2a512f2] Running
	I0924 01:10:55.344297   61070 system_pods.go:89] "kube-controller-manager-no-preload-674057" [71492ec7-1fd8-49a3-973d-b62141c7b768] Running
	I0924 01:10:55.344301   61070 system_pods.go:89] "kube-proxy-k54d7" [b67ac411-52b5-4d58-9db3-d2d92b63a21f] Running
	I0924 01:10:55.344304   61070 system_pods.go:89] "kube-scheduler-no-preload-674057" [927b2a09-4fb1-499c-a2e6-6185a88facdd] Running
	I0924 01:10:55.344310   61070 system_pods.go:89] "metrics-server-6867b74b74-w5j2x" [57fd868f-ab5c-495a-869a-45e8f81f4014] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:10:55.344315   61070 system_pods.go:89] "storage-provisioner" [341fd764-a3bd-4d28-bc6a-6ec9fa8a5347] Running
	I0924 01:10:55.344324   61070 system_pods.go:126] duration metric: took 202.334823ms to wait for k8s-apps to be running ...
	I0924 01:10:55.344361   61070 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 01:10:55.344406   61070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 01:10:55.361050   61070 system_svc.go:56] duration metric: took 16.6812ms WaitForService to wait for kubelet
	I0924 01:10:55.361082   61070 kubeadm.go:582] duration metric: took 5.13151522s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 01:10:55.361104   61070 node_conditions.go:102] verifying NodePressure condition ...
	I0924 01:10:55.541766   61070 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 01:10:55.541799   61070 node_conditions.go:123] node cpu capacity is 2
	I0924 01:10:55.541812   61070 node_conditions.go:105] duration metric: took 180.702708ms to run NodePressure ...
	I0924 01:10:55.541826   61070 start.go:241] waiting for startup goroutines ...
	I0924 01:10:55.541837   61070 start.go:246] waiting for cluster config update ...
	I0924 01:10:55.541850   61070 start.go:255] writing updated cluster config ...
	I0924 01:10:55.542100   61070 ssh_runner.go:195] Run: rm -f paused
	I0924 01:10:55.590629   61070 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0924 01:10:55.592850   61070 out.go:177] * Done! kubectl is now configured to use "no-preload-674057" cluster and "default" namespace by default
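At this point the no-preload-674057 cluster is up with the storage-provisioner, default-storageclass, and metrics-server addons enabled and all system pods reported Ready. A minimal sketch of how the same state could be confirmed by hand with kubectl, assuming the context name shown in the log above and that the metrics-server addon registers its usual APIService:

	# Check the kube-system pods and the APIService installed by metrics-apiservice.yaml
	kubectl --context no-preload-674057 -n kube-system get pods
	kubectl --context no-preload-674057 get apiservices | grep metrics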
	I0924 01:11:28.148929   61989 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0924 01:11:28.149086   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:11:28.149360   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:11:33.150102   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:11:33.150283   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:11:43.151281   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:11:43.151540   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:12:03.152338   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:12:03.152562   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:12:43.151221   61989 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0924 01:12:43.151503   61989 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0924 01:12:43.151532   61989 kubeadm.go:310] 
	I0924 01:12:43.151585   61989 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0924 01:12:43.151645   61989 kubeadm.go:310] 		timed out waiting for the condition
	I0924 01:12:43.151655   61989 kubeadm.go:310] 
	I0924 01:12:43.151729   61989 kubeadm.go:310] 	This error is likely caused by:
	I0924 01:12:43.151779   61989 kubeadm.go:310] 		- The kubelet is not running
	I0924 01:12:43.151940   61989 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0924 01:12:43.151954   61989 kubeadm.go:310] 
	I0924 01:12:43.152095   61989 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0924 01:12:43.152154   61989 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0924 01:12:43.152201   61989 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0924 01:12:43.152207   61989 kubeadm.go:310] 
	I0924 01:12:43.152294   61989 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0924 01:12:43.152411   61989 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0924 01:12:43.152424   61989 kubeadm.go:310] 
	I0924 01:12:43.152565   61989 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0924 01:12:43.152653   61989 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0924 01:12:43.152718   61989 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0924 01:12:43.152794   61989 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0924 01:12:43.152802   61989 kubeadm.go:310] 
	I0924 01:12:43.153600   61989 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 01:12:43.153699   61989 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0924 01:12:43.153757   61989 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0924 01:12:43.153808   61989 kubeadm.go:394] duration metric: took 7m57.944266289s to StartCluster
	I0924 01:12:43.153845   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:12:43.153894   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:12:43.199866   61989 cri.go:89] found id: ""
	I0924 01:12:43.199896   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.199908   61989 logs.go:278] No container was found matching "kube-apiserver"
	I0924 01:12:43.199916   61989 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0924 01:12:43.199975   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:12:43.235387   61989 cri.go:89] found id: ""
	I0924 01:12:43.235420   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.235432   61989 logs.go:278] No container was found matching "etcd"
	I0924 01:12:43.235441   61989 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0924 01:12:43.235513   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:12:43.271255   61989 cri.go:89] found id: ""
	I0924 01:12:43.271290   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.271303   61989 logs.go:278] No container was found matching "coredns"
	I0924 01:12:43.271312   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:12:43.271380   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:12:43.305842   61989 cri.go:89] found id: ""
	I0924 01:12:43.305870   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.305882   61989 logs.go:278] No container was found matching "kube-scheduler"
	I0924 01:12:43.305891   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:12:43.305952   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:12:43.341956   61989 cri.go:89] found id: ""
	I0924 01:12:43.341983   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.342005   61989 logs.go:278] No container was found matching "kube-proxy"
	I0924 01:12:43.342013   61989 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:12:43.342093   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:12:43.376362   61989 cri.go:89] found id: ""
	I0924 01:12:43.376399   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.376421   61989 logs.go:278] No container was found matching "kube-controller-manager"
	I0924 01:12:43.376431   61989 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0924 01:12:43.376487   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:12:43.409351   61989 cri.go:89] found id: ""
	I0924 01:12:43.409378   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.409387   61989 logs.go:278] No container was found matching "kindnet"
	I0924 01:12:43.409392   61989 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:12:43.409459   61989 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:12:43.442446   61989 cri.go:89] found id: ""
	I0924 01:12:43.442479   61989 logs.go:276] 0 containers: []
	W0924 01:12:43.442487   61989 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0924 01:12:43.442497   61989 logs.go:123] Gathering logs for kubelet ...
	I0924 01:12:43.442507   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 01:12:43.498980   61989 logs.go:123] Gathering logs for dmesg ...
	I0924 01:12:43.499020   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:12:43.520090   61989 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:12:43.520120   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0924 01:12:43.612212   61989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0924 01:12:43.612242   61989 logs.go:123] Gathering logs for CRI-O ...
	I0924 01:12:43.612255   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0924 01:12:43.727355   61989 logs.go:123] Gathering logs for container status ...
	I0924 01:12:43.727395   61989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0924 01:12:43.770163   61989 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0924 01:12:43.770217   61989 out.go:270] * 
	W0924 01:12:43.770282   61989 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0924 01:12:43.770297   61989 out.go:270] * 
	W0924 01:12:43.771298   61989 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 01:12:43.775708   61989 out.go:201] 
	W0924 01:12:43.777139   61989 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0924 01:12:43.777186   61989 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0924 01:12:43.777214   61989 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0924 01:12:43.779580   61989 out.go:201] 
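The old-k8s-version start fails because the kubelet never becomes healthy within kubeadm's wait-control-plane window, and the suggestion above points at the kubelet cgroup driver. A hedged sketch of the follow-up this output itself recommends, reusing the commands quoted in the error text and the profile name from the surrounding log:

	# On the node (e.g. via `minikube ssh -p old-k8s-version-171598`), check why the kubelet keeps exiting
	systemctl status kubelet
	journalctl -xeu kubelet
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	# Retry the start with the cgroup driver named in the suggestion above
	minikube start -p old-k8s-version-171598 --extra-config=kubelet.cgroup-driver=systemd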
	
	
	==> CRI-O <==
	Sep 24 01:24:37 old-k8s-version-171598 crio[631]: time="2024-09-24 01:24:37.493698801Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727141077493664940,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a1ac347c-99b0-410c-8c2d-fa25092aaed7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:24:37 old-k8s-version-171598 crio[631]: time="2024-09-24 01:24:37.494229989Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ed5d15cc-5795-49f5-a627-54246b50d890 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:24:37 old-k8s-version-171598 crio[631]: time="2024-09-24 01:24:37.494284077Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ed5d15cc-5795-49f5-a627-54246b50d890 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:24:37 old-k8s-version-171598 crio[631]: time="2024-09-24 01:24:37.494320225Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ed5d15cc-5795-49f5-a627-54246b50d890 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:24:37 old-k8s-version-171598 crio[631]: time="2024-09-24 01:24:37.525864943Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a7c2452f-3f6b-4dfe-8618-50fed77c718a name=/runtime.v1.RuntimeService/Version
	Sep 24 01:24:37 old-k8s-version-171598 crio[631]: time="2024-09-24 01:24:37.525948127Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a7c2452f-3f6b-4dfe-8618-50fed77c718a name=/runtime.v1.RuntimeService/Version
	Sep 24 01:24:37 old-k8s-version-171598 crio[631]: time="2024-09-24 01:24:37.527079528Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=48dc45e5-2ef1-4451-8f0f-dccf50a7e80f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:24:37 old-k8s-version-171598 crio[631]: time="2024-09-24 01:24:37.527535057Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727141077527508172,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=48dc45e5-2ef1-4451-8f0f-dccf50a7e80f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:24:37 old-k8s-version-171598 crio[631]: time="2024-09-24 01:24:37.528151427Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7e2a07ea-5f85-4b76-9f23-c70051ee5940 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:24:37 old-k8s-version-171598 crio[631]: time="2024-09-24 01:24:37.528237711Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7e2a07ea-5f85-4b76-9f23-c70051ee5940 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:24:37 old-k8s-version-171598 crio[631]: time="2024-09-24 01:24:37.528277410Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=7e2a07ea-5f85-4b76-9f23-c70051ee5940 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:24:37 old-k8s-version-171598 crio[631]: time="2024-09-24 01:24:37.562100749Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9d11774a-9880-43e9-80cc-99e1c0f4cbc1 name=/runtime.v1.RuntimeService/Version
	Sep 24 01:24:37 old-k8s-version-171598 crio[631]: time="2024-09-24 01:24:37.562185866Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9d11774a-9880-43e9-80cc-99e1c0f4cbc1 name=/runtime.v1.RuntimeService/Version
	Sep 24 01:24:37 old-k8s-version-171598 crio[631]: time="2024-09-24 01:24:37.563171141Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9de8d2be-697e-47d9-9686-98fdb883becd name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:24:37 old-k8s-version-171598 crio[631]: time="2024-09-24 01:24:37.563645411Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727141077563620852,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9de8d2be-697e-47d9-9686-98fdb883becd name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:24:37 old-k8s-version-171598 crio[631]: time="2024-09-24 01:24:37.564341115Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f354336e-861f-4995-bb86-4fefea6fc59a name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:24:37 old-k8s-version-171598 crio[631]: time="2024-09-24 01:24:37.564391715Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f354336e-861f-4995-bb86-4fefea6fc59a name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:24:37 old-k8s-version-171598 crio[631]: time="2024-09-24 01:24:37.564425946Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f354336e-861f-4995-bb86-4fefea6fc59a name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:24:37 old-k8s-version-171598 crio[631]: time="2024-09-24 01:24:37.594417294Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3593f978-a07b-4f9d-a21e-c3b8079612f9 name=/runtime.v1.RuntimeService/Version
	Sep 24 01:24:37 old-k8s-version-171598 crio[631]: time="2024-09-24 01:24:37.594490315Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3593f978-a07b-4f9d-a21e-c3b8079612f9 name=/runtime.v1.RuntimeService/Version
	Sep 24 01:24:37 old-k8s-version-171598 crio[631]: time="2024-09-24 01:24:37.596027995Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=10d925a7-6e3b-4e0b-a5a4-216c20623fe3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:24:37 old-k8s-version-171598 crio[631]: time="2024-09-24 01:24:37.596422019Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727141077596396527,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=10d925a7-6e3b-4e0b-a5a4-216c20623fe3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 24 01:24:37 old-k8s-version-171598 crio[631]: time="2024-09-24 01:24:37.596954920Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=422dc70f-3b2c-4de8-baff-13c55034ced4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:24:37 old-k8s-version-171598 crio[631]: time="2024-09-24 01:24:37.597038486Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=422dc70f-3b2c-4de8-baff-13c55034ced4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 24 01:24:37 old-k8s-version-171598 crio[631]: time="2024-09-24 01:24:37.597071116Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=422dc70f-3b2c-4de8-baff-13c55034ced4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Sep24 01:04] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051965] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.048547] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.882363] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.935977] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.544938] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.695614] systemd-fstab-generator[557]: Ignoring "noauto" option for root device
	[  +0.066394] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068035] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.210501] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.125361] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.257875] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +6.688915] systemd-fstab-generator[877]: Ignoring "noauto" option for root device
	[  +0.058357] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.792508] systemd-fstab-generator[998]: Ignoring "noauto" option for root device
	[ +11.354084] kauditd_printk_skb: 46 callbacks suppressed
	[Sep24 01:08] systemd-fstab-generator[5046]: Ignoring "noauto" option for root device
	[Sep24 01:10] systemd-fstab-generator[5322]: Ignoring "noauto" option for root device
	[  +0.074932] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 01:24:37 up 20 min,  0 users,  load average: 0.18, 0.09, 0.06
	Linux old-k8s-version-171598 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 24 01:24:36 old-k8s-version-171598 kubelet[6847]:         /usr/local/go/src/net/dial.go:580 +0x5e5
	Sep 24 01:24:36 old-k8s-version-171598 kubelet[6847]: net.(*sysDialer).dialSerial(0xc000ce0400, 0x4f7fe40, 0xc0001d41e0, 0xc000915c90, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0)
	Sep 24 01:24:36 old-k8s-version-171598 kubelet[6847]:         /usr/local/go/src/net/dial.go:548 +0x152
	Sep 24 01:24:36 old-k8s-version-171598 kubelet[6847]: net.(*Dialer).DialContext(0xc000be3b60, 0x4f7fe00, 0xc000110018, 0x48ab5d6, 0x3, 0xc000c0d7a0, 0x24, 0x0, 0x0, 0x0, ...)
	Sep 24 01:24:36 old-k8s-version-171598 kubelet[6847]:         /usr/local/go/src/net/dial.go:425 +0x6e5
	Sep 24 01:24:36 old-k8s-version-171598 kubelet[6847]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000bfba40, 0x4f7fe00, 0xc000110018, 0x48ab5d6, 0x3, 0xc000c0d7a0, 0x24, 0x60, 0x7f0c57744ca0, 0x118, ...)
	Sep 24 01:24:36 old-k8s-version-171598 kubelet[6847]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Sep 24 01:24:36 old-k8s-version-171598 kubelet[6847]: net/http.(*Transport).dial(0xc000192f00, 0x4f7fe00, 0xc000110018, 0x48ab5d6, 0x3, 0xc000c0d7a0, 0x24, 0x0, 0x0, 0x0, ...)
	Sep 24 01:24:36 old-k8s-version-171598 kubelet[6847]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Sep 24 01:24:36 old-k8s-version-171598 kubelet[6847]: net/http.(*Transport).dialConn(0xc000192f00, 0x4f7fe00, 0xc000110018, 0x0, 0xc000ce5320, 0x5, 0xc000c0d7a0, 0x24, 0x0, 0xc000bf8240, ...)
	Sep 24 01:24:36 old-k8s-version-171598 kubelet[6847]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Sep 24 01:24:36 old-k8s-version-171598 kubelet[6847]: net/http.(*Transport).dialConnFor(0xc000192f00, 0xc0000d4210)
	Sep 24 01:24:36 old-k8s-version-171598 kubelet[6847]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Sep 24 01:24:36 old-k8s-version-171598 kubelet[6847]: created by net/http.(*Transport).queueForDial
	Sep 24 01:24:36 old-k8s-version-171598 kubelet[6847]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Sep 24 01:24:36 old-k8s-version-171598 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Sep 24 01:24:36 old-k8s-version-171598 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Sep 24 01:24:36 old-k8s-version-171598 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 144.
	Sep 24 01:24:36 old-k8s-version-171598 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Sep 24 01:24:36 old-k8s-version-171598 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Sep 24 01:24:36 old-k8s-version-171598 kubelet[6874]: I0924 01:24:36.827078    6874 server.go:416] Version: v1.20.0
	Sep 24 01:24:36 old-k8s-version-171598 kubelet[6874]: I0924 01:24:36.827525    6874 server.go:837] Client rotation is on, will bootstrap in background
	Sep 24 01:24:36 old-k8s-version-171598 kubelet[6874]: I0924 01:24:36.830813    6874 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Sep 24 01:24:36 old-k8s-version-171598 kubelet[6874]: W0924 01:24:36.832274    6874 manager.go:159] Cannot detect current cgroup on cgroup v2
	Sep 24 01:24:36 old-k8s-version-171598 kubelet[6874]: I0924 01:24:36.832301    6874 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-171598 -n old-k8s-version-171598
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-171598 -n old-k8s-version-171598: exit status 2 (239.532551ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-171598" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (168.18s)

                                                
                                    

Test pass (248/316)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 27.33
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.1/json-events 15.36
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.13
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.59
22 TestOffline 82
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 140.17
31 TestAddons/serial/GCPAuth/Namespaces 0.15
35 TestAddons/parallel/InspektorGadget 11.55
38 TestAddons/parallel/CSI 50.49
39 TestAddons/parallel/Headlamp 19.62
40 TestAddons/parallel/CloudSpanner 5.58
41 TestAddons/parallel/LocalPath 55.23
42 TestAddons/parallel/NvidiaDevicePlugin 5.59
43 TestAddons/parallel/Yakd 10.75
44 TestAddons/StoppedEnableDisable 7.55
45 TestCertOptions 70.16
46 TestCertExpiration 255.7
48 TestForceSystemdFlag 70.07
49 TestForceSystemdEnv 43.2
51 TestKVMDriverInstallOrUpdate 4.84
55 TestErrorSpam/setup 37.98
56 TestErrorSpam/start 0.34
57 TestErrorSpam/status 0.72
58 TestErrorSpam/pause 1.56
59 TestErrorSpam/unpause 1.78
60 TestErrorSpam/stop 6.16
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 83.78
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 54.15
67 TestFunctional/serial/KubeContext 0.04
68 TestFunctional/serial/KubectlGetPods 0.08
71 TestFunctional/serial/CacheCmd/cache/add_remote 4.34
72 TestFunctional/serial/CacheCmd/cache/add_local 2.1
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
74 TestFunctional/serial/CacheCmd/cache/list 0.04
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.82
77 TestFunctional/serial/CacheCmd/cache/delete 0.09
78 TestFunctional/serial/MinikubeKubectlCmd 0.1
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
80 TestFunctional/serial/ExtraConfig 31.86
81 TestFunctional/serial/ComponentHealth 0.06
82 TestFunctional/serial/LogsCmd 1.3
83 TestFunctional/serial/LogsFileCmd 1.37
84 TestFunctional/serial/InvalidService 3.88
86 TestFunctional/parallel/ConfigCmd 0.31
87 TestFunctional/parallel/DashboardCmd 14.97
88 TestFunctional/parallel/DryRun 0.31
89 TestFunctional/parallel/InternationalLanguage 0.16
90 TestFunctional/parallel/StatusCmd 1.28
94 TestFunctional/parallel/ServiceCmdConnect 9.74
95 TestFunctional/parallel/AddonsCmd 0.13
96 TestFunctional/parallel/PersistentVolumeClaim 41.57
98 TestFunctional/parallel/SSHCmd 0.44
99 TestFunctional/parallel/CpCmd 1.31
100 TestFunctional/parallel/MySQL 30.52
101 TestFunctional/parallel/FileSync 0.23
102 TestFunctional/parallel/CertSync 1.16
106 TestFunctional/parallel/NodeLabels 0.08
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.48
110 TestFunctional/parallel/License 0.61
111 TestFunctional/parallel/ServiceCmd/DeployApp 10.19
112 TestFunctional/parallel/ProfileCmd/profile_not_create 0.39
113 TestFunctional/parallel/MountCmd/any-port 11.61
114 TestFunctional/parallel/ProfileCmd/profile_list 0.33
115 TestFunctional/parallel/ProfileCmd/profile_json_output 0.37
116 TestFunctional/parallel/ServiceCmd/List 0.47
117 TestFunctional/parallel/ServiceCmd/JSONOutput 0.45
118 TestFunctional/parallel/ServiceCmd/HTTPS 0.36
119 TestFunctional/parallel/ServiceCmd/Format 0.62
120 TestFunctional/parallel/ServiceCmd/URL 0.37
121 TestFunctional/parallel/MountCmd/specific-port 1.79
131 TestFunctional/parallel/MountCmd/VerifyCleanup 1.63
132 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
133 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
134 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
135 TestFunctional/parallel/Version/short 0.05
136 TestFunctional/parallel/Version/components 0.66
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.49
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
141 TestFunctional/parallel/ImageCommands/ImageBuild 4.21
142 TestFunctional/parallel/ImageCommands/Setup 1.81
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.77
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.56
145 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.7
146 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.74
147 TestFunctional/parallel/ImageCommands/ImageRemove 1.79
148 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 4.07
149 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 4.13
150 TestFunctional/delete_echo-server_images 0.03
151 TestFunctional/delete_my-image_image 0.02
152 TestFunctional/delete_minikube_cached_images 0.02
156 TestMultiControlPlane/serial/StartCluster 205.54
157 TestMultiControlPlane/serial/DeployApp 7.11
158 TestMultiControlPlane/serial/PingHostFromPods 1.19
159 TestMultiControlPlane/serial/AddWorkerNode 58.76
160 TestMultiControlPlane/serial/NodeLabels 0.07
161 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.86
162 TestMultiControlPlane/serial/CopyFile 12.82
166 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 4.17
168 TestMultiControlPlane/serial/DeleteSecondaryNode 16.5
169 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.63
171 TestMultiControlPlane/serial/RestartCluster 349.17
172 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.62
173 TestMultiControlPlane/serial/AddSecondaryNode 75.27
174 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.85
178 TestJSONOutput/start/Command 82.35
179 TestJSONOutput/start/Audit 0
181 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/pause/Command 0.71
185 TestJSONOutput/pause/Audit 0
187 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/unpause/Command 0.59
191 TestJSONOutput/unpause/Audit 0
193 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/stop/Command 6.66
197 TestJSONOutput/stop/Audit 0
199 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
201 TestErrorJSONOutput 0.19
206 TestMainNoArgs 0.04
207 TestMinikubeProfile 90.22
210 TestMountStart/serial/StartWithMountFirst 24.44
211 TestMountStart/serial/VerifyMountFirst 0.37
212 TestMountStart/serial/StartWithMountSecond 27.89
213 TestMountStart/serial/VerifyMountSecond 0.37
214 TestMountStart/serial/DeleteFirst 0.7
215 TestMountStart/serial/VerifyMountPostDelete 0.36
216 TestMountStart/serial/Stop 1.27
217 TestMountStart/serial/RestartStopped 22.91
218 TestMountStart/serial/VerifyMountPostStop 0.36
221 TestMultiNode/serial/FreshStart2Nodes 104.13
222 TestMultiNode/serial/DeployApp2Nodes 5.71
223 TestMultiNode/serial/PingHostFrom2Pods 0.77
224 TestMultiNode/serial/AddNode 47.3
225 TestMultiNode/serial/MultiNodeLabels 0.06
226 TestMultiNode/serial/ProfileList 0.58
227 TestMultiNode/serial/CopyFile 7.1
228 TestMultiNode/serial/StopNode 2.2
229 TestMultiNode/serial/StartAfterStop 39.43
231 TestMultiNode/serial/DeleteNode 2.01
233 TestMultiNode/serial/RestartMultiNode 176.2
234 TestMultiNode/serial/ValidateNameConflict 44.81
241 TestScheduledStopUnix 113.58
245 TestRunningBinaryUpgrade 283.37
250 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
259 TestPause/serial/Start 111.95
260 TestNoKubernetes/serial/StartWithK8s 91.15
261 TestNoKubernetes/serial/StartWithStopK8s 116.66
262 TestPause/serial/SecondStartNoReconfiguration 126.65
263 TestNoKubernetes/serial/Start 33.03
264 TestPause/serial/Pause 0.72
265 TestPause/serial/VerifyStatus 0.25
266 TestPause/serial/Unpause 0.65
267 TestPause/serial/PauseAgain 0.83
268 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
269 TestPause/serial/DeletePaused 1.01
270 TestNoKubernetes/serial/ProfileList 28.52
271 TestPause/serial/VerifyDeletedResources 14.78
272 TestNoKubernetes/serial/Stop 1.29
281 TestNetworkPlugins/group/false 3.3
285 TestStoppedBinaryUpgrade/Setup 2.33
286 TestStoppedBinaryUpgrade/Upgrade 160.05
289 TestStoppedBinaryUpgrade/MinikubeLogs 0.88
291 TestStartStop/group/no-preload/serial/FirstStart 74.65
293 TestStartStop/group/embed-certs/serial/FirstStart 78.36
294 TestStartStop/group/no-preload/serial/DeployApp 11.31
295 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1
297 TestStartStop/group/embed-certs/serial/DeployApp 10.32
299 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 55.61
300 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.94
302 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.27
303 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.95
307 TestStartStop/group/no-preload/serial/SecondStart 717.18
310 TestStartStop/group/embed-certs/serial/SecondStart 601.07
312 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 528.39
313 TestStartStop/group/old-k8s-version/serial/Stop 3.29
314 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
325 TestStartStop/group/newest-cni/serial/FirstStart 47.15
326 TestNetworkPlugins/group/auto/Start 82.24
327 TestNetworkPlugins/group/kindnet/Start 75.6
328 TestStartStop/group/newest-cni/serial/DeployApp 0
329 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.28
330 TestStartStop/group/newest-cni/serial/Stop 10.81
331 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.25
332 TestStartStop/group/newest-cni/serial/SecondStart 53.05
333 TestNetworkPlugins/group/auto/KubeletFlags 0.21
334 TestNetworkPlugins/group/auto/NetCatPod 12.27
335 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
336 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
337 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
338 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
339 TestStartStop/group/newest-cni/serial/Pause 4.59
340 TestNetworkPlugins/group/calico/Start 83.35
341 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
342 TestNetworkPlugins/group/kindnet/NetCatPod 10.32
343 TestNetworkPlugins/group/auto/DNS 0.25
344 TestNetworkPlugins/group/auto/Localhost 0.17
345 TestNetworkPlugins/group/auto/HairPin 0.17
346 TestNetworkPlugins/group/custom-flannel/Start 98.32
347 TestNetworkPlugins/group/kindnet/DNS 0.18
348 TestNetworkPlugins/group/kindnet/Localhost 0.17
349 TestNetworkPlugins/group/kindnet/HairPin 0.2
350 TestNetworkPlugins/group/enable-default-cni/Start 82.58
351 TestNetworkPlugins/group/flannel/Start 123.52
352 TestNetworkPlugins/group/calico/ControllerPod 6.01
353 TestNetworkPlugins/group/calico/KubeletFlags 0.2
354 TestNetworkPlugins/group/calico/NetCatPod 11.24
355 TestNetworkPlugins/group/calico/DNS 0.17
356 TestNetworkPlugins/group/calico/Localhost 0.13
357 TestNetworkPlugins/group/calico/HairPin 0.12
358 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.2
359 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.23
360 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.21
361 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.25
362 TestNetworkPlugins/group/enable-default-cni/DNS 26.1
363 TestNetworkPlugins/group/custom-flannel/DNS 0.2
364 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
365 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
366 TestNetworkPlugins/group/bridge/Start 86.86
367 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
368 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
369 TestNetworkPlugins/group/flannel/ControllerPod 6.01
370 TestNetworkPlugins/group/flannel/KubeletFlags 0.2
371 TestNetworkPlugins/group/flannel/NetCatPod 11.21
372 TestNetworkPlugins/group/flannel/DNS 0.15
373 TestNetworkPlugins/group/flannel/Localhost 0.16
374 TestNetworkPlugins/group/flannel/HairPin 0.14
375 TestNetworkPlugins/group/bridge/KubeletFlags 0.2
376 TestNetworkPlugins/group/bridge/NetCatPod 10.23
377 TestNetworkPlugins/group/bridge/DNS 0.17
378 TestNetworkPlugins/group/bridge/Localhost 0.13
379 TestNetworkPlugins/group/bridge/HairPin 0.12
TestDownloadOnly/v1.20.0/json-events (27.33s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-098425 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-098425 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (27.328527945s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (27.33s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0923 23:38:05.933057   14793 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0923 23:38:05.933154   14793 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-098425
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-098425: exit status 85 (64.60289ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-098425 | jenkins | v1.34.0 | 23 Sep 24 23:37 UTC |          |
	|         | -p download-only-098425        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 23:37:38
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 23:37:38.642940   14805 out.go:345] Setting OutFile to fd 1 ...
	I0923 23:37:38.643049   14805 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 23:37:38.643056   14805 out.go:358] Setting ErrFile to fd 2...
	I0923 23:37:38.643060   14805 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 23:37:38.643219   14805 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7623/.minikube/bin
	W0923 23:37:38.643333   14805 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19696-7623/.minikube/config/config.json: open /home/jenkins/minikube-integration/19696-7623/.minikube/config/config.json: no such file or directory
	I0923 23:37:38.643894   14805 out.go:352] Setting JSON to true
	I0923 23:37:38.644843   14805 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1203,"bootTime":1727133456,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 23:37:38.644944   14805 start.go:139] virtualization: kvm guest
	I0923 23:37:38.647563   14805 out.go:97] [download-only-098425] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0923 23:37:38.647699   14805 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball: no such file or directory
	I0923 23:37:38.647742   14805 notify.go:220] Checking for updates...
	I0923 23:37:38.649352   14805 out.go:169] MINIKUBE_LOCATION=19696
	I0923 23:37:38.651094   14805 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 23:37:38.652985   14805 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0923 23:37:38.654789   14805 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7623/.minikube
	I0923 23:37:38.656496   14805 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0923 23:37:38.659368   14805 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0923 23:37:38.659625   14805 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 23:37:38.769739   14805 out.go:97] Using the kvm2 driver based on user configuration
	I0923 23:37:38.769768   14805 start.go:297] selected driver: kvm2
	I0923 23:37:38.769774   14805 start.go:901] validating driver "kvm2" against <nil>
	I0923 23:37:38.770088   14805 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 23:37:38.770209   14805 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19696-7623/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0923 23:37:38.785880   14805 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0923 23:37:38.785932   14805 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 23:37:38.786641   14805 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0923 23:37:38.786854   14805 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0923 23:37:38.786885   14805 cni.go:84] Creating CNI manager for ""
	I0923 23:37:38.786953   14805 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0923 23:37:38.786968   14805 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 23:37:38.787036   14805 start.go:340] cluster config:
	{Name:download-only-098425 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-098425 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 23:37:38.787246   14805 iso.go:125] acquiring lock: {Name:mk8a983e49e920eac32e0f79c34143c3d478115e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 23:37:38.789554   14805 out.go:97] Downloading VM boot image ...
	I0923 23:37:38.789598   14805 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19696-7623/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0923 23:37:52.154158   14805 out.go:97] Starting "download-only-098425" primary control-plane node in "download-only-098425" cluster
	I0923 23:37:52.154181   14805 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0923 23:37:52.251881   14805 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0923 23:37:52.251918   14805 cache.go:56] Caching tarball of preloaded images
	I0923 23:37:52.252092   14805 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0923 23:37:52.254309   14805 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0923 23:37:52.254338   14805 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0923 23:37:52.358838   14805 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-098425 host does not exist
	  To start a cluster, run: "minikube start -p download-only-098425"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-098425
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (15.36s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-446089 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-446089 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (15.355047564s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (15.36s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0923 23:38:21.618836   14793 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
I0923 23:38:21.618880   14793 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-446089
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-446089: exit status 85 (57.398376ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-098425 | jenkins | v1.34.0 | 23 Sep 24 23:37 UTC |                     |
	|         | -p download-only-098425        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 23 Sep 24 23:38 UTC | 23 Sep 24 23:38 UTC |
	| delete  | -p download-only-098425        | download-only-098425 | jenkins | v1.34.0 | 23 Sep 24 23:38 UTC | 23 Sep 24 23:38 UTC |
	| start   | -o=json --download-only        | download-only-446089 | jenkins | v1.34.0 | 23 Sep 24 23:38 UTC |                     |
	|         | -p download-only-446089        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 23:38:06
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 23:38:06.302409   15087 out.go:345] Setting OutFile to fd 1 ...
	I0923 23:38:06.302678   15087 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 23:38:06.302688   15087 out.go:358] Setting ErrFile to fd 2...
	I0923 23:38:06.302693   15087 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 23:38:06.302875   15087 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7623/.minikube/bin
	I0923 23:38:06.303437   15087 out.go:352] Setting JSON to true
	I0923 23:38:06.304226   15087 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1230,"bootTime":1727133456,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 23:38:06.304325   15087 start.go:139] virtualization: kvm guest
	I0923 23:38:06.306980   15087 out.go:97] [download-only-446089] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 23:38:06.307143   15087 notify.go:220] Checking for updates...
	I0923 23:38:06.308585   15087 out.go:169] MINIKUBE_LOCATION=19696
	I0923 23:38:06.310322   15087 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 23:38:06.311884   15087 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0923 23:38:06.313368   15087 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7623/.minikube
	I0923 23:38:06.314956   15087 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0923 23:38:06.317546   15087 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0923 23:38:06.317757   15087 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 23:38:06.351538   15087 out.go:97] Using the kvm2 driver based on user configuration
	I0923 23:38:06.351573   15087 start.go:297] selected driver: kvm2
	I0923 23:38:06.351580   15087 start.go:901] validating driver "kvm2" against <nil>
	I0923 23:38:06.351936   15087 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 23:38:06.352058   15087 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19696-7623/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0923 23:38:06.367959   15087 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0923 23:38:06.368010   15087 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 23:38:06.368596   15087 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0923 23:38:06.368738   15087 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0923 23:38:06.368764   15087 cni.go:84] Creating CNI manager for ""
	I0923 23:38:06.368810   15087 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0923 23:38:06.368819   15087 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 23:38:06.368871   15087 start.go:340] cluster config:
	{Name:download-only-446089 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-446089 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 23:38:06.368959   15087 iso.go:125] acquiring lock: {Name:mk8a983e49e920eac32e0f79c34143c3d478115e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 23:38:06.370705   15087 out.go:97] Starting "download-only-446089" primary control-plane node in "download-only-446089" cluster
	I0923 23:38:06.370719   15087 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 23:38:06.468775   15087 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0923 23:38:06.468823   15087 cache.go:56] Caching tarball of preloaded images
	I0923 23:38:06.468996   15087 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 23:38:06.471185   15087 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0923 23:38:06.471210   15087 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	I0923 23:38:06.573957   15087 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:aa79045e4550b9510ee496fee0d50abb -> /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0923 23:38:20.146531   15087 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	I0923 23:38:20.146629   15087 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19696-7623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	I0923 23:38:20.885255   15087 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0923 23:38:20.885630   15087 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/download-only-446089/config.json ...
	I0923 23:38:20.885660   15087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/download-only-446089/config.json: {Name:mk754c0db8d8ec5cca2a167a6830d256a92356e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:20.885808   15087 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 23:38:20.885943   15087 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19696-7623/.minikube/cache/linux/amd64/v1.31.1/kubectl
	
	
	* The control-plane node download-only-446089 host does not exist
	  To start a cluster, run: "minikube start -p download-only-446089"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-446089
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.59s)

                                                
                                                
=== RUN   TestBinaryMirror
I0923 23:38:22.183566   14793 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-013301 --alsologtostderr --binary-mirror http://127.0.0.1:39559 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-013301" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-013301
--- PASS: TestBinaryMirror (0.59s)

                                                
                                    
TestOffline (82s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-185840 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-185840 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m20.948573126s)
helpers_test.go:175: Cleaning up "offline-crio-185840" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-185840
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-185840: (1.055567924s)
--- PASS: TestOffline (82.00s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-823099
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-823099: exit status 85 (48.801365ms)

                                                
                                                
-- stdout --
	* Profile "addons-823099" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-823099"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-823099
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-823099: exit status 85 (48.848986ms)

                                                
                                                
-- stdout --
	* Profile "addons-823099" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-823099"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (140.17s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-823099 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-823099 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns: (2m20.164883946s)
--- PASS: TestAddons/Setup (140.17s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-823099 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-823099 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                    
TestAddons/parallel/InspektorGadget (11.55s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-p2vt7" [cc6c6e90-1588-4844-b1b0-025d8136f3c7] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.006701158s
addons_test.go:789: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-823099
addons_test.go:789: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-823099: (6.538412585s)
--- PASS: TestAddons/parallel/InspektorGadget (11.55s)

                                                
                                    
TestAddons/parallel/CSI (50.49s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0923 23:49:16.718076   14793 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0923 23:49:16.725178   14793 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0923 23:49:16.725216   14793 kapi.go:107] duration metric: took 7.157328ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 7.168722ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-823099 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823099 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823099 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823099 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823099 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823099 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823099 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823099 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823099 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823099 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823099 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823099 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823099 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823099 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-823099 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [3ac506c7-d374-4eb8-b47d-05be259d9310] Pending
helpers_test.go:344: "task-pv-pod" [3ac506c7-d374-4eb8-b47d-05be259d9310] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [3ac506c7-d374-4eb8-b47d-05be259d9310] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.00382061s
addons_test.go:528: (dbg) Run:  kubectl --context addons-823099 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-823099 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-823099 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-823099 delete pod task-pv-pod
addons_test.go:544: (dbg) Run:  kubectl --context addons-823099 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-823099 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823099 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823099 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823099 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823099 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823099 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823099 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-823099 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [e22fe967-2034-4b85-9850-7c0a8c941990] Pending
helpers_test.go:344: "task-pv-pod-restore" [e22fe967-2034-4b85-9850-7c0a8c941990] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [e22fe967-2034-4b85-9850-7c0a8c941990] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.003866995s
addons_test.go:570: (dbg) Run:  kubectl --context addons-823099 delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context addons-823099 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-823099 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-amd64 -p addons-823099 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-amd64 -p addons-823099 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.960291918s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-amd64 -p addons-823099 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (50.49s)

                                                
                                    
TestAddons/parallel/Headlamp (19.62s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-823099 --alsologtostderr -v=1
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-m9cqq" [36dd653c-f9ca-4e6a-8375-738200f513b7] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-m9cqq" [36dd653c-f9ca-4e6a-8375-738200f513b7] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-m9cqq" [36dd653c-f9ca-4e6a-8375-738200f513b7] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.003430389s
addons_test.go:777: (dbg) Run:  out/minikube-linux-amd64 -p addons-823099 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-amd64 -p addons-823099 addons disable headlamp --alsologtostderr -v=1: (5.703744419s)
--- PASS: TestAddons/parallel/Headlamp (19.62s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.58s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-gtr2z" [77f56e80-4bd0-46bd-a36c-663eccd9d000] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.007180066s
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-823099
--- PASS: TestAddons/parallel/CloudSpanner (5.58s)

                                                
                                    
TestAddons/parallel/LocalPath (55.23s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-823099 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-823099 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823099 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823099 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823099 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823099 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823099 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823099 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823099 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [7a9b9645-25c2-4e5f-a219-e0b27f57ae41] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [7a9b9645-25c2-4e5f-a219-e0b27f57ae41] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [7a9b9645-25c2-4e5f-a219-e0b27f57ae41] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.00482829s
addons_test.go:938: (dbg) Run:  kubectl --context addons-823099 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-linux-amd64 -p addons-823099 ssh "cat /opt/local-path-provisioner/pvc-eab7f679-3b16-4b54-94e5-e626a1dcbb7e_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-823099 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-823099 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-linux-amd64 -p addons-823099 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:967: (dbg) Done: out/minikube-linux-amd64 -p addons-823099 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.441024656s)
--- PASS: TestAddons/parallel/LocalPath (55.23s)
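For reference, the local-path flow exercised above can be reproduced by hand. This is a minimal sketch, assuming the storage-provisioner-rancher addon is enabled and using the same testdata manifests as the test; the pvc-<uid> directory name under /opt/local-path-provisioner differs per run and is discovered with the ls step:
    # create the PVC and the pod that writes file1 onto the node-local volume
    kubectl --context addons-823099 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-823099 apply -f testdata/storage-provisioner-rancher/pod.yaml
    # once the pod has completed, find the provisioned directory and read the file
    # straight from the node's hostPath (directory name is run-specific)
    minikube -p addons-823099 ssh "ls /opt/local-path-provisioner/"
    minikube -p addons-823099 ssh "cat /opt/local-path-provisioner/<pvc-dir>_default_test-pvc/file1"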

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.59s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-2dqft" [c5e363a8-697b-4396-acf2-c41232b01445] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.006301676s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-823099
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.59s)

                                                
                                    
TestAddons/parallel/Yakd (10.75s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-d7ls8" [3b6ecaca-f192-4b49-80fe-9e9d33a3434a] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.005932849s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-amd64 -p addons-823099 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-amd64 -p addons-823099 addons disable yakd --alsologtostderr -v=1: (5.738939113s)
--- PASS: TestAddons/parallel/Yakd (10.75s)

                                                
                                    
TestAddons/StoppedEnableDisable (7.55s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-823099
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-823099: (7.285408305s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-823099
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-823099
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-823099
--- PASS: TestAddons/StoppedEnableDisable (7.55s)

                                                
                                    
TestCertOptions (70.16s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-393867 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E0924 00:53:38.362548   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/functional-666615/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-393867 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m8.719658124s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-393867 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-393867 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-393867 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-393867" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-393867
--- PASS: TestCertOptions (70.16s)
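A hedged sketch of what this test exercises: start a profile with extra API-server SANs and a custom port, then confirm they land in the generated certificate. The profile name below is illustrative; the flag values mirror the test invocation:
    # start with additional apiserver IPs/names and a non-default port
    minikube start -p cert-options-demo --apiserver-ips=192.168.15.15 --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 --container-runtime=crio
    # the extra SANs should appear in the apiserver certificate inside the VM
    minikube -p cert-options-demo ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
    # the kubeconfig entry should point at the custom port
    kubectl --context cert-options-demo config view --minify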

                                                
                                    
TestCertExpiration (255.7s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-811247 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-811247 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (46.077711191s)
E0924 00:53:21.433960   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/functional-666615/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-811247 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-811247 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (28.586387262s)
helpers_test.go:175: Cleaning up "cert-expiration-811247" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-811247
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-811247: (1.037438384s)
--- PASS: TestCertExpiration (255.70s)
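A minimal sketch of the same flow, assuming a throwaway profile name: create a cluster with short-lived certs, wait out the 3-minute window (as the test does between its two starts), then start again with a longer --cert-expiration so minikube regenerates the expired certificates:
    # certs issued with a 3-minute lifetime
    minikube start -p cert-exp-demo --cert-expiration=3m --driver=kvm2 --container-runtime=crio
    sleep 180   # let the certificates expire
    # restarting with a longer expiration regenerates the expired certs
    minikube start -p cert-exp-demo --cert-expiration=8760h --driver=kvm2 --container-runtime=crio
    minikube delete -p cert-exp-demo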

                                                
                                    
TestForceSystemdFlag (70.07s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-912275 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-912275 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m8.693835267s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-912275 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-912275" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-912275
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-912275: (1.172801721s)
--- PASS: TestForceSystemdFlag (70.07s)

                                                
                                    
TestForceSystemdEnv (43.2s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-762606 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-762606 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (42.346151386s)
helpers_test.go:175: Cleaning up "force-systemd-env-762606" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-762606
--- PASS: TestForceSystemdEnv (43.20s)

                                                
                                    
TestKVMDriverInstallOrUpdate (4.84s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0924 00:52:10.535093   14793 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0924 00:52:10.535235   14793 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0924 00:52:10.564602   14793 install.go:62] docker-machine-driver-kvm2: exit status 1
W0924 00:52:10.564851   14793 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0924 00:52:10.564890   14793 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3613358839/001/docker-machine-driver-kvm2
I0924 00:52:10.822868   14793 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3613358839/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640] Decompressors:map[bz2:0xc000015c30 gz:0xc000015c38 tar:0xc000015b00 tar.bz2:0xc000015bd0 tar.gz:0xc000015c00 tar.xz:0xc000015c10 tar.zst:0xc000015c20 tbz2:0xc000015bd0 tgz:0xc000015c00 txz:0xc000015c10 tzst:0xc000015c20 xz:0xc000015c40 zip:0xc000015c50 zst:0xc000015c48] Getters:map[file:0xc001af7ad0 http:0xc000814ff0 https:0xc000815040] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0924 00:52:10.822918   14793 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3613358839/001/docker-machine-driver-kvm2
I0924 00:52:13.343269   14793 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0924 00:52:13.343388   14793 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0924 00:52:13.376093   14793 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0924 00:52:13.376130   14793 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0924 00:52:13.376221   14793 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0924 00:52:13.376271   14793 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3613358839/002/docker-machine-driver-kvm2
I0924 00:52:13.440162   14793 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3613358839/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640] Decompressors:map[bz2:0xc000015c30 gz:0xc000015c38 tar:0xc000015b00 tar.bz2:0xc000015bd0 tar.gz:0xc000015c00 tar.xz:0xc000015c10 tar.zst:0xc000015c20 tbz2:0xc000015bd0 tgz:0xc000015c00 txz:0xc000015c10 tzst:0xc000015c20 xz:0xc000015c40 zip:0xc000015c50 zst:0xc000015c48] Getters:map[file:0xc00089c840 http:0xc000142ff0 https:0xc000143450] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0924 00:52:13.440162   14793 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3613358839/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (4.84s)

                                                
                                    
TestErrorSpam/setup (37.98s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-718374 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-718374 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-718374 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-718374 --driver=kvm2  --container-runtime=crio: (37.976950741s)
--- PASS: TestErrorSpam/setup (37.98s)

                                                
                                    
TestErrorSpam/start (0.34s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-718374 --log_dir /tmp/nospam-718374 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-718374 --log_dir /tmp/nospam-718374 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-718374 --log_dir /tmp/nospam-718374 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

                                                
                                    
TestErrorSpam/status (0.72s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-718374 --log_dir /tmp/nospam-718374 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-718374 --log_dir /tmp/nospam-718374 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-718374 --log_dir /tmp/nospam-718374 status
--- PASS: TestErrorSpam/status (0.72s)

                                                
                                    
TestErrorSpam/pause (1.56s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-718374 --log_dir /tmp/nospam-718374 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-718374 --log_dir /tmp/nospam-718374 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-718374 --log_dir /tmp/nospam-718374 pause
--- PASS: TestErrorSpam/pause (1.56s)

                                                
                                    
TestErrorSpam/unpause (1.78s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-718374 --log_dir /tmp/nospam-718374 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-718374 --log_dir /tmp/nospam-718374 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-718374 --log_dir /tmp/nospam-718374 unpause
--- PASS: TestErrorSpam/unpause (1.78s)

                                                
                                    
TestErrorSpam/stop (6.16s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-718374 --log_dir /tmp/nospam-718374 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-718374 --log_dir /tmp/nospam-718374 stop: (2.28578267s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-718374 --log_dir /tmp/nospam-718374 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-718374 --log_dir /tmp/nospam-718374 stop: (1.877657281s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-718374 --log_dir /tmp/nospam-718374 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-718374 --log_dir /tmp/nospam-718374 stop: (1.999760508s)
--- PASS: TestErrorSpam/stop (6.16s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19696-7623/.minikube/files/etc/test/nested/copy/14793/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (83.78s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-666615 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0923 23:55:43.336421   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.crt: no such file or directory" logger="UnhandledError"
E0923 23:55:43.342857   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.crt: no such file or directory" logger="UnhandledError"
E0923 23:55:43.354256   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.crt: no such file or directory" logger="UnhandledError"
E0923 23:55:43.375702   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.crt: no such file or directory" logger="UnhandledError"
E0923 23:55:43.417182   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.crt: no such file or directory" logger="UnhandledError"
E0923 23:55:43.498718   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.crt: no such file or directory" logger="UnhandledError"
E0923 23:55:43.660279   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.crt: no such file or directory" logger="UnhandledError"
E0923 23:55:43.981980   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.crt: no such file or directory" logger="UnhandledError"
E0923 23:55:44.624121   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.crt: no such file or directory" logger="UnhandledError"
E0923 23:55:45.905795   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.crt: no such file or directory" logger="UnhandledError"
E0923 23:55:48.468702   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.crt: no such file or directory" logger="UnhandledError"
E0923 23:55:53.590653   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.crt: no such file or directory" logger="UnhandledError"
E0923 23:56:03.832260   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.crt: no such file or directory" logger="UnhandledError"
E0923 23:56:24.314230   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-666615 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m23.783232119s)
--- PASS: TestFunctional/serial/StartWithProxy (83.78s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (54.15s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0923 23:56:56.571007   14793 config.go:182] Loaded profile config "functional-666615": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-666615 --alsologtostderr -v=8
E0923 23:57:05.276439   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-666615 --alsologtostderr -v=8: (54.149192477s)
functional_test.go:663: soft start took 54.149897216s for "functional-666615" cluster.
I0923 23:57:50.720507   14793 config.go:182] Loaded profile config "functional-666615": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (54.15s)
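In sketch form, a "soft start" is simply running start again against a profile whose VM already exists; minikube reuses the machine and configuration instead of recreating them:
    minikube start -p functional-666615 --alsologtostderr -v=8   # reuses the existing VM rather than provisioning a new one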

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-666615 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.34s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-666615 cache add registry.k8s.io/pause:3.1: (1.435071541s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-666615 cache add registry.k8s.io/pause:3.3: (1.454038849s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-666615 cache add registry.k8s.io/pause:latest: (1.448919036s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.34s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-666615 /tmp/TestFunctionalserialCacheCmdcacheadd_local3621903573/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 cache add minikube-local-cache-test:functional-666615
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-666615 cache add minikube-local-cache-test:functional-666615: (1.77233553s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 cache delete minikube-local-cache-test:functional-666615
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-666615
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.10s)
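The local-image variant above boils down to the following sketch; the image tag and Dockerfile context are illustrative, not part of the test:
    # build a throwaway image on the host and push it into minikube's image cache
    docker build -t local-cache-demo:latest .
    minikube -p functional-666615 cache add local-cache-demo:latest
    # drop it again from the cache and from the host daemon
    minikube -p functional-666615 cache delete local-cache-demo:latest
    docker rmi local-cache-demo:latest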

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.82s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-666615 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (219.000869ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-666615 cache reload: (1.137096497s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.82s)
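The reload round-trip shown above, in generic form (same commands the test runs, just via the installed minikube binary): remove a cached image inside the node, confirm it is gone, then let cache reload push everything in the cache back in:
    minikube -p functional-666615 ssh sudo crictl rmi registry.k8s.io/pause:latest
    minikube -p functional-666615 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits non-zero: image is gone
    minikube -p functional-666615 cache reload
    minikube -p functional-666615 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again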

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 kubectl -- --context functional-666615 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-666615 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (31.86s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-666615 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0923 23:58:27.201529   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-666615 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (31.855549797s)
functional_test.go:761: restart took 31.855679422s for "functional-666615" cluster.
I0923 23:58:31.549627   14793 config.go:182] Loaded profile config "functional-666615": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (31.86s)
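The restart above amounts to passing a component flag through --extra-config. A minimal sketch; the admission-plugin value is taken from the test, and the verification step assumes kubeadm's standard component=kube-apiserver pod label:
    # restart the existing profile with an extra kube-apiserver flag (component.key=value form)
    minikube start -p functional-666615 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
    # confirm the apiserver static pod picked the flag up
    kubectl --context functional-666615 -n kube-system get pod -l component=kube-apiserver -o yaml | grep enable-admission-plugins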

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-666615 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.3s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-666615 logs: (1.302208382s)
--- PASS: TestFunctional/serial/LogsCmd (1.30s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.37s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 logs --file /tmp/TestFunctionalserialLogsFileCmd2577304698/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-666615 logs --file /tmp/TestFunctionalserialLogsFileCmd2577304698/001/logs.txt: (1.372746807s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.37s)

                                                
                                    
TestFunctional/serial/InvalidService (3.88s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-666615 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-666615
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-666615: exit status 115 (273.310341ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.162:31142 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-666615 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.88s)
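What the failure path looks like, sketched with the same testdata manifest: a Service whose selector matches no running pod makes `minikube service` exit with SVC_UNREACHABLE instead of opening a URL:
    kubectl --context functional-666615 apply -f testdata/invalidsvc.yaml
    minikube -p functional-666615 service invalid-svc   # exit status 115: no running pod for service invalid-svc
    kubectl --context functional-666615 delete -f testdata/invalidsvc.yaml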

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-666615 config get cpus: exit status 14 (58.69314ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-666615 config get cpus: exit status 14 (44.564951ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.31s)
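The config round-trip above in generic form; `config get` on a key that was never set (or was just unset) exits with status 14:
    minikube -p functional-666615 config set cpus 2
    minikube -p functional-666615 config get cpus     # prints 2
    minikube -p functional-666615 config unset cpus
    minikube -p functional-666615 config get cpus     # exit status 14: key not found in config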

                                                
                                    
TestFunctional/parallel/DashboardCmd (14.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-666615 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-666615 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 24146: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.97s)

                                                
                                    
TestFunctional/parallel/DryRun (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-666615 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-666615 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (158.723728ms)

                                                
                                                
-- stdout --
	* [functional-666615] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19696
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19696-7623/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7623/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 23:58:39.789450   23947 out.go:345] Setting OutFile to fd 1 ...
	I0923 23:58:39.789594   23947 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 23:58:39.789605   23947 out.go:358] Setting ErrFile to fd 2...
	I0923 23:58:39.789612   23947 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 23:58:39.789913   23947 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7623/.minikube/bin
	I0923 23:58:39.790622   23947 out.go:352] Setting JSON to false
	I0923 23:58:39.792008   23947 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2464,"bootTime":1727133456,"procs":227,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 23:58:39.792165   23947 start.go:139] virtualization: kvm guest
	I0923 23:58:39.794470   23947 out.go:177] * [functional-666615] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 23:58:39.795872   23947 notify.go:220] Checking for updates...
	I0923 23:58:39.795921   23947 out.go:177]   - MINIKUBE_LOCATION=19696
	I0923 23:58:39.797401   23947 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 23:58:39.798913   23947 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0923 23:58:39.800377   23947 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7623/.minikube
	I0923 23:58:39.801640   23947 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 23:58:39.803125   23947 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 23:58:39.805054   23947 config.go:182] Loaded profile config "functional-666615": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 23:58:39.805714   23947 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:58:39.805770   23947 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:58:39.823191   23947 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39413
	I0923 23:58:39.823700   23947 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:58:39.824295   23947 main.go:141] libmachine: Using API Version  1
	I0923 23:58:39.824320   23947 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:58:39.824727   23947 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:58:39.824919   23947 main.go:141] libmachine: (functional-666615) Calling .DriverName
	I0923 23:58:39.825153   23947 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 23:58:39.825580   23947 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:58:39.825625   23947 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:58:39.848760   23947 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42963
	I0923 23:58:39.849286   23947 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:58:39.849925   23947 main.go:141] libmachine: Using API Version  1
	I0923 23:58:39.849953   23947 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:58:39.850319   23947 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:58:39.850529   23947 main.go:141] libmachine: (functional-666615) Calling .DriverName
	I0923 23:58:39.890940   23947 out.go:177] * Using the kvm2 driver based on existing profile
	I0923 23:58:39.892136   23947 start.go:297] selected driver: kvm2
	I0923 23:58:39.892154   23947 start.go:901] validating driver "kvm2" against &{Name:functional-666615 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-666615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 23:58:39.892278   23947 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 23:58:39.894663   23947 out.go:201] 
	W0923 23:58:39.896227   23947 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0923 23:58:39.897558   23947 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-666615 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.31s)
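A dry-run start validates the requested resources without touching the VM; sketched below with the same undersized memory request, which trips the RSRC_INSUFFICIENT_REQ_MEMORY check:
    minikube start -p functional-666615 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio
    echo $?   # 23: requested 250MiB is below the 1800MB usable minimum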

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-666615 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-666615 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (157.960258ms)

                                                
                                                
-- stdout --
	* [functional-666615] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19696
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19696-7623/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7623/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 23:58:39.638387   23898 out.go:345] Setting OutFile to fd 1 ...
	I0923 23:58:39.639001   23898 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 23:58:39.639031   23898 out.go:358] Setting ErrFile to fd 2...
	I0923 23:58:39.639043   23898 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 23:58:39.639308   23898 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7623/.minikube/bin
	I0923 23:58:39.639823   23898 out.go:352] Setting JSON to false
	I0923 23:58:39.640810   23898 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2464,"bootTime":1727133456,"procs":226,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 23:58:39.640894   23898 start.go:139] virtualization: kvm guest
	I0923 23:58:39.642549   23898 out.go:177] * [functional-666615] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0923 23:58:39.644538   23898 out.go:177]   - MINIKUBE_LOCATION=19696
	I0923 23:58:39.644554   23898 notify.go:220] Checking for updates...
	I0923 23:58:39.646764   23898 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 23:58:39.648612   23898 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0923 23:58:39.650431   23898 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7623/.minikube
	I0923 23:58:39.651999   23898 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 23:58:39.653524   23898 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 23:58:39.655532   23898 config.go:182] Loaded profile config "functional-666615": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 23:58:39.656579   23898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:58:39.656798   23898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:58:39.675476   23898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41099
	I0923 23:58:39.675979   23898 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:58:39.676624   23898 main.go:141] libmachine: Using API Version  1
	I0923 23:58:39.676644   23898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:58:39.677005   23898 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:58:39.677209   23898 main.go:141] libmachine: (functional-666615) Calling .DriverName
	I0923 23:58:39.677523   23898 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 23:58:39.677864   23898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0923 23:58:39.677898   23898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 23:58:39.694763   23898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35375
	I0923 23:58:39.695265   23898 main.go:141] libmachine: () Calling .GetVersion
	I0923 23:58:39.695813   23898 main.go:141] libmachine: Using API Version  1
	I0923 23:58:39.695840   23898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 23:58:39.696158   23898 main.go:141] libmachine: () Calling .GetMachineName
	I0923 23:58:39.696434   23898 main.go:141] libmachine: (functional-666615) Calling .DriverName
	I0923 23:58:39.732980   23898 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0923 23:58:39.734157   23898 start.go:297] selected driver: kvm2
	I0923 23:58:39.734175   23898 start.go:901] validating driver "kvm2" against &{Name:functional-666615 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-666615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 23:58:39.734317   23898 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 23:58:39.736572   23898 out.go:201] 
	W0923 23:58:39.737732   23898 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0923 23:58:39.738895   23898 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.28s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (9.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-666615 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-666615 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-cplw6" [37844b96-88be-41fb-9bca-91d02bf30469] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-cplw6" [37844b96-88be-41fb-9bca-91d02bf30469] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.004136072s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.162:31266
functional_test.go:1675: http://192.168.39.162:31266: success! body:

Hostname: hello-node-connect-67bdd5bbb4-cplw6

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.162:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.162:31266
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.74s)

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (41.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [36624dc2-ce0b-418e-b2fa-a5aafeb8f75b] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004609121s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-666615 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-666615 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-666615 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-666615 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [0b84bae9-3dda-4066-8e96-a34ef3337183] Pending
helpers_test.go:344: "sp-pod" [0b84bae9-3dda-4066-8e96-a34ef3337183] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [0b84bae9-3dda-4066-8e96-a34ef3337183] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 23.004669952s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-666615 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-666615 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-666615 delete -f testdata/storage-provisioner/pod.yaml: (1.700026287s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-666615 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d76a8860-fe50-4acb-a15f-dda570abfe45] Pending
helpers_test.go:344: "sp-pod" [d76a8860-fe50-4acb-a15f-dda570abfe45] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d76a8860-fe50-4acb-a15f-dda570abfe45] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.004836384s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-666615 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (41.57s)

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 ssh -n functional-666615 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 cp functional-666615:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3469359164/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 ssh -n functional-666615 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 ssh -n functional-666615 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.31s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (30.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-666615 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-smxlw" [fa539422-2ff1-4715-a7b0-04559cf389ab] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-smxlw" [fa539422-2ff1-4715-a7b0-04559cf389ab] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 27.004663776s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-666615 exec mysql-6cdb49bbb-smxlw -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-666615 exec mysql-6cdb49bbb-smxlw -- mysql -ppassword -e "show databases;": exit status 1 (161.498978ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0923 23:59:22.399497   14793 retry.go:31] will retry after 1.087524006s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-666615 exec mysql-6cdb49bbb-smxlw -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-666615 exec mysql-6cdb49bbb-smxlw -- mysql -ppassword -e "show databases;": exit status 1 (138.72636ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0923 23:59:23.626680   14793 retry.go:31] will retry after 1.779114945s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-666615 exec mysql-6cdb49bbb-smxlw -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (30.52s)

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/14793/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 ssh "sudo cat /etc/test/nested/copy/14793/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/14793.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 ssh "sudo cat /etc/ssl/certs/14793.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/14793.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 ssh "sudo cat /usr/share/ca-certificates/14793.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/147932.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 ssh "sudo cat /etc/ssl/certs/147932.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/147932.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 ssh "sudo cat /usr/share/ca-certificates/147932.pem"
2024/09/23 23:58:54 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.16s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-666615 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-666615 ssh "sudo systemctl is-active docker": exit status 1 (235.51834ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-666615 ssh "sudo systemctl is-active containerd": exit status 1 (244.480771ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.48s)

                                                
                                    
x
+
TestFunctional/parallel/License (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.61s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (10.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-666615 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-666615 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-zsmff" [e22fe749-15ec-4ed9-a8d9-c000f5047e96] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-zsmff" [e22fe749-15ec-4ed9-a8d9-c000f5047e96] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.008016186s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.19s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (11.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-666615 /tmp/TestFunctionalparallelMountCmdany-port3197911058/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1727135918781927747" to /tmp/TestFunctionalparallelMountCmdany-port3197911058/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1727135918781927747" to /tmp/TestFunctionalparallelMountCmdany-port3197911058/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1727135918781927747" to /tmp/TestFunctionalparallelMountCmdany-port3197911058/001/test-1727135918781927747
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-666615 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (237.192588ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0923 23:58:39.019442   14793 retry.go:31] will retry after 260.655477ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 23 23:58 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 23 23:58 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 23 23:58 test-1727135918781927747
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 ssh cat /mount-9p/test-1727135918781927747
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-666615 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [a4876b87-a9cb-42b4-b360-1e314d59db81] Pending
helpers_test.go:344: "busybox-mount" [a4876b87-a9cb-42b4-b360-1e314d59db81] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [a4876b87-a9cb-42b4-b360-1e314d59db81] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [a4876b87-a9cb-42b4-b360-1e314d59db81] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 9.004491125s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-666615 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-666615 /tmp/TestFunctionalparallelMountCmdany-port3197911058/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (11.61s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "289.419281ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "41.768032ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "310.082379ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "64.003306ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 service list -o json
functional_test.go:1494: Took "447.745185ms" to run "out/minikube-linux-amd64 -p functional-666615 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.162:32666
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.62s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.162:32666
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-666615 /tmp/TestFunctionalparallelMountCmdspecific-port3167624347/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-666615 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (255.94915ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0923 23:58:50.647517   14793 retry.go:31] will retry after 360.053065ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-666615 /tmp/TestFunctionalparallelMountCmdspecific-port3167624347/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-666615 ssh "sudo umount -f /mount-9p": exit status 1 (240.305631ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-666615 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-666615 /tmp/TestFunctionalparallelMountCmdspecific-port3167624347/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.79s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-666615 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2150060130/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-666615 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2150060130/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-666615 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2150060130/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-666615 ssh "findmnt -T" /mount1: exit status 1 (286.501593ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0923 23:58:52.475375   14793 retry.go:31] will retry after 699.252192ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-666615 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-666615 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2150060130/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-666615 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2150060130/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-666615 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2150060130/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.63s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.66s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-666615 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-666615
localhost/kicbase/echo-server:functional-666615
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-666615 image ls --format short --alsologtostderr:
I0923 23:59:15.306754   25844 out.go:345] Setting OutFile to fd 1 ...
I0923 23:59:15.307059   25844 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 23:59:15.307070   25844 out.go:358] Setting ErrFile to fd 2...
I0923 23:59:15.307076   25844 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 23:59:15.307356   25844 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7623/.minikube/bin
I0923 23:59:15.308080   25844 config.go:182] Loaded profile config "functional-666615": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0923 23:59:15.308201   25844 config.go:182] Loaded profile config "functional-666615": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0923 23:59:15.308714   25844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0923 23:59:15.308771   25844 main.go:141] libmachine: Launching plugin server for driver kvm2
I0923 23:59:15.323984   25844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33683
I0923 23:59:15.324600   25844 main.go:141] libmachine: () Calling .GetVersion
I0923 23:59:15.325204   25844 main.go:141] libmachine: Using API Version  1
I0923 23:59:15.325239   25844 main.go:141] libmachine: () Calling .SetConfigRaw
I0923 23:59:15.325659   25844 main.go:141] libmachine: () Calling .GetMachineName
I0923 23:59:15.325849   25844 main.go:141] libmachine: (functional-666615) Calling .GetState
I0923 23:59:15.328257   25844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0923 23:59:15.328310   25844 main.go:141] libmachine: Launching plugin server for driver kvm2
I0923 23:59:15.345113   25844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33347
I0923 23:59:15.345795   25844 main.go:141] libmachine: () Calling .GetVersion
I0923 23:59:15.346371   25844 main.go:141] libmachine: Using API Version  1
I0923 23:59:15.346399   25844 main.go:141] libmachine: () Calling .SetConfigRaw
I0923 23:59:15.346741   25844 main.go:141] libmachine: () Calling .GetMachineName
I0923 23:59:15.347065   25844 main.go:141] libmachine: (functional-666615) Calling .DriverName
I0923 23:59:15.347234   25844 ssh_runner.go:195] Run: systemctl --version
I0923 23:59:15.347254   25844 main.go:141] libmachine: (functional-666615) Calling .GetSSHHostname
I0923 23:59:15.350456   25844 main.go:141] libmachine: (functional-666615) DBG | domain functional-666615 has defined MAC address 52:54:00:6f:69:ec in network mk-functional-666615
I0923 23:59:15.350864   25844 main.go:141] libmachine: (functional-666615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:69:ec", ip: ""} in network mk-functional-666615: {Iface:virbr1 ExpiryTime:2024-09-24 00:55:46 +0000 UTC Type:0 Mac:52:54:00:6f:69:ec Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:functional-666615 Clientid:01:52:54:00:6f:69:ec}
I0923 23:59:15.350888   25844 main.go:141] libmachine: (functional-666615) DBG | domain functional-666615 has defined IP address 192.168.39.162 and MAC address 52:54:00:6f:69:ec in network mk-functional-666615
I0923 23:59:15.351092   25844 main.go:141] libmachine: (functional-666615) Calling .GetSSHPort
I0923 23:59:15.351257   25844 main.go:141] libmachine: (functional-666615) Calling .GetSSHKeyPath
I0923 23:59:15.351394   25844 main.go:141] libmachine: (functional-666615) Calling .GetSSHUsername
I0923 23:59:15.351530   25844 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/functional-666615/id_rsa Username:docker}
I0923 23:59:15.434983   25844 ssh_runner.go:195] Run: sudo crictl images --output json
I0923 23:59:15.507370   25844 main.go:141] libmachine: Making call to close driver server
I0923 23:59:15.507383   25844 main.go:141] libmachine: (functional-666615) Calling .Close
I0923 23:59:15.507667   25844 main.go:141] libmachine: Successfully made call to close driver server
I0923 23:59:15.507683   25844 main.go:141] libmachine: Making call to close connection to plugin binary
I0923 23:59:15.507686   25844 main.go:141] libmachine: (functional-666615) DBG | Closing plugin on server side
I0923 23:59:15.507691   25844 main.go:141] libmachine: Making call to close driver server
I0923 23:59:15.507700   25844 main.go:141] libmachine: (functional-666615) Calling .Close
I0923 23:59:15.507966   25844 main.go:141] libmachine: Successfully made call to close driver server
I0923 23:59:15.507982   25844 main.go:141] libmachine: Making call to close connection to plugin binary
I0923 23:59:15.508023   25844 main.go:141] libmachine: (functional-666615) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-666615 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                 | latest             | 39286ab8a5e14 | 192MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-controller-manager | v1.31.1            | 175ffd71cce3d | 89.4MB |
| registry.k8s.io/kube-proxy              | v1.31.1            | 60c005f310ff3 | 92.7MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 12968670680f4 | 87.2MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/kicbase/echo-server           | functional-666615  | 9056ab77afb8e | 4.94MB |
| localhost/minikube-local-cache-test     | functional-666615  | 0e79f57d0677e | 3.33kB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 9aa1fad941575 | 68.4MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.31.1            | 6bab7719df100 | 95.2MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-666615 image ls --format table --alsologtostderr:
I0923 23:59:15.830968   25978 out.go:345] Setting OutFile to fd 1 ...
I0923 23:59:15.831183   25978 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 23:59:15.831191   25978 out.go:358] Setting ErrFile to fd 2...
I0923 23:59:15.831195   25978 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 23:59:15.831375   25978 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7623/.minikube/bin
I0923 23:59:15.831956   25978 config.go:182] Loaded profile config "functional-666615": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0923 23:59:15.832049   25978 config.go:182] Loaded profile config "functional-666615": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0923 23:59:15.832592   25978 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0923 23:59:15.832633   25978 main.go:141] libmachine: Launching plugin server for driver kvm2
I0923 23:59:15.847720   25978 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42949
I0923 23:59:15.848261   25978 main.go:141] libmachine: () Calling .GetVersion
I0923 23:59:15.848957   25978 main.go:141] libmachine: Using API Version  1
I0923 23:59:15.848992   25978 main.go:141] libmachine: () Calling .SetConfigRaw
I0923 23:59:15.849458   25978 main.go:141] libmachine: () Calling .GetMachineName
I0923 23:59:15.849705   25978 main.go:141] libmachine: (functional-666615) Calling .GetState
I0923 23:59:15.851894   25978 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0923 23:59:15.851952   25978 main.go:141] libmachine: Launching plugin server for driver kvm2
I0923 23:59:15.867271   25978 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46505
I0923 23:59:15.867862   25978 main.go:141] libmachine: () Calling .GetVersion
I0923 23:59:15.868686   25978 main.go:141] libmachine: Using API Version  1
I0923 23:59:15.868717   25978 main.go:141] libmachine: () Calling .SetConfigRaw
I0923 23:59:15.869033   25978 main.go:141] libmachine: () Calling .GetMachineName
I0923 23:59:15.869261   25978 main.go:141] libmachine: (functional-666615) Calling .DriverName
I0923 23:59:15.869473   25978 ssh_runner.go:195] Run: systemctl --version
I0923 23:59:15.869494   25978 main.go:141] libmachine: (functional-666615) Calling .GetSSHHostname
I0923 23:59:15.872401   25978 main.go:141] libmachine: (functional-666615) DBG | domain functional-666615 has defined MAC address 52:54:00:6f:69:ec in network mk-functional-666615
I0923 23:59:15.872760   25978 main.go:141] libmachine: (functional-666615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:69:ec", ip: ""} in network mk-functional-666615: {Iface:virbr1 ExpiryTime:2024-09-24 00:55:46 +0000 UTC Type:0 Mac:52:54:00:6f:69:ec Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:functional-666615 Clientid:01:52:54:00:6f:69:ec}
I0923 23:59:15.872798   25978 main.go:141] libmachine: (functional-666615) DBG | domain functional-666615 has defined IP address 192.168.39.162 and MAC address 52:54:00:6f:69:ec in network mk-functional-666615
I0923 23:59:15.872911   25978 main.go:141] libmachine: (functional-666615) Calling .GetSSHPort
I0923 23:59:15.873114   25978 main.go:141] libmachine: (functional-666615) Calling .GetSSHKeyPath
I0923 23:59:15.873265   25978 main.go:141] libmachine: (functional-666615) Calling .GetSSHUsername
I0923 23:59:15.873421   25978 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/functional-666615/id_rsa Username:docker}
I0923 23:59:15.983690   25978 ssh_runner.go:195] Run: sudo crictl images --output json
I0923 23:59:16.076368   25978 main.go:141] libmachine: Making call to close driver server
I0923 23:59:16.076388   25978 main.go:141] libmachine: (functional-666615) Calling .Close
I0923 23:59:16.076705   25978 main.go:141] libmachine: (functional-666615) DBG | Closing plugin on server side
I0923 23:59:16.076709   25978 main.go:141] libmachine: Successfully made call to close driver server
I0923 23:59:16.076736   25978 main.go:141] libmachine: Making call to close connection to plugin binary
I0923 23:59:16.076749   25978 main.go:141] libmachine: Making call to close driver server
I0923 23:59:16.076756   25978 main.go:141] libmachine: (functional-666615) Calling .Close
I0923 23:59:16.076945   25978 main.go:141] libmachine: (functional-666615) DBG | Closing plugin on server side
I0923 23:59:16.076968   25978 main.go:141] libmachine: Successfully made call to close driver server
I0923 23:59:16.076981   25978 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.49s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-666615 image ls --format json --alsologtostderr:
[{"id":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","repoDigests":["docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"87190579"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-666615"],"size":"4943877"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0","registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"68420934"
},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3","repoDigests":["docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3","docker.io/library/nginx@sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e"],"repoTags":["docker.io/library/nginx:latest"],"size":"191853369"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDi
gests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"0e79f57d0677e1cb2cadb93817c0eeca3b0fc691a2118fe3ced806298325d438","repoDigests":["localhost/minikube-local-cache-test@sha256:3d26462db8d703c3efc57a9e74b7835adb7168afe8eb1e587d5b02a689108ee1"],"repoTags":["localhost/minikube-local-cache-test:functional-666615"],"size":"3328"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"92733849"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba08
0558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"89437508"},{"id":"350b
164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","
repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":["registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771","registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"95237600"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registr
y.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-666615 image ls --format json --alsologtostderr:
I0923 23:59:15.593432   25901 out.go:345] Setting OutFile to fd 1 ...
I0923 23:59:15.593524   25901 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 23:59:15.593528   25901 out.go:358] Setting ErrFile to fd 2...
I0923 23:59:15.593532   25901 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 23:59:15.593710   25901 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7623/.minikube/bin
I0923 23:59:15.594294   25901 config.go:182] Loaded profile config "functional-666615": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0923 23:59:15.594391   25901 config.go:182] Loaded profile config "functional-666615": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0923 23:59:15.594724   25901 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0923 23:59:15.594760   25901 main.go:141] libmachine: Launching plugin server for driver kvm2
I0923 23:59:15.610447   25901 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39237
I0923 23:59:15.611067   25901 main.go:141] libmachine: () Calling .GetVersion
I0923 23:59:15.611745   25901 main.go:141] libmachine: Using API Version  1
I0923 23:59:15.611778   25901 main.go:141] libmachine: () Calling .SetConfigRaw
I0923 23:59:15.612195   25901 main.go:141] libmachine: () Calling .GetMachineName
I0923 23:59:15.612445   25901 main.go:141] libmachine: (functional-666615) Calling .GetState
I0923 23:59:15.614612   25901 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0923 23:59:15.614669   25901 main.go:141] libmachine: Launching plugin server for driver kvm2
I0923 23:59:15.629822   25901 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34555
I0923 23:59:15.630269   25901 main.go:141] libmachine: () Calling .GetVersion
I0923 23:59:15.630829   25901 main.go:141] libmachine: Using API Version  1
I0923 23:59:15.630852   25901 main.go:141] libmachine: () Calling .SetConfigRaw
I0923 23:59:15.631178   25901 main.go:141] libmachine: () Calling .GetMachineName
I0923 23:59:15.631363   25901 main.go:141] libmachine: (functional-666615) Calling .DriverName
I0923 23:59:15.631533   25901 ssh_runner.go:195] Run: systemctl --version
I0923 23:59:15.631568   25901 main.go:141] libmachine: (functional-666615) Calling .GetSSHHostname
I0923 23:59:15.634812   25901 main.go:141] libmachine: (functional-666615) DBG | domain functional-666615 has defined MAC address 52:54:00:6f:69:ec in network mk-functional-666615
I0923 23:59:15.635336   25901 main.go:141] libmachine: (functional-666615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:69:ec", ip: ""} in network mk-functional-666615: {Iface:virbr1 ExpiryTime:2024-09-24 00:55:46 +0000 UTC Type:0 Mac:52:54:00:6f:69:ec Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:functional-666615 Clientid:01:52:54:00:6f:69:ec}
I0923 23:59:15.635365   25901 main.go:141] libmachine: (functional-666615) DBG | domain functional-666615 has defined IP address 192.168.39.162 and MAC address 52:54:00:6f:69:ec in network mk-functional-666615
I0923 23:59:15.635521   25901 main.go:141] libmachine: (functional-666615) Calling .GetSSHPort
I0923 23:59:15.635692   25901 main.go:141] libmachine: (functional-666615) Calling .GetSSHKeyPath
I0923 23:59:15.635835   25901 main.go:141] libmachine: (functional-666615) Calling .GetSSHUsername
I0923 23:59:15.635995   25901 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/functional-666615/id_rsa Username:docker}
I0923 23:59:15.720032   25901 ssh_runner.go:195] Run: sudo crictl images --output json
I0923 23:59:15.771304   25901 main.go:141] libmachine: Making call to close driver server
I0923 23:59:15.771321   25901 main.go:141] libmachine: (functional-666615) Calling .Close
I0923 23:59:15.771547   25901 main.go:141] libmachine: (functional-666615) DBG | Closing plugin on server side
I0923 23:59:15.771555   25901 main.go:141] libmachine: Successfully made call to close driver server
I0923 23:59:15.771582   25901 main.go:141] libmachine: Making call to close connection to plugin binary
I0923 23:59:15.771591   25901 main.go:141] libmachine: Making call to close driver server
I0923 23:59:15.771599   25901 main.go:141] libmachine: (functional-666615) Calling .Close
I0923 23:59:15.771789   25901 main.go:141] libmachine: Successfully made call to close driver server
I0923 23:59:15.771802   25901 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
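The JSON printed by `image ls --format json` above is an array of objects with id, repoDigests, repoTags, and size fields. A small sketch, assuming exactly that shape, that decodes the output and prints the registry.k8s.io images; the binary path and profile name are again taken from this run.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// image mirrors the fields visible in the JSON output above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // byte count, quoted as a string in the output above
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-666615",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		log.Fatalf("image ls --format json: %v", err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		log.Fatalf("decode: %v", err)
	}
	for _, img := range images {
		for _, tag := range img.RepoTags {
			if strings.HasPrefix(tag, "registry.k8s.io/") {
				fmt.Printf("%s (%s bytes)\n", tag, img.Size)
			}
		}
	}
}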

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-666615 image ls --format yaml --alsologtostderr:
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
- registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "68420934"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-666615
size: "4943877"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "89437508"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "92733849"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3
repoDigests:
- docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3
- docker.io/library/nginx@sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e
repoTags:
- docker.io/library/nginx:latest
size: "191853369"
- id: 0e79f57d0677e1cb2cadb93817c0eeca3b0fc691a2118fe3ced806298325d438
repoDigests:
- localhost/minikube-local-cache-test@sha256:3d26462db8d703c3efc57a9e74b7835adb7168afe8eb1e587d5b02a689108ee1
repoTags:
- localhost/minikube-local-cache-test:functional-666615
size: "3328"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f
repoDigests:
- docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "87190579"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "95237600"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-666615 image ls --format yaml --alsologtostderr:
I0923 23:59:15.310728   25843 out.go:345] Setting OutFile to fd 1 ...
I0923 23:59:15.310837   25843 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 23:59:15.310848   25843 out.go:358] Setting ErrFile to fd 2...
I0923 23:59:15.310852   25843 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 23:59:15.311031   25843 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7623/.minikube/bin
I0923 23:59:15.311656   25843 config.go:182] Loaded profile config "functional-666615": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0923 23:59:15.311762   25843 config.go:182] Loaded profile config "functional-666615": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0923 23:59:15.312129   25843 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0923 23:59:15.312172   25843 main.go:141] libmachine: Launching plugin server for driver kvm2
I0923 23:59:15.327260   25843 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43967
I0923 23:59:15.327888   25843 main.go:141] libmachine: () Calling .GetVersion
I0923 23:59:15.328559   25843 main.go:141] libmachine: Using API Version  1
I0923 23:59:15.328590   25843 main.go:141] libmachine: () Calling .SetConfigRaw
I0923 23:59:15.328922   25843 main.go:141] libmachine: () Calling .GetMachineName
I0923 23:59:15.329112   25843 main.go:141] libmachine: (functional-666615) Calling .GetState
I0923 23:59:15.330975   25843 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0923 23:59:15.331031   25843 main.go:141] libmachine: Launching plugin server for driver kvm2
I0923 23:59:15.345115   25843 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44003
I0923 23:59:15.345485   25843 main.go:141] libmachine: () Calling .GetVersion
I0923 23:59:15.345970   25843 main.go:141] libmachine: Using API Version  1
I0923 23:59:15.345990   25843 main.go:141] libmachine: () Calling .SetConfigRaw
I0923 23:59:15.346441   25843 main.go:141] libmachine: () Calling .GetMachineName
I0923 23:59:15.346624   25843 main.go:141] libmachine: (functional-666615) Calling .DriverName
I0923 23:59:15.346849   25843 ssh_runner.go:195] Run: systemctl --version
I0923 23:59:15.346877   25843 main.go:141] libmachine: (functional-666615) Calling .GetSSHHostname
I0923 23:59:15.350344   25843 main.go:141] libmachine: (functional-666615) DBG | domain functional-666615 has defined MAC address 52:54:00:6f:69:ec in network mk-functional-666615
I0923 23:59:15.350760   25843 main.go:141] libmachine: (functional-666615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:69:ec", ip: ""} in network mk-functional-666615: {Iface:virbr1 ExpiryTime:2024-09-24 00:55:46 +0000 UTC Type:0 Mac:52:54:00:6f:69:ec Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:functional-666615 Clientid:01:52:54:00:6f:69:ec}
I0923 23:59:15.350786   25843 main.go:141] libmachine: (functional-666615) DBG | domain functional-666615 has defined IP address 192.168.39.162 and MAC address 52:54:00:6f:69:ec in network mk-functional-666615
I0923 23:59:15.350919   25843 main.go:141] libmachine: (functional-666615) Calling .GetSSHPort
I0923 23:59:15.351089   25843 main.go:141] libmachine: (functional-666615) Calling .GetSSHKeyPath
I0923 23:59:15.351214   25843 main.go:141] libmachine: (functional-666615) Calling .GetSSHUsername
I0923 23:59:15.351336   25843 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/functional-666615/id_rsa Username:docker}
I0923 23:59:15.446701   25843 ssh_runner.go:195] Run: sudo crictl images --output json
I0923 23:59:15.527650   25843 main.go:141] libmachine: Making call to close driver server
I0923 23:59:15.527662   25843 main.go:141] libmachine: (functional-666615) Calling .Close
I0923 23:59:15.527948   25843 main.go:141] libmachine: (functional-666615) DBG | Closing plugin on server side
I0923 23:59:15.528002   25843 main.go:141] libmachine: Successfully made call to close driver server
I0923 23:59:15.528030   25843 main.go:141] libmachine: Making call to close connection to plugin binary
I0923 23:59:15.528049   25843 main.go:141] libmachine: Making call to close driver server
I0923 23:59:15.528060   25843 main.go:141] libmachine: (functional-666615) Calling .Close
I0923 23:59:15.528427   25843 main.go:141] libmachine: Successfully made call to close driver server
I0923 23:59:15.528476   25843 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)
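The YAML listing above carries the same id/repoDigests/repoTags/size fields as the JSON variant. A sketch, assuming that shape and using the third-party gopkg.in/yaml.v3 package, that totals up the reported image sizes.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strconv"

	"gopkg.in/yaml.v3"
)

// entry matches the fields shown in the YAML listing above.
type entry struct {
	ID       string   `yaml:"id"`
	RepoTags []string `yaml:"repoTags"`
	Size     string   `yaml:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-666615",
		"image", "ls", "--format", "yaml").Output()
	if err != nil {
		log.Fatalf("image ls --format yaml: %v", err)
	}
	var entries []entry
	if err := yaml.Unmarshal(out, &entries); err != nil {
		log.Fatalf("decode: %v", err)
	}
	var total int64
	for _, e := range entries {
		n, _ := strconv.ParseInt(e.Size, 10, 64) // sizes are quoted byte counts
		total += n
	}
	fmt.Printf("%d images, %.1f MB total\n", len(entries), float64(total)/1e6)
}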

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (4.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-666615 ssh pgrep buildkitd: exit status 1 (198.669325ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 image build -t localhost/my-image:functional-666615 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-666615 image build -t localhost/my-image:functional-666615 testdata/build --alsologtostderr: (3.797292188s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-666615 image build -t localhost/my-image:functional-666615 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 72bff0e8d8d
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-666615
--> 0e6ce283b38
Successfully tagged localhost/my-image:functional-666615
0e6ce283b38093085c15c09b5a559468ab86a2938af01e50d328c139ff1bdfd5
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-666615 image build -t localhost/my-image:functional-666615 testdata/build --alsologtostderr:
I0923 23:59:15.753520   25943 out.go:345] Setting OutFile to fd 1 ...
I0923 23:59:15.753675   25943 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 23:59:15.753686   25943 out.go:358] Setting ErrFile to fd 2...
I0923 23:59:15.753692   25943 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 23:59:15.753889   25943 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7623/.minikube/bin
I0923 23:59:15.754502   25943 config.go:182] Loaded profile config "functional-666615": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0923 23:59:15.754996   25943 config.go:182] Loaded profile config "functional-666615": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0923 23:59:15.755388   25943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0923 23:59:15.755432   25943 main.go:141] libmachine: Launching plugin server for driver kvm2
I0923 23:59:15.770690   25943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39483
I0923 23:59:15.771209   25943 main.go:141] libmachine: () Calling .GetVersion
I0923 23:59:15.771802   25943 main.go:141] libmachine: Using API Version  1
I0923 23:59:15.771815   25943 main.go:141] libmachine: () Calling .SetConfigRaw
I0923 23:59:15.772208   25943 main.go:141] libmachine: () Calling .GetMachineName
I0923 23:59:15.772439   25943 main.go:141] libmachine: (functional-666615) Calling .GetState
I0923 23:59:15.774346   25943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0923 23:59:15.774389   25943 main.go:141] libmachine: Launching plugin server for driver kvm2
I0923 23:59:15.790639   25943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38323
I0923 23:59:15.791083   25943 main.go:141] libmachine: () Calling .GetVersion
I0923 23:59:15.791720   25943 main.go:141] libmachine: Using API Version  1
I0923 23:59:15.791755   25943 main.go:141] libmachine: () Calling .SetConfigRaw
I0923 23:59:15.792103   25943 main.go:141] libmachine: () Calling .GetMachineName
I0923 23:59:15.792351   25943 main.go:141] libmachine: (functional-666615) Calling .DriverName
I0923 23:59:15.792528   25943 ssh_runner.go:195] Run: systemctl --version
I0923 23:59:15.792567   25943 main.go:141] libmachine: (functional-666615) Calling .GetSSHHostname
I0923 23:59:15.795942   25943 main.go:141] libmachine: (functional-666615) DBG | domain functional-666615 has defined MAC address 52:54:00:6f:69:ec in network mk-functional-666615
I0923 23:59:15.796440   25943 main.go:141] libmachine: (functional-666615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:69:ec", ip: ""} in network mk-functional-666615: {Iface:virbr1 ExpiryTime:2024-09-24 00:55:46 +0000 UTC Type:0 Mac:52:54:00:6f:69:ec Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:functional-666615 Clientid:01:52:54:00:6f:69:ec}
I0923 23:59:15.796478   25943 main.go:141] libmachine: (functional-666615) DBG | domain functional-666615 has defined IP address 192.168.39.162 and MAC address 52:54:00:6f:69:ec in network mk-functional-666615
I0923 23:59:15.796640   25943 main.go:141] libmachine: (functional-666615) Calling .GetSSHPort
I0923 23:59:15.796809   25943 main.go:141] libmachine: (functional-666615) Calling .GetSSHKeyPath
I0923 23:59:15.797030   25943 main.go:141] libmachine: (functional-666615) Calling .GetSSHUsername
I0923 23:59:15.797270   25943 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/functional-666615/id_rsa Username:docker}
I0923 23:59:15.897870   25943 build_images.go:161] Building image from path: /tmp/build.224860763.tar
I0923 23:59:15.897943   25943 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0923 23:59:15.916284   25943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.224860763.tar
I0923 23:59:15.922767   25943 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.224860763.tar: stat -c "%s %y" /var/lib/minikube/build/build.224860763.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.224860763.tar': No such file or directory
I0923 23:59:15.922804   25943 ssh_runner.go:362] scp /tmp/build.224860763.tar --> /var/lib/minikube/build/build.224860763.tar (3072 bytes)
I0923 23:59:15.989277   25943 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.224860763
I0923 23:59:16.005317   25943 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.224860763 -xf /var/lib/minikube/build/build.224860763.tar
I0923 23:59:16.055982   25943 crio.go:315] Building image: /var/lib/minikube/build/build.224860763
I0923 23:59:16.056037   25943 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-666615 /var/lib/minikube/build/build.224860763 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0923 23:59:19.478958   25943 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-666615 /var/lib/minikube/build/build.224860763 --cgroup-manager=cgroupfs: (3.422888584s)
I0923 23:59:19.479024   25943 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.224860763
I0923 23:59:19.493631   25943 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.224860763.tar
I0923 23:59:19.503530   25943 build_images.go:217] Built localhost/my-image:functional-666615 from /tmp/build.224860763.tar
I0923 23:59:19.503561   25943 build_images.go:133] succeeded building to: functional-666615
I0923 23:59:19.503566   25943 build_images.go:134] failed building to: 
I0923 23:59:19.503585   25943 main.go:141] libmachine: Making call to close driver server
I0923 23:59:19.503596   25943 main.go:141] libmachine: (functional-666615) Calling .Close
I0923 23:59:19.503864   25943 main.go:141] libmachine: (functional-666615) DBG | Closing plugin on server side
I0923 23:59:19.503887   25943 main.go:141] libmachine: Successfully made call to close driver server
I0923 23:59:19.503901   25943 main.go:141] libmachine: Making call to close connection to plugin binary
I0923 23:59:19.503915   25943 main.go:141] libmachine: Making call to close driver server
I0923 23:59:19.503929   25943 main.go:141] libmachine: (functional-666615) Calling .Close
I0923 23:59:19.504144   25943 main.go:141] libmachine: Successfully made call to close driver server
I0923 23:59:19.504160   25943 main.go:141] libmachine: Making call to close connection to plugin binary
I0923 23:59:19.504195   25943 main.go:141] libmachine: (functional-666615) DBG | Closing plugin on server side
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.21s)
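The build log above shows the flow behind `image build` on a crio node: `pgrep buildkitd` fails, so the build context is tarred, copied to /var/lib/minikube/build, unpacked, and built with `sudo podman build ... --cgroup-manager=cgroupfs`. A sketch of exercising the same user-facing command against a throwaway context directory; the Dockerfile contents here are an assumption standing in for testdata/build, which is not reproduced in this report.

package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	// Minimal stand-in build context; the real test uses testdata/build.
	dir, err := os.MkdirTemp("", "build")
	if err != nil {
		log.Fatal(err)
	}
	defer os.RemoveAll(dir)
	dockerfile := "FROM gcr.io/k8s-minikube/busybox\nRUN true\n"
	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
		log.Fatal(err)
	}

	// Roughly: out/minikube-linux-amd64 -p functional-666615 image build -t localhost/my-image:functional-666615 <dir>
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-666615",
		"image", "build", "-t", "localhost/my-image:functional-666615", dir, "--alsologtostderr")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("image build: %v", err)
	}
}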

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.794037249s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-666615
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.81s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 image load --daemon kicbase/echo-server:functional-666615 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-666615 image load --daemon kicbase/echo-server:functional-666615 --alsologtostderr: (1.097824202s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.77s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 image load --daemon kicbase/echo-server:functional-666615 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p functional-666615 image load --daemon kicbase/echo-server:functional-666615 --alsologtostderr: (3.335353314s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.56s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-666615
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 image load --daemon kicbase/echo-server:functional-666615 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.70s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 image save kicbase/echo-server:functional-666615 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.74s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (1.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 image rm kicbase/echo-server:functional-666615 --alsologtostderr
functional_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p functional-666615 image rm kicbase/echo-server:functional-666615 --alsologtostderr: (1.448848325s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.79s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (4.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-linux-amd64 -p functional-666615 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (3.426296874s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (4.07s)
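ImageSaveToFile and ImageLoadFromFile together exercise a tarball round trip: `image save <tag> <file.tar>` followed by `image load <file.tar>`. A sketch of that round trip using the echo-server tag from this run; the binary path and tag are taken from the log, the temp-file location is an assumption.

package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
)

// run executes the minikube binary with the given args, streaming its output.
func run(args ...string) {
	cmd := exec.Command("out/minikube-linux-amd64", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%v: %v", args, err)
	}
}

func main() {
	tar := filepath.Join(os.TempDir(), "echo-server-save.tar")
	// Save the image from the cluster to a local tarball, then load it back and list images.
	run("-p", "functional-666615", "image", "save", "kicbase/echo-server:functional-666615", tar)
	run("-p", "functional-666615", "image", "load", tar)
	run("-p", "functional-666615", "image", "ls")
}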

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (4.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-666615
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-666615 image save --daemon kicbase/echo-server:functional-666615 --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-linux-amd64 -p functional-666615 image save --daemon kicbase/echo-server:functional-666615 --alsologtostderr: (4.089819698s)
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-666615
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (4.13s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-666615
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-666615
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-666615
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (205.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-959539 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0924 00:00:43.333131   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:01:11.043641   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-959539 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m24.866574443s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (205.54s)
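The cluster here is started with --ha, which brings up additional control-plane nodes. A sketch of scripting the same start-and-verify sequence as ha_test.go:101/107; the profile name and flags are copied from this run.

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	profile := "ha-959539" // profile name from this run

	// Roughly: out/minikube-linux-amd64 start -p ha-959539 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 --container-runtime=crio
	start := exec.Command("out/minikube-linux-amd64", "start", "-p", profile,
		"--wait=true", "--memory=2200", "--ha", "-v=7", "--alsologtostderr",
		"--driver=kvm2", "--container-runtime=crio")
	start.Stdout, start.Stderr = os.Stdout, os.Stderr
	if err := start.Run(); err != nil {
		log.Fatalf("start: %v", err)
	}

	// Then check node status, as ha_test.go:107 does.
	status := exec.Command("out/minikube-linux-amd64", "-p", profile, "status", "-v=7", "--alsologtostderr")
	status.Stdout, status.Stderr = os.Stdout, os.Stderr
	if err := status.Run(); err != nil {
		log.Fatalf("status: %v", err)
	}
}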

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (7.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-959539 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-959539 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-959539 -- rollout status deployment/busybox: (4.999794389s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-959539 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-959539 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-959539 -- exec busybox-7dff88458-7q7xr -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-959539 -- exec busybox-7dff88458-m5qhr -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-959539 -- exec busybox-7dff88458-w9v6l -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-959539 -- exec busybox-7dff88458-7q7xr -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-959539 -- exec busybox-7dff88458-m5qhr -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-959539 -- exec busybox-7dff88458-w9v6l -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-959539 -- exec busybox-7dff88458-7q7xr -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-959539 -- exec busybox-7dff88458-m5qhr -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-959539 -- exec busybox-7dff88458-w9v6l -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.11s)

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-959539 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-959539 -- exec busybox-7dff88458-7q7xr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-959539 -- exec busybox-7dff88458-7q7xr -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-959539 -- exec busybox-7dff88458-m5qhr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-959539 -- exec busybox-7dff88458-m5qhr -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-959539 -- exec busybox-7dff88458-w9v6l -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-959539 -- exec busybox-7dff88458-w9v6l -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.19s)
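The check above extracts the host address from inside each pod with `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` (the awk/cut pipeline plucks the resolved address out of busybox's nslookup output) and then pings it once. A sketch of running that same pipeline for a single pod via kubectl; the context and pod name are taken from this run and will differ on a fresh cluster.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	pipeline := "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	out, err := exec.Command("kubectl", "--context", "ha-959539",
		"exec", "busybox-7dff88458-7q7xr", "--", "sh", "-c", pipeline).Output()
	if err != nil {
		log.Fatalf("resolve host.minikube.internal: %v", err)
	}
	hostIP := strings.TrimSpace(string(out))
	fmt.Println("host.minikube.internal resolves to", hostIP)

	// Ping the host once from inside the same pod.
	ping := fmt.Sprintf("ping -c 1 %s", hostIP)
	if err := exec.Command("kubectl", "--context", "ha-959539",
		"exec", "busybox-7dff88458-7q7xr", "--", "sh", "-c", ping).Run(); err != nil {
		log.Fatalf("ping %s: %v", hostIP, err)
	}
	fmt.Println("ping ok")
}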

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (58.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-959539 -v=7 --alsologtostderr
E0924 00:03:38.362359   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/functional-666615/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:03:38.368826   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/functional-666615/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:03:38.380283   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/functional-666615/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:03:38.401746   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/functional-666615/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:03:38.443234   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/functional-666615/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:03:38.524703   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/functional-666615/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:03:38.686501   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/functional-666615/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:03:39.008735   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/functional-666615/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:03:39.650067   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/functional-666615/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:03:40.931528   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/functional-666615/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:03:43.493431   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/functional-666615/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:03:48.615226   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/functional-666615/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-959539 -v=7 --alsologtostderr: (57.924581745s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 status -v=7 --alsologtostderr
E0924 00:03:58.857119   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/functional-666615/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (58.76s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-959539 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 cp testdata/cp-test.txt ha-959539:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 ssh -n ha-959539 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 cp ha-959539:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4152452105/001/cp-test_ha-959539.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 ssh -n ha-959539 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 cp ha-959539:/home/docker/cp-test.txt ha-959539-m02:/home/docker/cp-test_ha-959539_ha-959539-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 ssh -n ha-959539 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 ssh -n ha-959539-m02 "sudo cat /home/docker/cp-test_ha-959539_ha-959539-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 cp ha-959539:/home/docker/cp-test.txt ha-959539-m03:/home/docker/cp-test_ha-959539_ha-959539-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 ssh -n ha-959539 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 ssh -n ha-959539-m03 "sudo cat /home/docker/cp-test_ha-959539_ha-959539-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 cp ha-959539:/home/docker/cp-test.txt ha-959539-m04:/home/docker/cp-test_ha-959539_ha-959539-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 ssh -n ha-959539 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 ssh -n ha-959539-m04 "sudo cat /home/docker/cp-test_ha-959539_ha-959539-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 cp testdata/cp-test.txt ha-959539-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 ssh -n ha-959539-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 cp ha-959539-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4152452105/001/cp-test_ha-959539-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 ssh -n ha-959539-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 cp ha-959539-m02:/home/docker/cp-test.txt ha-959539:/home/docker/cp-test_ha-959539-m02_ha-959539.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 ssh -n ha-959539-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 ssh -n ha-959539 "sudo cat /home/docker/cp-test_ha-959539-m02_ha-959539.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 cp ha-959539-m02:/home/docker/cp-test.txt ha-959539-m03:/home/docker/cp-test_ha-959539-m02_ha-959539-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 ssh -n ha-959539-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 ssh -n ha-959539-m03 "sudo cat /home/docker/cp-test_ha-959539-m02_ha-959539-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 cp ha-959539-m02:/home/docker/cp-test.txt ha-959539-m04:/home/docker/cp-test_ha-959539-m02_ha-959539-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 ssh -n ha-959539-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 ssh -n ha-959539-m04 "sudo cat /home/docker/cp-test_ha-959539-m02_ha-959539-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 cp testdata/cp-test.txt ha-959539-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 ssh -n ha-959539-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 cp ha-959539-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4152452105/001/cp-test_ha-959539-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 ssh -n ha-959539-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 cp ha-959539-m03:/home/docker/cp-test.txt ha-959539:/home/docker/cp-test_ha-959539-m03_ha-959539.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 ssh -n ha-959539-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 ssh -n ha-959539 "sudo cat /home/docker/cp-test_ha-959539-m03_ha-959539.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 cp ha-959539-m03:/home/docker/cp-test.txt ha-959539-m02:/home/docker/cp-test_ha-959539-m03_ha-959539-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 ssh -n ha-959539-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 ssh -n ha-959539-m02 "sudo cat /home/docker/cp-test_ha-959539-m03_ha-959539-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 cp ha-959539-m03:/home/docker/cp-test.txt ha-959539-m04:/home/docker/cp-test_ha-959539-m03_ha-959539-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 ssh -n ha-959539-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 ssh -n ha-959539-m04 "sudo cat /home/docker/cp-test_ha-959539-m03_ha-959539-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 cp testdata/cp-test.txt ha-959539-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 ssh -n ha-959539-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 cp ha-959539-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4152452105/001/cp-test_ha-959539-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 ssh -n ha-959539-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 cp ha-959539-m04:/home/docker/cp-test.txt ha-959539:/home/docker/cp-test_ha-959539-m04_ha-959539.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 ssh -n ha-959539-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 ssh -n ha-959539 "sudo cat /home/docker/cp-test_ha-959539-m04_ha-959539.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 cp ha-959539-m04:/home/docker/cp-test.txt ha-959539-m02:/home/docker/cp-test_ha-959539-m04_ha-959539-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 ssh -n ha-959539-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 ssh -n ha-959539-m02 "sudo cat /home/docker/cp-test_ha-959539-m04_ha-959539-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 cp ha-959539-m04:/home/docker/cp-test.txt ha-959539-m03:/home/docker/cp-test_ha-959539-m04_ha-959539-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 ssh -n ha-959539-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 ssh -n ha-959539-m03 "sudo cat /home/docker/cp-test_ha-959539-m04_ha-959539-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.82s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (4.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (4.170458471s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (4.17s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (16.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-959539 node delete m03 -v=7 --alsologtostderr: (15.768534661s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.50s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.63s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (349.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-959539 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0924 00:15:43.336159   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:18:38.365962   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/functional-666615/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:20:01.429721   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/functional-666615/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:20:43.333462   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-959539 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m48.422861393s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (349.17s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.62s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (75.27s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-959539 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-959539 --control-plane -v=7 --alsologtostderr: (1m14.439690513s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-959539 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (75.27s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.85s)

                                                
                                    
TestJSONOutput/start/Command (82.35s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-191251 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0924 00:23:38.361507   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/functional-666615/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-191251 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m22.345789385s)
--- PASS: TestJSONOutput/start/Command (82.35s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.71s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-191251 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.71s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.59s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-191251 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6.66s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-191251 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-191251 --output=json --user=testUser: (6.663662239s)
--- PASS: TestJSONOutput/stop/Command (6.66s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.19s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-062192 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-062192 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (61.774837ms)

-- stdout --
	{"specversion":"1.0","id":"2d5a1227-d803-497f-9909-2a053817c383","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-062192] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"36f81f8b-f29f-46f0-a5c5-897ce4b69f31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19696"}}
	{"specversion":"1.0","id":"436767db-48bb-47dd-b23f-93b788ca0bf3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a486589e-d228-4a35-87be-a266f9a78ba8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19696-7623/kubeconfig"}}
	{"specversion":"1.0","id":"0982af07-9552-4529-b2b0-ba79284e9d9f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7623/.minikube"}}
	{"specversion":"1.0","id":"2537705f-3399-4ee9-baca-d116e89d201f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"6ff7f3c2-cd2b-4e16-b19e-53a2539898f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8a2453e8-665c-443a-a6bf-29ada2ea50c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-062192" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-062192
--- PASS: TestErrorJSONOutput (0.19s)

                                                
                                    
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (90.22s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-679415 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-679415 --driver=kvm2  --container-runtime=crio: (44.040685105s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-693441 --driver=kvm2  --container-runtime=crio
E0924 00:25:43.336588   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-693441 --driver=kvm2  --container-runtime=crio: (43.15086602s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-679415
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-693441
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-693441" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-693441
helpers_test.go:175: Cleaning up "first-679415" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-679415
--- PASS: TestMinikubeProfile (90.22s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (24.44s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-853782 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-853782 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.436607326s)
--- PASS: TestMountStart/serial/StartWithMountFirst (24.44s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-853782 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-853782 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (27.89s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-866120 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-866120 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.886587078s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.89s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-866120 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-866120 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.7s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-853782 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.70s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-866120 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-866120 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                    
TestMountStart/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-866120
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-866120: (1.26929025s)
--- PASS: TestMountStart/serial/Stop (1.27s)

                                                
                                    
TestMountStart/serial/RestartStopped (22.91s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-866120
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-866120: (21.906661945s)
--- PASS: TestMountStart/serial/RestartStopped (22.91s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-866120 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-866120 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (104.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-246036 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0924 00:28:38.362461   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/functional-666615/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:28:46.407561   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-246036 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m43.732747872s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-246036 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (104.13s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-246036 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-246036 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-246036 -- rollout status deployment/busybox: (4.273985559s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-246036 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-246036 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-246036 -- exec busybox-7dff88458-2cxmd -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-246036 -- exec busybox-7dff88458-b5dpk -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-246036 -- exec busybox-7dff88458-2cxmd -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-246036 -- exec busybox-7dff88458-b5dpk -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-246036 -- exec busybox-7dff88458-2cxmd -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-246036 -- exec busybox-7dff88458-b5dpk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.71s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-246036 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-246036 -- exec busybox-7dff88458-2cxmd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-246036 -- exec busybox-7dff88458-2cxmd -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-246036 -- exec busybox-7dff88458-b5dpk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-246036 -- exec busybox-7dff88458-b5dpk -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.77s)

                                                
                                    
TestMultiNode/serial/AddNode (47.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-246036 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-246036 -v 3 --alsologtostderr: (46.742795509s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-246036 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (47.30s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-246036 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.58s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-246036 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-246036 cp testdata/cp-test.txt multinode-246036:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-246036 ssh -n multinode-246036 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-246036 cp multinode-246036:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile589421806/001/cp-test_multinode-246036.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-246036 ssh -n multinode-246036 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-246036 cp multinode-246036:/home/docker/cp-test.txt multinode-246036-m02:/home/docker/cp-test_multinode-246036_multinode-246036-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-246036 ssh -n multinode-246036 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-246036 ssh -n multinode-246036-m02 "sudo cat /home/docker/cp-test_multinode-246036_multinode-246036-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-246036 cp multinode-246036:/home/docker/cp-test.txt multinode-246036-m03:/home/docker/cp-test_multinode-246036_multinode-246036-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-246036 ssh -n multinode-246036 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-246036 ssh -n multinode-246036-m03 "sudo cat /home/docker/cp-test_multinode-246036_multinode-246036-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-246036 cp testdata/cp-test.txt multinode-246036-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-246036 ssh -n multinode-246036-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-246036 cp multinode-246036-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile589421806/001/cp-test_multinode-246036-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-246036 ssh -n multinode-246036-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-246036 cp multinode-246036-m02:/home/docker/cp-test.txt multinode-246036:/home/docker/cp-test_multinode-246036-m02_multinode-246036.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-246036 ssh -n multinode-246036-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-246036 ssh -n multinode-246036 "sudo cat /home/docker/cp-test_multinode-246036-m02_multinode-246036.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-246036 cp multinode-246036-m02:/home/docker/cp-test.txt multinode-246036-m03:/home/docker/cp-test_multinode-246036-m02_multinode-246036-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-246036 ssh -n multinode-246036-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-246036 ssh -n multinode-246036-m03 "sudo cat /home/docker/cp-test_multinode-246036-m02_multinode-246036-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-246036 cp testdata/cp-test.txt multinode-246036-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-246036 ssh -n multinode-246036-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-246036 cp multinode-246036-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile589421806/001/cp-test_multinode-246036-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-246036 ssh -n multinode-246036-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-246036 cp multinode-246036-m03:/home/docker/cp-test.txt multinode-246036:/home/docker/cp-test_multinode-246036-m03_multinode-246036.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-246036 ssh -n multinode-246036-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-246036 ssh -n multinode-246036 "sudo cat /home/docker/cp-test_multinode-246036-m03_multinode-246036.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-246036 cp multinode-246036-m03:/home/docker/cp-test.txt multinode-246036-m02:/home/docker/cp-test_multinode-246036-m03_multinode-246036-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-246036 ssh -n multinode-246036-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-246036 ssh -n multinode-246036-m02 "sudo cat /home/docker/cp-test_multinode-246036-m03_multinode-246036-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.10s)

                                                
                                    
TestMultiNode/serial/StopNode (2.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-246036 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-246036 node stop m03: (1.353589315s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-246036 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-246036 status: exit status 7 (414.19294ms)

-- stdout --
	multinode-246036
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-246036-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-246036-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-246036 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-246036 status --alsologtostderr: exit status 7 (429.651785ms)

-- stdout --
	multinode-246036
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-246036-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-246036-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0924 00:29:58.211215   43316 out.go:345] Setting OutFile to fd 1 ...
	I0924 00:29:58.211495   43316 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:29:58.211506   43316 out.go:358] Setting ErrFile to fd 2...
	I0924 00:29:58.211510   43316 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:29:58.211700   43316 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7623/.minikube/bin
	I0924 00:29:58.211867   43316 out.go:352] Setting JSON to false
	I0924 00:29:58.211897   43316 mustload.go:65] Loading cluster: multinode-246036
	I0924 00:29:58.212017   43316 notify.go:220] Checking for updates...
	I0924 00:29:58.212294   43316 config.go:182] Loaded profile config "multinode-246036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 00:29:58.212310   43316 status.go:174] checking status of multinode-246036 ...
	I0924 00:29:58.212787   43316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:29:58.212848   43316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:29:58.227903   43316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45783
	I0924 00:29:58.228361   43316 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:29:58.228924   43316 main.go:141] libmachine: Using API Version  1
	I0924 00:29:58.228943   43316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:29:58.229468   43316 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:29:58.229687   43316 main.go:141] libmachine: (multinode-246036) Calling .GetState
	I0924 00:29:58.231349   43316 status.go:364] multinode-246036 host status = "Running" (err=<nil>)
	I0924 00:29:58.231365   43316 host.go:66] Checking if "multinode-246036" exists ...
	I0924 00:29:58.231789   43316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:29:58.231839   43316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:29:58.247173   43316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34083
	I0924 00:29:58.247644   43316 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:29:58.248145   43316 main.go:141] libmachine: Using API Version  1
	I0924 00:29:58.248171   43316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:29:58.248512   43316 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:29:58.248678   43316 main.go:141] libmachine: (multinode-246036) Calling .GetIP
	I0924 00:29:58.251596   43316 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:29:58.252080   43316 main.go:141] libmachine: (multinode-246036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:54:2a", ip: ""} in network mk-multinode-246036: {Iface:virbr1 ExpiryTime:2024-09-24 01:27:24 +0000 UTC Type:0 Mac:52:54:00:a5:54:2a Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:multinode-246036 Clientid:01:52:54:00:a5:54:2a}
	I0924 00:29:58.252123   43316 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined IP address 192.168.39.199 and MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:29:58.252360   43316 host.go:66] Checking if "multinode-246036" exists ...
	I0924 00:29:58.252673   43316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:29:58.252724   43316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:29:58.268167   43316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33331
	I0924 00:29:58.268563   43316 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:29:58.269028   43316 main.go:141] libmachine: Using API Version  1
	I0924 00:29:58.269049   43316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:29:58.269420   43316 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:29:58.269611   43316 main.go:141] libmachine: (multinode-246036) Calling .DriverName
	I0924 00:29:58.269774   43316 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0924 00:29:58.269796   43316 main.go:141] libmachine: (multinode-246036) Calling .GetSSHHostname
	I0924 00:29:58.272826   43316 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:29:58.273283   43316 main.go:141] libmachine: (multinode-246036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:54:2a", ip: ""} in network mk-multinode-246036: {Iface:virbr1 ExpiryTime:2024-09-24 01:27:24 +0000 UTC Type:0 Mac:52:54:00:a5:54:2a Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:multinode-246036 Clientid:01:52:54:00:a5:54:2a}
	I0924 00:29:58.273313   43316 main.go:141] libmachine: (multinode-246036) DBG | domain multinode-246036 has defined IP address 192.168.39.199 and MAC address 52:54:00:a5:54:2a in network mk-multinode-246036
	I0924 00:29:58.273491   43316 main.go:141] libmachine: (multinode-246036) Calling .GetSSHPort
	I0924 00:29:58.273671   43316 main.go:141] libmachine: (multinode-246036) Calling .GetSSHKeyPath
	I0924 00:29:58.273818   43316 main.go:141] libmachine: (multinode-246036) Calling .GetSSHUsername
	I0924 00:29:58.273981   43316 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/multinode-246036/id_rsa Username:docker}
	I0924 00:29:58.359913   43316 ssh_runner.go:195] Run: systemctl --version
	I0924 00:29:58.365954   43316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 00:29:58.383951   43316 kubeconfig.go:125] found "multinode-246036" server: "https://192.168.39.199:8443"
	I0924 00:29:58.383985   43316 api_server.go:166] Checking apiserver status ...
	I0924 00:29:58.384023   43316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 00:29:58.400084   43316 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1048/cgroup
	W0924 00:29:58.409708   43316 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1048/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0924 00:29:58.409775   43316 ssh_runner.go:195] Run: ls
	I0924 00:29:58.414329   43316 api_server.go:253] Checking apiserver healthz at https://192.168.39.199:8443/healthz ...
	I0924 00:29:58.418571   43316 api_server.go:279] https://192.168.39.199:8443/healthz returned 200:
	ok
	I0924 00:29:58.418599   43316 status.go:456] multinode-246036 apiserver status = Running (err=<nil>)
	I0924 00:29:58.418611   43316 status.go:176] multinode-246036 status: &{Name:multinode-246036 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0924 00:29:58.418630   43316 status.go:174] checking status of multinode-246036-m02 ...
	I0924 00:29:58.418990   43316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:29:58.419029   43316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:29:58.435673   43316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37627
	I0924 00:29:58.436121   43316 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:29:58.436728   43316 main.go:141] libmachine: Using API Version  1
	I0924 00:29:58.436752   43316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:29:58.437101   43316 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:29:58.437286   43316 main.go:141] libmachine: (multinode-246036-m02) Calling .GetState
	I0924 00:29:58.438981   43316 status.go:364] multinode-246036-m02 host status = "Running" (err=<nil>)
	I0924 00:29:58.438998   43316 host.go:66] Checking if "multinode-246036-m02" exists ...
	I0924 00:29:58.439296   43316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:29:58.439372   43316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:29:58.455052   43316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44777
	I0924 00:29:58.455680   43316 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:29:58.456130   43316 main.go:141] libmachine: Using API Version  1
	I0924 00:29:58.456154   43316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:29:58.456575   43316 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:29:58.456772   43316 main.go:141] libmachine: (multinode-246036-m02) Calling .GetIP
	I0924 00:29:58.460024   43316 main.go:141] libmachine: (multinode-246036-m02) DBG | domain multinode-246036-m02 has defined MAC address 52:54:00:6b:fb:1e in network mk-multinode-246036
	I0924 00:29:58.460483   43316 main.go:141] libmachine: (multinode-246036-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:fb:1e", ip: ""} in network mk-multinode-246036: {Iface:virbr1 ExpiryTime:2024-09-24 01:28:20 +0000 UTC Type:0 Mac:52:54:00:6b:fb:1e Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:multinode-246036-m02 Clientid:01:52:54:00:6b:fb:1e}
	I0924 00:29:58.460505   43316 main.go:141] libmachine: (multinode-246036-m02) DBG | domain multinode-246036-m02 has defined IP address 192.168.39.150 and MAC address 52:54:00:6b:fb:1e in network mk-multinode-246036
	I0924 00:29:58.460684   43316 host.go:66] Checking if "multinode-246036-m02" exists ...
	I0924 00:29:58.460997   43316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:29:58.461031   43316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:29:58.476813   43316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33705
	I0924 00:29:58.477330   43316 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:29:58.477849   43316 main.go:141] libmachine: Using API Version  1
	I0924 00:29:58.477869   43316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:29:58.478182   43316 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:29:58.478361   43316 main.go:141] libmachine: (multinode-246036-m02) Calling .DriverName
	I0924 00:29:58.478547   43316 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0924 00:29:58.478568   43316 main.go:141] libmachine: (multinode-246036-m02) Calling .GetSSHHostname
	I0924 00:29:58.481163   43316 main.go:141] libmachine: (multinode-246036-m02) DBG | domain multinode-246036-m02 has defined MAC address 52:54:00:6b:fb:1e in network mk-multinode-246036
	I0924 00:29:58.481577   43316 main.go:141] libmachine: (multinode-246036-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:fb:1e", ip: ""} in network mk-multinode-246036: {Iface:virbr1 ExpiryTime:2024-09-24 01:28:20 +0000 UTC Type:0 Mac:52:54:00:6b:fb:1e Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:multinode-246036-m02 Clientid:01:52:54:00:6b:fb:1e}
	I0924 00:29:58.481612   43316 main.go:141] libmachine: (multinode-246036-m02) DBG | domain multinode-246036-m02 has defined IP address 192.168.39.150 and MAC address 52:54:00:6b:fb:1e in network mk-multinode-246036
	I0924 00:29:58.481720   43316 main.go:141] libmachine: (multinode-246036-m02) Calling .GetSSHPort
	I0924 00:29:58.481880   43316 main.go:141] libmachine: (multinode-246036-m02) Calling .GetSSHKeyPath
	I0924 00:29:58.482036   43316 main.go:141] libmachine: (multinode-246036-m02) Calling .GetSSHUsername
	I0924 00:29:58.482170   43316 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19696-7623/.minikube/machines/multinode-246036-m02/id_rsa Username:docker}
	I0924 00:29:58.563040   43316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 00:29:58.577734   43316 status.go:176] multinode-246036-m02 status: &{Name:multinode-246036-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0924 00:29:58.577771   43316 status.go:174] checking status of multinode-246036-m03 ...
	I0924 00:29:58.578089   43316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0924 00:29:58.578134   43316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0924 00:29:58.593902   43316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45053
	I0924 00:29:58.594400   43316 main.go:141] libmachine: () Calling .GetVersion
	I0924 00:29:58.594898   43316 main.go:141] libmachine: Using API Version  1
	I0924 00:29:58.594920   43316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0924 00:29:58.595207   43316 main.go:141] libmachine: () Calling .GetMachineName
	I0924 00:29:58.595411   43316 main.go:141] libmachine: (multinode-246036-m03) Calling .GetState
	I0924 00:29:58.597042   43316 status.go:364] multinode-246036-m03 host status = "Stopped" (err=<nil>)
	I0924 00:29:58.597056   43316 status.go:377] host is not running, skipping remaining checks
	I0924 00:29:58.597069   43316 status.go:176] multinode-246036-m03 status: &{Name:multinode-246036-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.20s)
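The stderr trace above is the whole of the per-node status check: the control-plane node is verified by probing the apiserver healthz endpoint (after the freezer-cgroup lookup fails harmlessly), a worker only has its kubelet service inspected, and a stopped host short-circuits the remaining checks. A minimal hand-run sketch of the same two probes, assuming the profile and the 192.168.39.199 apiserver address from the log, with a plain minikube binary standing in for the out/minikube-linux-amd64 build under test:

# Control-plane health: the same endpoint the status command polls; it answers "ok" when healthy.
$ curl -k https://192.168.39.199:8443/healthz

# Worker health: only the kubelet unit is consulted.
$ minikube -p multinode-246036 ssh -n multinode-246036-m02 "sudo systemctl is-active kubelet"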

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (39.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-246036 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-246036 node start m03 -v=7 --alsologtostderr: (38.824427751s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-246036 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.43s)
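StartAfterStop simply powers the stopped m03 machine back on through the node subcommand and re-checks cluster status; the hand-run equivalent is three commands (again using a plain minikube binary in place of the test build):

$ minikube -p multinode-246036 node start m03
$ minikube -p multinode-246036 status
$ kubectl get nodes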

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-246036 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-246036 node delete m03: (1.48788464s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-246036 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.01s)
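The last assertion above drives kubectl with a go-template rather than jsonpath so it can filter on the Ready condition. Stripped of the extra quoting the harness adds, the same query runs directly against the remaining two nodes:

# Prints the Ready condition status ("True", "False" or "Unknown") for each node, one per line.
$ kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'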

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (176.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-246036 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0924 00:38:38.366383   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/functional-666615/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:40:43.333131   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-246036 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m55.679763988s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-246036 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (176.20s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (44.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-246036
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-246036-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-246036-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (61.089611ms)

                                                
                                                
-- stdout --
	* [multinode-246036-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19696
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19696-7623/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7623/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-246036-m02' is duplicated with machine name 'multinode-246036-m02' in profile 'multinode-246036'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-246036-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-246036-m03 --driver=kvm2  --container-runtime=crio: (43.50108091s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-246036
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-246036: exit status 80 (218.069821ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-246036 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-246036-m03 already exists in multinode-246036-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-246036-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.81s)
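Both non-zero exits above are deliberate guard rails: exit 14 (MK_USAGE) because the requested profile name collides with a machine name that already belongs to the multinode-246036 profile, and exit 80 (GUEST_NODE_ADD) because the next auto-assigned node name is already taken by the standalone multinode-246036-m03 cluster. The first check can be tripped on purpose like this, assuming the multi-node profile is still around:

# multinode-246036 already owns a machine called multinode-246036-m02,
# so this start is rejected before any VM is created.
$ minikube start -p multinode-246036-m02 --driver=kvm2 --container-runtime=crio
$ echo $?        # 14 == MK_USAGE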

                                                
                                    
x
+
TestScheduledStopUnix (113.58s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-074710 --memory=2048 --driver=kvm2  --container-runtime=crio
E0924 00:45:26.409465   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:45:43.336462   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-074710 --memory=2048 --driver=kvm2  --container-runtime=crio: (42.007163144s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-074710 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-074710 -n scheduled-stop-074710
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-074710 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0924 00:46:03.620034   14793 retry.go:31] will retry after 113.364µs: open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/scheduled-stop-074710/pid: no such file or directory
I0924 00:46:03.621217   14793 retry.go:31] will retry after 204.207µs: open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/scheduled-stop-074710/pid: no such file or directory
I0924 00:46:03.622361   14793 retry.go:31] will retry after 310.586µs: open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/scheduled-stop-074710/pid: no such file or directory
I0924 00:46:03.623528   14793 retry.go:31] will retry after 193.302µs: open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/scheduled-stop-074710/pid: no such file or directory
I0924 00:46:03.624654   14793 retry.go:31] will retry after 283.446µs: open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/scheduled-stop-074710/pid: no such file or directory
I0924 00:46:03.625804   14793 retry.go:31] will retry after 485.362µs: open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/scheduled-stop-074710/pid: no such file or directory
I0924 00:46:03.626926   14793 retry.go:31] will retry after 1.146075ms: open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/scheduled-stop-074710/pid: no such file or directory
I0924 00:46:03.629114   14793 retry.go:31] will retry after 2.223558ms: open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/scheduled-stop-074710/pid: no such file or directory
I0924 00:46:03.632360   14793 retry.go:31] will retry after 2.77102ms: open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/scheduled-stop-074710/pid: no such file or directory
I0924 00:46:03.635580   14793 retry.go:31] will retry after 1.936775ms: open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/scheduled-stop-074710/pid: no such file or directory
I0924 00:46:03.637785   14793 retry.go:31] will retry after 6.392801ms: open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/scheduled-stop-074710/pid: no such file or directory
I0924 00:46:03.645051   14793 retry.go:31] will retry after 9.631025ms: open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/scheduled-stop-074710/pid: no such file or directory
I0924 00:46:03.655311   14793 retry.go:31] will retry after 18.311142ms: open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/scheduled-stop-074710/pid: no such file or directory
I0924 00:46:03.674592   14793 retry.go:31] will retry after 21.365405ms: open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/scheduled-stop-074710/pid: no such file or directory
I0924 00:46:03.696877   14793 retry.go:31] will retry after 31.173891ms: open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/scheduled-stop-074710/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-074710 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-074710 -n scheduled-stop-074710
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-074710
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-074710 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-074710
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-074710: exit status 7 (64.3369ms)

                                                
                                                
-- stdout --
	scheduled-stop-074710
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-074710 -n scheduled-stop-074710
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-074710 -n scheduled-stop-074710: exit status 7 (64.548733ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-074710" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-074710
--- PASS: TestScheduledStopUnix (113.58s)
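The run above covers the whole scheduled-stop workflow: a pending stop can be replaced by issuing a new --schedule, watched through the TimeToStop status field, and cleared with --cancel-scheduled; once a schedule fires, status exits 7 and every component reports Stopped. A condensed hand-run version, assuming a running profile named scheduled-stop-074710:

# Schedule a stop five minutes out and inspect the countdown.
$ minikube stop -p scheduled-stop-074710 --schedule 5m
$ minikube status -p scheduled-stop-074710 --format '{{.TimeToStop}}'

# Replace the pending stop with a shorter one, or cancel it outright.
$ minikube stop -p scheduled-stop-074710 --schedule 15s
$ minikube stop -p scheduled-stop-074710 --cancel-scheduled

# After a schedule has fired, the host shows up as Stopped (exit code 7).
$ minikube status -p scheduled-stop-074710 --format '{{.Host}}'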

                                                
                                    
x
+
TestRunningBinaryUpgrade (283.37s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1470896657 start -p running-upgrade-216884 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1470896657 start -p running-upgrade-216884 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m2.820922489s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-216884 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-216884 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (2m36.961329124s)
helpers_test.go:175: Cleaning up "running-upgrade-216884" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-216884
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-216884: (1.341903401s)
--- PASS: TestRunningBinaryUpgrade (283.37s)
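TestRunningBinaryUpgrade boils down to two start calls against one profile: first with an old released binary (v1.26.0 here), then with the freshly built one while the cluster is still running, which forces the new binary to adopt and upgrade the live VM in place. A sketch of that flow, assuming the old release has been saved as /tmp/minikube-v1.26.0 (the suite uses a randomized temp-file name):

# 1. Bring the cluster up with the old release.
$ /tmp/minikube-v1.26.0 start -p running-upgrade-216884 --memory=2200 --vm-driver=kvm2 --container-runtime=crio

# 2. Without stopping it, run start again with the binary under test.
$ minikube start -p running-upgrade-216884 --memory=2200 --driver=kvm2 --container-runtime=crio

# 3. Clean up.
$ minikube delete -p running-upgrade-216884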

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-198857 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-198857 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (81.785281ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-198857] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19696
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19696-7623/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7623/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
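The MK_USAGE exit here is the intended behaviour: --no-kubernetes provisions a bare VM, so pinning a Kubernetes version alongside it is contradictory, and the error points at the global kubernetes-version config as the usual source of the stray value. The two valid variants look like this:

# Either drop the version pin (clearing any global default first)...
$ minikube config unset kubernetes-version
$ minikube start -p NoKubernetes-198857 --no-kubernetes --driver=kvm2 --container-runtime=crio

# ...or keep Kubernetes and pin the version instead.
$ minikube start -p NoKubernetes-198857 --kubernetes-version=v1.31.1 --driver=kvm2 --container-runtime=crio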

                                                
                                    
x
+
TestPause/serial/Start (111.95s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-587180 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-587180 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m51.945340944s)
--- PASS: TestPause/serial/Start (111.95s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (91.15s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-198857 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-198857 --driver=kvm2  --container-runtime=crio: (1m30.891381212s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-198857 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (91.15s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (116.66s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-198857 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-198857 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m55.60501133s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-198857 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-198857 status -o json: exit status 2 (224.77098ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-198857","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-198857
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (116.66s)
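The JSON above is what a Kubernetes-free profile should look like: Host Running and Kubeconfig Configured while Kubelet and APIServer stay Stopped, with the command itself exiting 2 to signal the degraded state. When scripting against it, parsing the JSON is more robust than interpreting the exit code; a small sketch assuming jq is installed on the host:

# Exit status is 2 while kubelet/apiserver are down, but stdout still
# carries the full status object.
$ minikube -p NoKubernetes-198857 status -o json | jq -r '.Host, .Kubelet, .APIServer'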

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (126.65s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-587180 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-587180 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (2m6.619096068s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (126.65s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (33.03s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-198857 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0924 00:50:43.332637   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-198857 --no-kubernetes --driver=kvm2  --container-runtime=crio: (33.028530149s)
--- PASS: TestNoKubernetes/serial/Start (33.03s)

                                                
                                    
x
+
TestPause/serial/Pause (0.72s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-587180 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.72s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.25s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-587180 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-587180 --output=json --layout=cluster: exit status 2 (246.999159ms)

                                                
                                                
-- stdout --
	{"Name":"pause-587180","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-587180","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.25s)
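The --layout=cluster output is the machine-readable view of a paused profile: the cluster and its node report StatusCode 418 ("Paused"), kubelet reports 405 ("Stopped"), and the command exits 2. Asserting the paused state from a script can therefore be as simple as the following, assuming jq and the pause-587180 profile above:

# "Paused" is surfaced as the HTTP-style status code 418.
$ minikube status -p pause-587180 --output=json --layout=cluster | jq -r '.StatusName, .StatusCode'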

                                                
                                    
x
+
TestPause/serial/Unpause (0.65s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-587180 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.65s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.83s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-587180 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.83s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-198857 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-198857 "sudo systemctl is-active --quiet service kubelet": exit status 1 (204.980179ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)
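The wrapper's exit status 1 and the "Process exited with status 3" in stderr are two layers of the same signal: systemctl is-active --quiet prints nothing and returns 0 only for an active unit, returns 3 for an inactive one, and minikube ssh folds any remote failure into its own exit code 1. Checked by hand against the same profile:

# Exit 0 => kubelet active; non-zero => not running (3 from systemctl on the guest).
$ minikube ssh -p NoKubernetes-198857 "sudo systemctl is-active --quiet service kubelet"
$ echo $?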

                                                
                                    
x
+
TestPause/serial/DeletePaused (1.01s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-587180 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-587180 --alsologtostderr -v=5: (1.013010068s)
--- PASS: TestPause/serial/DeletePaused (1.01s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (28.52s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (14.806547292s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (13.71375456s)
--- PASS: TestNoKubernetes/serial/ProfileList (28.52s)
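profile list is exercised twice, once as a table and once as JSON; the JSON form is the one worth scripting against. Assuming the usual top-level valid/invalid arrays in that output and jq on the host, the surviving profile names can be pulled out directly:

# Each entry under .valid is a profile object with a Name field.
$ minikube profile list --output=json | jq -r '.valid[].Name'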

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (14.78s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (14.78401842s)
--- PASS: TestPause/serial/VerifyDeletedResources (14.78s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-198857
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-198857: (1.291236537s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-447054 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-447054 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (106.863227ms)

                                                
                                                
-- stdout --
	* [false-447054] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19696
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19696-7623/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7623/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0924 00:52:04.000685   54333 out.go:345] Setting OutFile to fd 1 ...
	I0924 00:52:04.000797   54333 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:52:04.000808   54333 out.go:358] Setting ErrFile to fd 2...
	I0924 00:52:04.000813   54333 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:52:04.001005   54333 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7623/.minikube/bin
	I0924 00:52:04.001590   54333 out.go:352] Setting JSON to false
	I0924 00:52:04.003136   54333 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5668,"bootTime":1727133456,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0924 00:52:04.003252   54333 start.go:139] virtualization: kvm guest
	I0924 00:52:04.006074   54333 out.go:177] * [false-447054] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0924 00:52:04.007501   54333 notify.go:220] Checking for updates...
	I0924 00:52:04.007519   54333 out.go:177]   - MINIKUBE_LOCATION=19696
	I0924 00:52:04.009144   54333 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 00:52:04.010679   54333 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19696-7623/kubeconfig
	I0924 00:52:04.012187   54333 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7623/.minikube
	I0924 00:52:04.013669   54333 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0924 00:52:04.014999   54333 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 00:52:04.017098   54333 config.go:182] Loaded profile config "NoKubernetes-198857": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0924 00:52:04.017306   54333 config.go:182] Loaded profile config "force-systemd-env-762606": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0924 00:52:04.017431   54333 config.go:182] Loaded profile config "kubernetes-upgrade-619300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0924 00:52:04.017552   54333 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 00:52:04.055863   54333 out.go:177] * Using the kvm2 driver based on user configuration
	I0924 00:52:04.057294   54333 start.go:297] selected driver: kvm2
	I0924 00:52:04.057309   54333 start.go:901] validating driver "kvm2" against <nil>
	I0924 00:52:04.057322   54333 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 00:52:04.059542   54333 out.go:201] 
	W0924 00:52:04.060930   54333 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0924 00:52:04.062394   54333 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-447054 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-447054

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-447054

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-447054

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-447054

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-447054

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-447054

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-447054

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-447054

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-447054

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-447054

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447054"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447054"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447054"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-447054

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447054"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447054"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-447054" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-447054" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-447054" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-447054" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-447054" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-447054" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-447054" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-447054" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447054"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447054"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447054"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447054"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447054"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-447054" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-447054" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-447054" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447054"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447054"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447054"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447054"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447054"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-447054

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447054"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447054"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447054"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447054"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447054"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447054"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447054"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447054"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447054"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447054"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447054"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447054"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447054"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447054"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447054"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447054"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447054"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447054"

                                                
                                                
----------------------- debugLogs end: false-447054 [took: 3.027560327s] --------------------------------
helpers_test.go:175: Cleaning up "false-447054" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-447054
--- PASS: TestNetworkPlugins/group/false (3.30s)
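The only substantive check in this short test is the validation error near the top of the trace: CRI-O ships no built-in networking, so minikube rejects --cni=false with MK_USAGE before any VM is created; the long debugLogs dump that follows merely confirms that no false-447054 profile or kube context was left behind. Starting the same configuration with an explicit CNI is the accepted form (bridge is used purely as an illustration; any supported --cni value works):

# Rejected: the crio runtime requires some CNI plugin.
$ minikube start -p false-447054 --cni=false --driver=kvm2 --container-runtime=crio

# Accepted: name a CNI explicitly, or omit --cni entirely for the default.
$ minikube start -p false-447054 --cni=bridge --driver=kvm2 --container-runtime=crio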

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.33s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.33s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (160.05s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.820212222 start -p stopped-upgrade-075175 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.820212222 start -p stopped-upgrade-075175 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m42.471995431s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.820212222 -p stopped-upgrade-075175 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.820212222 -p stopped-upgrade-075175 stop: (2.160362498s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-075175 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-075175 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (55.420488321s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (160.05s)
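This variant differs from TestRunningBinaryUpgrade only in that the cluster is stopped before the new binary takes over, so the upgrade path starts from cold state instead of a live apiserver. Condensed, again assuming the old release is saved as /tmp/minikube-v1.26.0:

$ /tmp/minikube-v1.26.0 start -p stopped-upgrade-075175 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
$ /tmp/minikube-v1.26.0 -p stopped-upgrade-075175 stop
$ minikube start -p stopped-upgrade-075175 --memory=2200 --driver=kvm2 --container-runtime=crio
$ minikube logs -p stopped-upgrade-075175      # what the MinikubeLogs step below verifies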

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.88s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-075175
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.88s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (74.65s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-674057 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-674057 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (1m14.645459746s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (74.65s)
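Because --preload=false skips the preloaded image tarball, the container runtime has to pull every image itself; one rough way to sanity-check that afterwards is to list the images cached in the profile (the image list subcommand is the same one used later in this report for VerifyKubernetesImages):

  out/minikube-linux-amd64 start -p no-preload-674057 --memory=2200 --preload=false \
    --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.31.1
  out/minikube-linux-amd64 -p no-preload-674057 image list --format=json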

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (78.36s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-650507 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E0924 00:55:43.332583   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-650507 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (1m18.358583781s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (78.36s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (11.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-674057 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a59c44b3-e050-4a26-9073-1e503898be99] Pending
helpers_test.go:344: "busybox" [a59c44b3-e050-4a26-9073-1e503898be99] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a59c44b3-e050-4a26-9073-1e503898be99] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.004212322s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-674057 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.31s)
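The DeployApp step boils down to creating a busybox pod from the test's manifest, waiting for it to become Ready, and reading the open-file limit inside it. A minimal sketch, assuming it is run from the minikube test tree (testdata/busybox.yaml is relative to that tree) and substituting kubectl wait for the helper's label-based polling:

  kubectl --context no-preload-674057 create -f testdata/busybox.yaml
  kubectl --context no-preload-674057 wait --for=condition=Ready pod/busybox --timeout=8m
  kubectl --context no-preload-674057 exec busybox -- /bin/sh -c "ulimit -n"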

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.00s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-674057 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-674057 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.00s)
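EnableAddonWhileActive turns on the metrics-server addon with its image and registry overridden (echoserver:1.4 pulled from fake.domain), then inspects the resulting Deployment. A sketch; the trailing grep is only an assumption added here to pull the image line out of the describe output:

  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-674057 \
    --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
  kubectl --context no-preload-674057 describe deploy/metrics-server -n kube-system | grep -i image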

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (10.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-650507 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [08cee91e-9f7f-4af3-8f9e-f21b21ac1116] Pending
helpers_test.go:344: "busybox" [08cee91e-9f7f-4af3-8f9e-f21b21ac1116] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [08cee91e-9f7f-4af3-8f9e-f21b21ac1116] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004355847s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-650507 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.32s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (55.61s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-465341 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-465341 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (55.610422028s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (55.61s)
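The only difference from the other profiles in this group is --apiserver-port=8444. A hedged way to confirm the non-default port actually took effect is to look at the API server URL the kubeconfig context points at (kubectl cluster-info prints it):

  out/minikube-linux-amd64 start -p default-k8s-diff-port-465341 --memory=2200 --apiserver-port=8444 \
    --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.31.1
  kubectl --context default-k8s-diff-port-465341 cluster-info   # server URL should end in :8444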

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.94s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-650507 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-650507 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.94s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-465341 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b2640213-e0c5-4e24-ab47-40ae93cf2dec] Pending
helpers_test.go:344: "busybox" [b2640213-e0c5-4e24-ab47-40ae93cf2dec] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b2640213-e0c5-4e24-ab47-40ae93cf2dec] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.006895684s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-465341 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.27s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.95s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-465341 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-465341 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.95s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (717.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-674057 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-674057 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (11m56.929870371s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-674057 -n no-preload-674057
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (717.18s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (601.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-650507 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-650507 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (10m0.818605445s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-650507 -n embed-certs-650507
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (601.07s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (528.39s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-465341 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-465341 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (8m48.128102734s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-465341 -n default-k8s-diff-port-465341
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (528.39s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (3.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-171598 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-171598 --alsologtostderr -v=3: (3.285129948s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.29s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-171598 -n old-k8s-version-171598
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-171598 -n old-k8s-version-171598: exit status 7 (63.998216ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-171598 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
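EnableAddonAfterStop verifies that addon toggling still works against a stopped profile: status exits non-zero (7 in this run) with Host reported as Stopped, which the test tolerates, and the dashboard addon is then enabled anyway. Roughly:

  out/minikube-linux-amd64 status --format='{{.Host}}' -p old-k8s-version-171598 -n old-k8s-version-171598 \
    || echo "exit $? (expected 7 while stopped)"
  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-171598 \
    --images=MetricsScraper=registry.k8s.io/echoserver:1.4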

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (47.15s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-185978 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-185978 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (47.145343069s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (47.15s)
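The newest-cni profile starts with CNI as the network plugin, a kubeadm pod-network-cidr override, and readiness gated only on the apiserver, system pods, and the default service account. A sketch, where the final jsonpath check is an assumption added here to confirm the CIDR override landed on the node:

  out/minikube-linux-amd64 start -p newest-cni-185978 --memory=2200 \
    --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true \
    --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
    --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.31.1
  kubectl --context newest-cni-185978 get node -o jsonpath='{.items[0].spec.podCIDR}'   # expect a slice of 10.42.0.0/16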

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (82.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-447054 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-447054 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m22.235087903s)
--- PASS: TestNetworkPlugins/group/auto/Start (82.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (75.60s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-447054 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-447054 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m15.599422409s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (75.60s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-185978 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-185978 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.280755722s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.28s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (10.81s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-185978 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-185978 --alsologtostderr -v=3: (10.811798499s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.81s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-185978 -n newest-cni-185978
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-185978 -n newest-cni-185978: exit status 7 (87.697218ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-185978 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (53.05s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-185978 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E0924 01:25:43.333278   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/addons-823099/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:26:15.080944   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:26:15.087291   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:26:15.098689   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:26:15.120169   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:26:15.161638   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:26:15.243719   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:26:15.406117   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:26:15.727941   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:26:16.370491   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:26:17.652473   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:26:20.214019   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:26:25.336036   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-185978 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (52.655689723s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-185978 -n newest-cni-185978
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (53.05s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-447054 "pgrep -a kubelet"
I0924 01:26:25.945123   14793 config.go:182] Loaded profile config "auto-447054": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)
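KubeletFlags simply SSHes into the node and dumps the kubelet command line. Splitting that line onto one flag per line makes the runtime- and cgroup-related arguments easier to scan; the tr/grep post-processing below is an assumption, not part of the test:

  out/minikube-linux-amd64 ssh -p auto-447054 "pgrep -a kubelet"
  out/minikube-linux-amd64 ssh -p auto-447054 "pgrep -a kubelet" | tr ' ' '\n' | grep -i -E 'runtime|cgroup'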

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (12.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-447054 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-t4vzh" [991e5a36-e6bd-44f4-816b-5cd27edcd787] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-t4vzh" [991e5a36-e6bd-44f4-816b-5cd27edcd787] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.014924103s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.27s)
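NetCatPod force-replaces the netcat Deployment from the test's manifest and then polls for an app=netcat pod. Outside the harness, rollout status gives roughly the same wait (a sketch; testdata/netcat-deployment.yaml lives in the minikube test tree):

  kubectl --context auto-447054 replace --force -f testdata/netcat-deployment.yaml
  kubectl --context auto-447054 rollout status deployment/netcat --timeout=15m
  kubectl --context auto-447054 get pods -l app=netcat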

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-qq9rr" [2160e9e5-753b-4e24-af3f-876ea1abff11] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.0079116s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-185978 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (4.59s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-185978 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-185978 --alsologtostderr -v=1: (1.70196406s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-185978 -n newest-cni-185978
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-185978 -n newest-cni-185978: exit status 2 (309.096024ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-185978 -n newest-cni-185978
I0924 01:26:35.055099   14793 config.go:182] Loaded profile config "kindnet-447054": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-185978 -n newest-cni-185978: exit status 2 (442.999163ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-185978 --alsologtostderr -v=1
E0924 01:26:35.577832   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 unpause -p newest-cni-185978 --alsologtostderr -v=1: (1.03483564s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-185978 -n newest-cni-185978
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-185978 -n newest-cni-185978
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.59s)
E0924 01:28:48.572315   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/client.crt: no such file or directory" logger="UnhandledError"
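The Pause test expects the API server to report Paused while the kubelet reports Stopped (both status calls exit 2 in this run, which the test tolerates), and both to recover after unpause. Condensed, the sequence exercised above is roughly:

  out/minikube-linux-amd64 pause -p newest-cni-185978 --alsologtostderr -v=1
  out/minikube-linux-amd64 status --format='{{.APIServer}}' -p newest-cni-185978 -n newest-cni-185978   # Paused in this run
  out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p newest-cni-185978 -n newest-cni-185978     # Stopped in this run
  out/minikube-linux-amd64 unpause -p newest-cni-185978 --alsologtostderr -v=1
  out/minikube-linux-amd64 status --format='{{.APIServer}}' -p newest-cni-185978 -n newest-cni-185978
  out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p newest-cni-185978 -n newest-cni-185978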

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (83.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-447054 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-447054 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m23.344939478s)
--- PASS: TestNetworkPlugins/group/calico/Start (83.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-447054 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-447054 replace --force -f testdata/netcat-deployment.yaml
I0924 01:26:35.358638   14793 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-wwc66" [b3355c1e-7ad0-407f-96f3-ab2671e4d8ff] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-wwc66" [b3355c1e-7ad0-407f-96f3-ab2671e4d8ff] Running
E0924 01:26:41.437032   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/functional-666615/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004888001s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-447054 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-447054 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-447054 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)
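The three probes above cover the usual in-pod checks for a network plugin: cluster DNS, plain localhost, and hairpin (a pod reaching itself through its own Service). Run back to back against the same netcat deployment they are:

  kubectl --context auto-447054 exec deployment/netcat -- nslookup kubernetes.default
  kubectl --context auto-447054 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  kubectl --context auto-447054 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"   # hairpin via the netcat Service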

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (98.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-447054 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-447054 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m38.317287959s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (98.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-447054 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-447054 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-447054 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (82.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-447054 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E0924 01:26:56.059119   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-447054 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m22.583924533s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (82.58s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (123.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-447054 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E0924 01:27:26.632468   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:27:26.638809   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:27:26.650246   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:27:26.671867   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:27:26.713843   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:27:26.795464   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:27:26.957075   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:27:27.279285   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:27:27.920728   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:27:29.202472   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:27:31.764598   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:27:36.886498   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:27:37.020948   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/no-preload-674057/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:27:47.128675   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:27:47.479433   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:27:47.485868   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:27:47.497281   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:27:47.518705   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:27:47.560436   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:27:47.642172   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:27:47.804163   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:27:48.126010   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:27:48.768232   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:27:50.050257   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:27:52.612873   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:27:57.734406   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-447054 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (2m3.521337685s)
--- PASS: TestNetworkPlugins/group/flannel/Start (123.52s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-c6494" [33662a3a-848d-4f6b-bb8f-726b7026fd3b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006761079s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-447054 "pgrep -a kubelet"
I0924 01:28:04.123274   14793 config.go:182] Loaded profile config "calico-447054": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-447054 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-cctxs" [2c8efb84-62c5-4259-ab50-e065549aa75e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0924 01:28:07.610528   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:28:07.975778   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-cctxs" [2c8efb84-62c5-4259-ab50-e065549aa75e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.005339486s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-447054 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-447054 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-447054 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-447054 "pgrep -a kubelet"
I0924 01:28:16.977830   14793 config.go:182] Loaded profile config "enable-default-cni-447054": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-447054 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-f9969" [b108f5ec-8a51-4e5c-b962-8646b6a481c5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-f9969" [b108f5ec-8a51-4e5c-b962-8646b6a481c5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.004913821s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-447054 "pgrep -a kubelet"
I0924 01:28:17.934940   14793 config.go:182] Loaded profile config "custom-flannel-447054": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-447054 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-2rb56" [6b21b869-cbad-4ce1-acb1-ec5da0916f8c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-2rb56" [6b21b869-cbad-4ce1-acb1-ec5da0916f8c] Running
E0924 01:28:28.457459   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.004431868s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (26.10s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-447054 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context enable-default-cni-447054 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.159808542s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0924 01:28:44.363354   14793 retry.go:31] will retry after 739.803449ms: exit status 1
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-447054 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Done: kubectl --context enable-default-cni-447054 exec deployment/netcat -- nslookup kubernetes.default: (10.195418936s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (26.10s)
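The 26s here is one failed lookup (connection timed out) plus the helper's automatic retry, after which the lookup succeeds. Outside the test harness a small loop gives similar tolerance for slow DNS propagation (a sketch, not the helper's exact backoff):

  for i in 1 2 3; do
    kubectl --context enable-default-cni-447054 exec deployment/netcat -- nslookup kubernetes.default && break
    sleep 5
  done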

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-447054 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-447054 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-447054 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)
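
Note: the Localhost and HairPin checks above both exec netcat inside the deployment and differ only in the target (localhost vs. the service name netcat, which sends the pod back to itself through its own service). A minimal Go sketch of that probe, assuming kubectl is on PATH; ncProbe is a hypothetical name, with the nc flags copied from the log:

package main

import (
	"fmt"
	"os/exec"
)

// ncProbe runs "nc -w 5 -i 5 -z <target> 8080" inside the netcat deployment
// and returns an error if the TCP connection cannot be opened.
func ncProbe(kubeContext, target string) error {
	script := fmt.Sprintf("nc -w 5 -i 5 -z %s 8080", target)
	out, err := exec.Command("kubectl", "--context", kubeContext,
		"exec", "deployment/netcat", "--", "/bin/sh", "-c", script).CombinedOutput()
	if err != nil {
		return fmt.Errorf("nc to %s failed: %v\n%s", target, err, out)
	}
	return nil
}

func main() {
	for _, target := range []string{"localhost", "netcat"} {
		fmt.Println(target, ncProbe("custom-flannel-447054", target))
	}
}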

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (86.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-447054 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E0924 01:28:38.361887   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/functional-666615/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-447054 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m26.858662641s)
--- PASS: TestNetworkPlugins/group/bridge/Start (86.86s)
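
Note: the Start step is a single invocation of the minikube binary with the flags shown in the command line above. A minimal Go sketch of driving it programmatically; the binary path and profile name are taken from the log, and startBridgeCluster is a hypothetical wrapper, not the test's own code:

package main

import (
	"fmt"
	"os/exec"
)

// startBridgeCluster shells out to minikube with the same flags as the Start
// step above and surfaces the combined output on failure.
func startBridgeCluster(minikubeBin, profile string) error {
	cmd := exec.Command(minikubeBin, "start", "-p", profile,
		"--memory=3072", "--alsologtostderr",
		"--wait=true", "--wait-timeout=15m",
		"--cni=bridge", "--driver=kvm2", "--container-runtime=crio")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("minikube start failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	fmt.Println(startBridgeCluster("out/minikube-linux-amd64", "bridge-447054"))
}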

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-447054 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-447054 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-9vxzn" [436b0d47-c443-47e5-b13b-1efb3c1170f7] Running
E0924 01:29:09.418802   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/old-k8s-version-171598/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.006172354s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-447054 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.20s)
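
Note: KubeletFlags only needs the kubelet command line from the node, fetched with "pgrep -a kubelet" over minikube ssh as shown above. A minimal sketch of the same call; the binary path comes from the log and kubeletCmdline is a hypothetical helper:

package main

import (
	"fmt"
	"os/exec"
)

// kubeletCmdline returns the kubelet process command line on the cluster node.
func kubeletCmdline(minikubeBin, profile string) (string, error) {
	out, err := exec.Command(minikubeBin, "ssh", "-p", profile, "pgrep -a kubelet").Output()
	return string(out), err
}

func main() {
	flags, err := kubeletCmdline("out/minikube-linux-amd64", "flannel-447054")
	fmt.Println(flags, err)
}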

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-447054 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-66dct" [cc6019e9-76a7-44a6-90ad-81af5cf352d3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-66dct" [cc6019e9-76a7-44a6-90ad-81af5cf352d3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003686902s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.21s)
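
Note: the NetCatPod step replaces the deployment from testdata/netcat-deployment.yaml and then waits for a pod labelled app=netcat to report Running. A minimal Go polling sketch of that wait, assuming kubectl is on PATH; waitForNetcat is a hypothetical helper, not the waiter in helpers_test.go:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForNetcat polls the app=netcat pods in the default namespace until at
// least one reports phase Running or the timeout expires.
func waitForNetcat(kubeContext string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pods", "-l", "app=netcat", "-n", "default",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.Contains(string(out), "Running") {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("app=netcat never became Running within %s", timeout)
}

func main() {
	fmt.Println(waitForNetcat("flannel-447054", 15*time.Minute))
}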

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-447054 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-447054 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-447054 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-447054 "pgrep -a kubelet"
I0924 01:30:00.431121   14793 config.go:182] Loaded profile config "bridge-447054": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-447054 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-64g9f" [77a23ed8-97c0-4e95-af12-bb579d921c00] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-64g9f" [77a23ed8-97c0-4e95-af12-bb579d921c00] Running
E0924 01:30:10.495154   14793 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7623/.minikube/profiles/default-k8s-diff-port-465341/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004146397s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-447054 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-447054 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-447054 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    

Test skip (37/316)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.31.1/cached-images 0
15 TestDownloadOnly/v1.31.1/binaries 0
16 TestDownloadOnly/v1.31.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0
37 TestAddons/parallel/Olm 0
47 TestDockerFlags 0
50 TestDockerEnvContainerd 0
52 TestHyperKitDriverInstallOrUpdate 0
53 TestHyperkitDriverSkipUpgrade 0
104 TestFunctional/parallel/DockerEnv 0
105 TestFunctional/parallel/PodmanEnv 0
123 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
124 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
125 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
127 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
129 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
130 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
153 TestGvisorAddon 0
175 TestImageBuild 0
202 TestKicCustomNetwork 0
203 TestKicExistingNetwork 0
204 TestKicCustomSubnet 0
205 TestKicStaticIP 0
237 TestChangeNoneUser 0
240 TestScheduledStopWindows 0
242 TestSkaffold 0
244 TestInsufficientStorage 0
248 TestMissingContainerUpgrade 0
256 TestStartStop/group/disable-driver-mounts 0.15
276 TestNetworkPlugins/group/kubenet 5.73
284 TestNetworkPlugins/group/cilium 3.28
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:817: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-319683" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-319683
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (5.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-447054 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-447054

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-447054

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-447054

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-447054

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-447054

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-447054

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-447054

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-447054

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-447054

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-447054

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447054"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447054"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447054"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-447054

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447054"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447054"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-447054" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-447054" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-447054" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-447054" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-447054" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-447054" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-447054" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-447054" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447054"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447054"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447054"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447054"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447054"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-447054" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-447054" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-447054" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447054"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447054"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447054"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447054"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447054"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-447054

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447054"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447054"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447054"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447054"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447054"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447054"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447054"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447054"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447054"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447054"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447054"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447054"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447054"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447054"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447054"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447054"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447054"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447054"

                                                
                                                
----------------------- debugLogs end: kubenet-447054 [took: 5.575305091s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-447054" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-447054
--- SKIP: TestNetworkPlugins/group/kubenet (5.73s)
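
Note: even for a skipped network-plugin group, the report emits the debugLogs battery above (kubectl queries plus host files and daemon state over minikube ssh) and then deletes the profile. A minimal Go sketch of that collect-then-cleanup flow, using a small illustrative subset of the commands; collectAndCleanup is a hypothetical name:

package main

import (
	"fmt"
	"os/exec"
)

// collectAndCleanup runs a few diagnostic commands for a profile (ignoring
// failures, since the profile may never have been created) and then deletes
// the profile, mirroring the "Cleaning up ... profile" step above.
func collectAndCleanup(minikubeBin, profile string) {
	diags := [][]string{
		{"kubectl", "--context", profile, "get", "nodes,svc,endpoints,ds,deploy,pods", "-A"},
		{minikubeBin, "ssh", "-p", profile, "sudo crictl pods"},
		{minikubeBin, "ssh", "-p", profile, "cat /etc/resolv.conf"},
	}
	for _, d := range diags {
		out, err := exec.Command(d[0], d[1:]...).CombinedOutput()
		fmt.Printf(">>> %v\n%s(err: %v)\n", d, out, err)
	}
	_ = exec.Command(minikubeBin, "delete", "-p", profile).Run()
}

func main() {
	collectAndCleanup("out/minikube-linux-amd64", "kubenet-447054")
}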

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-447054 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-447054

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-447054

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-447054

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-447054

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-447054

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-447054

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-447054

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-447054

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-447054

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-447054

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447054"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447054"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447054"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-447054

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447054"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447054"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-447054" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-447054" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-447054" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-447054" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-447054" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-447054" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-447054" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-447054" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447054"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447054"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447054"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447054"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447054"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-447054

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-447054

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-447054" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-447054" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-447054

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-447054

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-447054" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-447054" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-447054" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-447054" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-447054" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447054"

>>> host: kubelet daemon config:
* Profile "cilium-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447054"

>>> k8s: kubelet logs:
* Profile "cilium-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447054"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447054"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447054"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-447054

>>> host: docker daemon status:
* Profile "cilium-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447054"

>>> host: docker daemon config:
* Profile "cilium-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447054"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447054"

>>> host: docker system info:
* Profile "cilium-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447054"

>>> host: cri-docker daemon status:
* Profile "cilium-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447054"

>>> host: cri-docker daemon config:
* Profile "cilium-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447054"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447054"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447054"

>>> host: cri-dockerd version:
* Profile "cilium-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447054"

>>> host: containerd daemon status:
* Profile "cilium-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447054"

>>> host: containerd daemon config:
* Profile "cilium-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447054"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447054"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447054"

>>> host: containerd config dump:
* Profile "cilium-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447054"

>>> host: crio daemon status:
* Profile "cilium-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447054"

>>> host: crio daemon config:
* Profile "cilium-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447054"

>>> host: /etc/crio:
* Profile "cilium-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447054"

>>> host: crio config:
* Profile "cilium-447054" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447054"

----------------------- debugLogs end: cilium-447054 [took: 3.122962585s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-447054" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-447054
--- SKIP: TestNetworkPlugins/group/cilium (3.28s)
